Metadata Matters | it's all about the services

It's not just me that's getting old

Having just celebrated (?) another birthday at the tail end of the year, the topics of age and change have been even more on my mind than usual. And then two events converged. First I had a chat with Ted Fons in a hallway at Midwinter, and he asked about using an older article I'd published with Karen Coyle ("Resource Description and Access (RDA): Cataloging Rules for the 20th Century"). The second was a message from ResearchGate reporting that the article in question was easily the most popular thing I'd ever published. My big worry about having Ted use that article was that RDA had gone through several sea changes in the nine (!) years since the article was published (Jan./Feb. ), so I cautioned Ted about using it. Then I decided I needed to reread the article and see whether I had spoken too soon.

The historic rationale holds up very well, but it's important to note that at the time the article was written, the JSC (now the RSC) was foundering, reluctant to make the changes needed to cut ties to AACR2. The quotes from the CC:DA illustrate how deep the frustration was at that time. There was a real turning point looming for RDA, and I'd like to believe that the article pushed a lot of people to be less conservative and more emboldened to look beyond the cataloger tradition. In April, a mere few months after the article came out, ALA Publishing arranged the famous "London meeting" that changed the course of RDA. Gordon Dunsire and I were at that meeting–in fact it was the first time we met. I didn't even know much about him aside from his article in the same D-Lib issue.
As it turns out, the RDA article was elevated to the top spot, thus stealing some of his thunder, so he wasn't very happy with me. The decision made in London to allow DCMI to participate by building the vocabularies was a game changer, and Gordon and I were named co-chairs of a task group to manage that process.

So as I re-read the article, I realized that the most important bits at the time are probably mostly of historical interest at this point. I think the most important takeaway is that RDA has come a very long way since then, and in some significant ways is now leading the pack in terms of its model and vocabulary management policies (more about that to come). And I still like the title! …even though it's no longer a true description of the 21st century RDA.

By Diane Hillmann, February , , : am (UTC- ) | RDA, Uncategorized | Post a comment

Denying the non-English speaking world

Not long ago I encountered the analysis of BIBFRAME published by Rob Sanderson with contributions from a group of well-known librarians. It's a pretty impressive document–well organized and clearly referenced. But in fact there's also a significant amount of personal opinion in it, the nature of which is somewhat masked by the references to others holding the same opinion. I have a real concern about some of those points where an assertion of 'best practices' is particularly arguable. The one that sticks in my craw particularly shows up in the section on natural keys:

"Use natural keys in URIs. (References: [manning], [ldbook], [gld-bp], [cooluris].) Although the client must treat URIs as opaque strings, it is good practice to construct URIs in a systematic and human readable fashion for both instances and ontology terms. A natural key is one that appears in the information about the resource, such as some unique identifier for the resource, or the label of the property for ontology terms.
While the machine does not care about structure, memorability or readability of URIs, the developers that write the code do. Completely random URIs introduce difficult to detect semantic and algorithmic errors in both publication and consumption of the data.

Analysis: The use of natural keys is a strength of BIBFRAME, compared to similarly scoped efforts in similar communities such as the RDA and CIDOC-CRM vocabularies, which use completely opaque numbers such as P (hasRespondent) or E (Linguistic Entity). RDA further misses the target in this area by going on to define multiple URIs for each term with language-tagged labels in the URI, such as rda:hasRespondent.en mapping to P . This is a different predicate from the numerical version, and using owl:sameAs to connect the two just makes everyone's lives more difficult unnecessarily. In general, labels for the predicates and classes should be provided in the ontology document, along with thorough and understandable descriptions in multiple languages, not in the URI structure."

This sounds fine so long as you accept the idea that 'natural' means English, because, of course, all developers, no matter their first language, must be fluent enough in English to work with English-only standards and applications. This misuse of 'natural' reminds me of other problematic usages, such as the former practice in the adoption community (of which I have been a part for years) where 'natural' was routinely used to refer to birth parents, thus relegating adoptive parents to the 'un-natural' realm. So in this case, if 'natural' means English, are all other languages inherently un-natural in the world of development? The library world has been dominated by 'Anglo-American' notions of standard practice for a very long time, and happily, RDA is leading away from that, both in governance and in the development of vocabularies and tools.
The multilingual strategy adopted by RDA is based on the following points:

More than a decade of managing vocabularies has convinced us that opaque identifiers are extremely valuable for managing URIs, because they need not change as labels change (only as definitions change). The kinds of 'churn' we saw in the original version of RDA convinced us that label-based URIs were a significant problem (and cost) that grew worse as the vocabularies grew over time.

We get the argument that opaque URIs are often difficult for humans to use–but the tools we're building (the RDA Registry as a case in point) are intended to give human developers what they want for their tasks (human-readable URIs, in a variety of languages) while ensuring that the URIs for properties and values are set up based on what machines need. In this way, changes in the lexical (human-readable) URIs can be managed properly without costly change to the canonical URIs that travel with the data content itself.

The multiple language translations (and distributed translation management by language communities) also enable humans to build discovery and display mechanisms for users who speak a variety of languages. This has been a particularly important value for national libraries outside the US, but also potentially for US libraries meeting the needs of non-English language communities closer to home.

It's too easy for the English-first library development community to insist that URIs be readable in English and to turn a blind eye to the degree that this imposes understanding of the English language and Anglo-American library culture on the rest of the world. This is not automatically the intellectual gift that the distributors of that culture assume it to be. It shouldn't be necessary for non-Anglo-American catalogers to learn and understand Anglo-American language and culture in order to express metadata for a non-Anglo audience.
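The canonical-versus-lexical pattern described above can be sketched in a few lines. This is a toy illustration only: the URIs, identifiers, and labels below are invented placeholders in the style of the RDA Registry, not actual Registry terms.

```python
# A canonical, opaque URI identifies the property; lexical aliases in
# several languages are alternate, human-readable names that resolve to it.
# All URIs here are hypothetical examples.
CANONICAL = "http://example.org/Elements/w/P99999"  # opaque and stable

LEXICAL_ALIASES = {
    "en": "http://example.org/Elements/w/titleOfTheWork.en",
    "es": "http://example.org/Elements/w/tieneTituloDeLaObra.es",
}

# Every alias maps back to the single canonical URI.
ALIAS_TO_CANONICAL = {alias: CANONICAL for alias in LEXICAL_ALIASES.values()}

def canonicalize(uri: str) -> str:
    """Return the canonical URI for a lexical alias; pass other URIs through."""
    return ALIAS_TO_CANONICAL.get(uri, uri)

# Data authored with a lexical alias is normalized before it travels,
# so only the stable canonical URI is stored with the data itself.
assert canonicalize(LEXICAL_ALIASES["es"]) == CANONICAL
```

The design point is that a label correction in any language changes only the alias table; the canonical URI embedded in published data never has to move.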
This is the rough equivalent of the Philadelphia cheesesteak vendor who put up a sign reading "This is America. When ordering speak in English." We understand that for English-speaking developers bibframe.org/vocab/title is initially easier to use than rdaregistry.info/Elements/w/P or even (heaven forefend!) " _ #$a" (in RDF: marc21rdf.info/elements/ xx/M _a). That's why RDA provides rdaregistry.info/Elements/w/titleOfTheWork.en but also, eventually, rdaregistry.info/Elements/w/拥有该作品的标题.ch and rdaregistry.info/Elements/w/tieneTítuloDeLaObra.es, et al. (you do understand Latin, of course). These 'unnatural' lexical aliases will be provided by the 'native' language speakers of their respective national library communities.

As one of the many thousands of librarians who 'speak' MARC to one another–despite our language differences–I am loath to give up that international language to an English-only world. That seems like a step backwards.

By Diane Hillmann, January , , : pm (UTC- ) | BIBFRAME, linked data, RDA, vocabularies | Comment

Review of: Draft Principles for Evaluating Metadata Standards

Metadata standards is a huge topic and evaluation a difficult task, one I've been involved in for quite a while. So I was pretty excited when I saw the link for "Draft Principles for Evaluating Metadata Standards", but after reading it? Not so much. If we're talking about "principles" in the sense of 'stating-the-obvious-as-a-first-step', well, okay–but I'm still not very excited. I do note that the earlier version link uses the title 'Draft Checklist', and I certainly think that's a bit more accurate than 'Draft Principles' for this effort. But even taken as a draft, the text manages to use lots of terms without defining them–not a good thing in an environment where semantics is so important. Let's start with a review of the document itself, then maybe I can suggest some alternative paths forward.
First off, I have a problem with the preamble: "These principles are intended for use by libraries, archives and museum (LAM) communities for the development, maintenance, governance, selection, use and assessment of metadata standards. They apply to metadata structures (field lists, property definitions, etc.), but can also be used with content standards and value vocabularies."

Those tasks ("development, maintenance, governance, selection, use and assessment") are pretty all-encompassing, and yet the connection between those tasks and the overall "evaluation" is unclear. And, of course, without definitions, it's difficult to understand how 'evaluation' relates to 'assessment' in this context–are they the same thing?

Moving on to the second part, about what kinds of metadata standards might be evaluated: we have a very general term, 'metadata structures', with what look to be examples of such structures (field lists, property definitions, etc.). Some would argue (including me) that a field list is not a structure without a notion of connections between the fields; and although property definitions may be part of a 'structure' (as I understand it, at least), they are not a structure per se. And what is meant by the term 'content standards', and how is that different from 'metadata structures'? The term 'value vocabularies' goes by many names, and is not something that can go without a definition. I say this as an author/co-author of a lot of papers that use this term, and we always define it within the context of the paper for just that reason.

There are many more places in the text where fuzziness in terminology is a problem (maybe not a problem for a checklist, but certainly for principles). Some examples:

* What is meant by 'network'? There are many different kinds, and if you mean to refer to the Internet, for goodness sakes say so.
'Things' rather than 'strings' is good, but it will take a while to make it happen in legacy data, which we'll be dealing with for some time, most likely forever. Prospectively created data is a bit easier, but still not a cakewalk–if the 'network' is the global Internet, then "leveraging 'by-reference' models" presents yet-to-be-solved problems of network latency, caching, provenance, security, persistence, and most importantly, stability. Metadata models for both properties and controlled values are an essential part of LAM systems, and simply saying that metadata is "most efficient when connected with the broader network" doesn't necessarily make it so.

* 'Open' can mean many things. Are we talking specific kinds of licenses, or the lack of a license? What kind of re-use are you talking about? Extension? Wholesale adoption with namespace substitution? How does semantic mapping fit into this? (In lieu of a definition, see the mapping paper cited below.)

* This principle seems to imply that "metadata creation" is the sole province of human practitioners, and seriously muddies the meaning of the word 'creation' by drawing a distinction between passive system-created metadata and human-created metadata. Metadata is metadata, and standards apply regardless. What do you mean by 'benefit user communities'? Whose communities? Please define what is meant by 'value' in this context. How would metadata practitioners 'dictate the level of description provided based on the situation at hand'?

* As an evaluative 'principle' this seems overly vague. How would you evaluate a metadata standard's ability to 'easily' support 'emerging' research? What is meant by 'exchange/access methods', and what do they have to do with metadata standards for new kinds of research?

* I agree totally with the sentence "Metadata standards are only as valuable and current as their communities of practice," but the one following makes little sense to me.
"… metadata in LAM institutions have been very stable over the last years …" Really? It could easily be argued that the reason for that perceived stability is the continual inability of implementers to "be a driving force for change" within a governance model that has at the same time been resistant to change. The existence of the DCMI Usage Board, MARBI, and the various boards advising the RDA Steering Committee all speak to the involvement of 'implementers'. Yet there's an implication in this 'principle' that stability is liable to no longer be the case, and that implementers 'driving' will somehow make that inevitable lack of stability palatable. I would submit that stability of the standard should be the guiding principle, rather than the democracy of its governance.

* "Extensible, embeddable, and interoperable" sounds good, but each is more complex than this triumvirate seems. Interoperability in particular is something we should all keep in mind, but although admirable, it rarely succeeds in practice because of the practical incompatibility of different models. DC, MARC21, BIBFRAME, RDA, and schema.org are examples of this–despite their 'modularity', they generally can't simply be used as 'modules', because of differences in the thinking behind the models and in their respective audiences. I would also argue that 'lite style implementations' make sense only if 'lite' means a dumbed-down core that can be mapped to by more detailed metadata. But stressing 'lite implementations' as a specified part of an overall standard gives too much power to the creator of the standard, rather than the creator of the data. Instead we should encourage the use of application profiles, so that the particular choices and usages of the creating entity are well documented, and others can use the data in full or in part according to their needs.
I predict that lossy data transfer will be less acceptable in reality than it is in the abstract, and would suggest that dumb data is more expensive over the longer term (and certainly doesn't support 'new research methods' at all). "Incorporation into local systems" can really only be accomplished by building local systems that adhere to their own local metadata model and are able to map that model in and out to more global models. Extensible and embeddable are very different from interoperable in that context.

* The last section, after the inarguable first sentence, describes what the DCMI 'dumb-down' principle defined nearly twenty years ago, and that strategy still makes sense in a lot of situations. But 'graceful degradation' and 'supporting new and unexpected uses' require smart data to start with. 'Lite' implementation choices (as in the "extensible, embeddable, and interoperable" point above) preclude either of those options, IMO, and 'adding value' of any kind (much less by using 'ontological inferencing') is in no way easily achievable.

I intend to be present at the session in Boston [Boston Conference and Exhibition Center, AB], and since I've asked most of my questions here, I intend not to talk much. Let's see how successful I can be at that!

It may well be that a document this short and generalized isn't yet ready to be a useful tool for metadata practitioners (especially without definitions!). That doesn't mean that the topics it's trying to address aren't important, just that the comprehensive goals in the preamble are not yet being met in this document. There are efforts going on in other arenas–the NISO bibliography roadmap work, for instance–that should have an important impact on many of these issues, which suggests it might be wise for the committee to pause and take another look around. Maybe a good glossary would be an important first step?

Dunsire, Gordon, et al.
"A Reconsideration of Mapping in a Semantic World", paper presented at the International Conference on Dublin Core and Metadata Applications, The Hague. Available at: dcpapers.dublincore.org/pubs/article/view/ /

By Diane Hillmann, December , , : pm (UTC- ) | ALA conferences, systems, vocabularies | Comment

The Jane-athons continue!

The Jane-athon series is alive, well, and expanding its original vision. I wrote about the first 'official' Jane-athon earlier this year, after the first event at Midwinter. Since then the excitement generated at the first one has spawned others:

* the Ag-athon in the UK in May, sponsored by CILIP
* the Maurice Dance in New Zealand (October, at the National Library of New Zealand in Wellington, focused on Maurice Gee)
* the Jane-in (at ALA Annual in San Francisco)
* the RLS-athon (November, Edinburgh, Scotland), following the JSC meeting there and focused on Robert Louis Stevenson

Like good librarians, we have an archive of the Jane-athon materials, for use by anyone who wants to look at or use the presentations or the data created at the Jane-athons.

We're still at it: the next Jane-athon in the series will be the Boston Thing-athon at Harvard University in January. Looking at the list of topics gives a good idea of how the Jane-athons are morphing to a broader focus than that of a single creator, while still training folks to create data with RIMMF. The first three topics (which may change–watch this space) focus not on specific creators, but on moving forward on topics identified as of interest to a broader community.

* Strings vs. things. A focus on replacing strings in metadata with URIs for things.
* Institutional repositories, archives and scholarly communication. A focus on issues in relating and linking data in institutional repositories and archives with library catalogs.
* Rare materials and RDA.
A continuing discussion on the development of RDA and DCRM begun at the JSC meeting and the International Seminar on RDA and Rare Materials held in November.

For beginners new to RDA and RIMMF but with an interest in creating data, we offer:

* Digitization. A focus on how RDA relates metadata for digitized resources to the metadata for original resources, and how RIMMF can be used to improve the quality of MARC records during digitization projects.
* Undergraduate editions. A focus on issues of multiple editions that have little or no change in content vs. significant changes in content, and how RDA accommodates the different scenarios.

Further on the horizon is a recently approved Jane-athon for the AALL conference in July, focusing on Hugo Grotius (inevitably, a Hugo-athon, but there's no link yet).

Note: the Thing-athon coming up at ALA Midwinter is being held on Thursday rather than the traditional Friday, to open attendance to those who have other commitments on Friday. Another new wrinkle is the venue–an actual library, away from the conference center! Whether you're a cataloger or not-a-cataloger, there will be plenty of activities and discussions that should pique your interest. Do yourself a favor and register for a fun and informative day at the Thing-athon to begin your Midwinter experience! Instructions for registering (whether or not you plan to register for MW) can be found on the Toolkit blog.

By Diane Hillmann, December , , : am (UTC- ) | Uncategorized | Post a comment

Separating ideology, politics and utility

Those of you who pay attention to politics (no matter where you are) are very likely shaking your heads over candidates, results or policy. It's a never-ending source of frustration and/or entertainment here in the U.S., and I've noticed that the commentators seem to be focusing in on issues of ideology and faith, particularly where they bump up against politics.
The visit of Pope Francis seemed to take everyone's attention while he was here, but though this event added some 'green' to the discussion, it hasn't pushed much off the political plate.

Politics and faith bump up against each other in the metadata world, too. What with traditionalists still thinking in MARC tags and AACR2, and the technical types rolling their eyes at any mention of MARC and trying to push the conversation towards RDA, RDF, BIBFRAME, schema.org, etc., there are plenty of metadata politics available to flavor the discussion. The good news for us is that the conflicts and differences we confront in the metadata world are much more amenable to useful solution than the politics crowding our news feeds.

I remember well the days when the choice of metadata schema was critical to projects and libraries. Unfortunately, we're all still behaving as if the proliferation of 'new' schemas makes the whole business more complicated–that's because we're still thinking we need to choose one or another, ignoring the commonality at the core of the new metadata effort. But times have changed, and we don't all need to use the same schema to be interoperable (just as we don't all need to speak English or Esperanto to communicate). What we do need to think about is what the needs of our organization are at all stages of the workflow: from creating, publishing, and consuming, through integrating our metadata to make it useful in the various efforts in which we engage.

One thing we do need to consider as we talk about creating new metadata is whether it will need to work with other data that already exists in our institution. If MARC is what we have, then one requirement may be the ability to maintain the level of richness we've built up in the past and still move that rich data forward with us. This suggests to me that RDA, which RIMMF has demonstrated can be losslessly mapped to and from MARC, might be the best choice for the creation of new metadata.
Back in the day, when Dublin Core was the shiny new thing, the notion of 'dumb-down' was hatched, and though not an elegantly named principle, it still works. It says that rich metadata can be mapped fairly easily into a less-rich schema ('dumbed down'), but once transformed in a lossy way, it can't easily be 'smartened up'. In a world of many publishers of linked data, and many consumers of that data, the notion of transforming rich metadata into any number of other schemas and letting the consumers choose what they want is fairly straightforward, and does not require firm knowledge (or correct assumptions) of what the consumers actually need. This is a strategy well tested with OAI-PMH, which established a floor of simple Dublin Core but encouraged the provision of any number of other formats as well, including MARC.

As consumers, libraries and other cultural institutions are also better served by choices. Depending on the services they're trying to support, they can choose the flavor of data that meets their needs best, instead of being offered only what the provider assumes they want. This strategy leaves open the possibility of serving MARC as one of the choices, allowing those institutions still nursing an aged ILS to continue to participate. Of course, the consumers of data need to think about how they aggregate and integrate the data they consume, how to improve that data, and how to make their data services coherent. That's the part of the new create, publish, consume, integrate cycle that scares many librarians, but it shouldn't–really!

So, it's not about choosing the 'right' metadata format; it's about having a fuller and more expansive notion of sharing data, and learning some new skills. Let's kiss the politics goodbye, and get on with it.
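The asymmetry of 'dumb-down' is easy to demonstrate in code. This is a toy sketch: the rich field names below are invented for the illustration, not drawn from RDA or any actual schema, and the target is a simplified Dublin Core-ish view.

```python
# A rich record maps cleanly to a simpler view, but the reverse mapping
# cannot recover what was discarded. All field names here are invented.
RICH_RECORD = {
    "titleOfTheWork": "Kidnapped",
    "authorOfWork": "Stevenson, Robert Louis",
    "illustratorOfExpression": "Hodgson, W. J.",
    "publisherName": "Cassell",
}

# Lossy mapping: distinct creator roles collapse into dc:contributor.
DUMB_DOWN_MAP = {
    "titleOfTheWork": "dc:title",
    "authorOfWork": "dc:contributor",
    "illustratorOfExpression": "dc:contributor",
    "publisherName": "dc:publisher",
}

def dumb_down(record: dict) -> dict:
    """Map a rich record into the simpler schema, dropping role distinctions."""
    simple: dict = {}
    for field, value in record.items():
        target = DUMB_DOWN_MAP.get(field)
        if target:
            simple.setdefault(target, []).append(value)
    return simple

simple = dumb_down(RICH_RECORD)
# Author and illustrator are now both just 'contributors'; no inverse
# function can tell which was which -- that knowledge is gone.
```

Keeping the rich form and serving the simple one on demand (the OAI-PMH pattern) costs one mapping function; going the other way is impossible without re-cataloging.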
By Diane Hillmann, October , , : am (UTC- ) | linked data, RDA, vocabularies | Comment

Semantic versioning and vocabularies

A decade ago, when the Open Metadata Registry (OMR) was just being developed as the NSDL Registry, the vocabulary world was a very different place than it is today. At that point we were tightly focused on SKOS (not fully cooked at that point, but Jon was on the WG that was developing it, so we felt pretty secure diving in). But we were thinking about versioning in the open world of RDF even then. The NSDL Registry kept careful track of all changes to a vocabulary (who, what, when), and the only way to get data in was through the user interface.

We ran an early experiment in making versions based on dynamic, timestamp-based snapshots (we called them 'time slices'; Git calls them 'commit snapshots') available for value vocabularies, but this failed to gain any traction. This seemed to be partly because, well, it was a decade ago, for one, and while it attempted to solve an open-world problem with versioned URIs, it created a new set of problems for closed-world experimenters. Ultimately, we left the versions issue to sit and stew for a bit.

All that started to change as we started working with RDA and needed to move past value vocabularies into properties and classes, and beyond that into issues around uploading data into the OMR. Lately, Git and GitHub have taken off and provided a way for us to make some important jumps in functionality, culminating in the OMR/GitHub-based RDA Registry. It sounds easy and intuitive now, but it sure wasn't at the time, and what most people don't know is that the OMR is still where RDA/RDF data originates–it wasn't supplanted by Git/GitHub, but is chugging along in the background. The OMR's RDF CMS is still visible and usable by all, but folks managing larger vocabularies now have more options.
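One way to picture the versioning rethink: treat vocabulary changes the way software releases are treated. The sketch below applies semantic-versioning logic to vocabulary edits; the mapping of change types to version parts is my own illustration of the idea, not the OMR's or the RDA Registry's actual rules.

```python
# Semantic-versioning logic applied to vocabulary changes (illustrative):
# a change to a term's *definition* alters meaning, so consumers may break
# (major bump); a *label* change alters only human-readable text (minor);
# a typo fix is a patch.

def bump(version: str, change: str) -> str:
    """Return the next version string after a vocabulary change."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "definition":
        return f"{major + 1}.0.0"
    if change == "label":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

# With opaque canonical URIs, a label change is a minor release: nothing
# downstream that stores the URI has to change.
assert bump("2.4.1", "label") == "2.5.0"
assert bump("2.4.1", "definition") == "3.0.0"
```

Publishing such version information in machine-readable form is exactly the 'connection identifying changes' that the hidden web pages and downloadable files fail to provide.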
One important aspect of the use of Git and GitHub was the ability to rethink versioning. Just about a year ago our paper on this topic ("Versioning Vocabularies in a Linked Data World", by Diane Hillmann, Gordon Dunsire and Jon Phipps) was presented at the IFLA satellite meeting in Paris. We used as our model the way software on our various devices and systems is updated–more and more, these changes happen without much (if any) interaction with us. In the world of vocabularies defining the properties and values in linked data, most updating is still very manual (if done at all), and the important information about what has changed, and when, is often hidden behind web pages or downloadable files that provide no machine-understandable connections identifying changes.

And just solving the change management issue does little to solve the inevitable 'vocabulary rot' that can make published 'linked data' less and less meaningful, accurate, and useful over time. Building stable change management practices is a critical missing piece of the linked data publishing puzzle. The problem will grow exponentially as language versions and inter-vocabulary mappings start to show up as well–and it won't be too long before that happens. Please take a look at the paper and join in the conversation!

By Diane Hillmann, September , , : pm (UTC- ) | RDA, tools, vocabularies | Post a comment

Five star vocabulary use

Most of us in the library and cultural heritage communities interested in metadata are well aware of Tim Berners-Lee's five star ratings for linked open data (in fact, some of us actually have the mug). The five star rating for LOD, intended to encourage us to follow five basic rules for linked data, is useful, but as we've discussed it over the years, a basic question rises up: what good is linked data without (property) vocabularies?
Vocabulary manager types like me and my peeps are always thinking like this, and recently we came across solid evidence that we are not alone in the universe. Check out "Five Stars of Linked Data Vocabulary Use", published last year in the Semantic Web journal. The five authors posit that TBL's five star linked data is just the precondition to what we really need: vocabularies. They point out that the original star rating says nothing about vocabularies, but that linked data without vocabularies is not useful at all: "Just converting a CSV file to a set of RDF triples and linking them to another set of triples does not necessarily make the data more (re)usable to humans or machines." Needless to say, we share this viewpoint!

I'm not going to steal their thunder and list all five star categories here–you really should read the article (it's short)–but I will note that the lowest level is a zero star rating that covers LD with no vocabularies. The five star rating is reserved for vocabularies that are linked to other vocabularies, which is pretty cool, and not easy for the original publisher to accomplish as a soloist. These five star ratings are a terrific start on the good-practices documentation for vocabularies used in LOD that we've had in our minds for some time. Stay tuned.

By Diane Hillmann, August , , : pm (UTC- ) | linked data, vocabularies | Post a comment

What do we mean when we talk about 'meaning'?

Over the past weekend I participated in a Twitter conversation on the topic of meaning, data, transformation and packaging. The conversation is too long to repost here, but searching for @metadata_maven in July should pick most of it up. Aside from my usual frustration at the message limitations in Twitter, there seemed to be a lot of confusion about what exactly we mean by 'meaning', and how it gets expressed in data.
I had a Skype conversation with @jonphipps about it, and thought I could reproduce that here, in a way that could add to the original conversation, perhaps clarifying a few things. [Probably good to read the Twitter conversation before reading the rest of this.]

Jon Phipps: I think the problem the people in that conversation are trying to address is that MARC has done triple duty as a local and global serialization (format) for storage, supporting indexing and display; a global data interchange format; and a focal point for creating agreement about the rules everyone is expected to follow to populate the data (AACR2, RDA). If you walk away from that, even if you don't kill it, nothing else is going to be able to serve that particular set of functions. But that's the way everyone chooses to discuss BIBFRAME, or schema.org, or any other 'MARC replacement'.

Diane Hillmann: Yeah, but how does 'meaning' merely expressed on a wiki page help in any way? Isn't the idea to have meaning expressed with the data itself?

Jon Phipps: It depends on whether you see RDF as a meaning transport mechanism or a data transport mechanism. That's the difference between semantic data and linked data.

Diane Hillmann: It's both, don't you think?

Jon Phipps: Semantic data is the smart subset of linked data.

Diane Hillmann: Nice tagline.

Jon Phipps: Zepheira, and now DC, seem to be increasingly looking at RDF as merely linked data–I should say, a transport mechanism for 'linked' data.

Diane Hillmann: It's easier that way.

Jon Phipps: Exactly. Basically what they're saying is that meaning is up to the receiver's system to determine. dc:title of 'Mr.' is fine in that world–it even validates according to the 'new' AP thinking. It's all easier for the data producers if they don't have to care about vocabularies. But the value of RDF is that it's brilliantly designed to transport knowledge, not just data.
RDF data is intended to live in a world where any thing can be described by any thing, and all of those descriptions can be aggregated over time to form a more complete description of the thing being described. Knowledge transfer really benefits from Semantic Web concepts like inferences and entailments and even truthiness (in addition to just validation). If you discount or even reject those concepts in a linked data world, then you might as well ship your data around as CSV or even SQL files and be done with it. One of the things about MARC is that it's incredibly semantically rich (marc21rdf.info) and has also been brilliantly designed by a lot of people over a lot of years to convey an equally rich body of bibliographic knowledge. But throwing away even a small portion of that knowledge in pursuit of a far dumber linked data holy grail is a lot like saying that since most people only use a relatively limited number of words (especially when they're texting), we have no need for a dictionary of tens, or even hundreds, of thousands of words. MARC makes knowledge transfer look relatively easy because the knowledge is embedded in a vocabulary every cataloger learns and speaks fairly fluently. It looks like it's just a (truly limiting) data format, so it's easy to think that replacing it is just a matter of coming up with a fresh new format, like RDF. But it's going to be a lot harder than that, which is tacitly acknowledged by the many-faceted effort to permanently dumb down bibliographic metadata, and it's one of the reasons why I think bibframe.org, bibfra.me, and schema.org might end up being very destructive, given the way they're being promoted (be sure to park your MARC somewhere). [That's why we're so focused on the RDA data model (which can actually be semantically richer than MARC), why we helped create marc21rdf.info, and why we're working at building out our RDF vocabulary management services.]
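The entailments Jon mentions can be sketched with a toy rule engine. If dct:creator is declared a sub-property of dct:contributor, a reasoner can infer the contributor triple from the creator triple; ship the same data as bare CSV and that knowledge is gone. The terms and the single RDFS-style rule here are illustrative assumptions, not a full reasoner.

```python
# Instance data and a tiny schema, as plain triples.
data = {("ex:emma", "dct:creator", "ex:austen")}
schema = {("dct:creator", "rdfs:subPropertyOf", "dct:contributor")}

def entail(triples, schema_triples):
    """Apply one rdfs:subPropertyOf entailment rule:
    if p is a sub-property of q, every (s, p, o) also implies (s, q, o)."""
    inferred = set(triples)
    for (sub, _, sup) in schema_triples:
        for (s, p, o) in triples:
            if p == sub:
                inferred.add((s, sup, o))
    return inferred

result = entail(data, schema)
print(("ex:emma", "dct:contributor", "ex:austen") in result)  # True
```

The inferred triple was never stated; it falls out of the vocabulary's declared semantics, which is exactly the knowledge a "dumb" transport discards.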
Diane Hillmann: This would be a great conversation to record for a podcast 😉

Jon Phipps: I'm not saying proper vocabulary management is easy. Look at us, for instance: we haven't bothered to publish the OMR vocabs, and only one person has noticed (so far). But they're in active use in every OMR-generated vocab. The point I was making was that we were no better, as publishers of theoretically semantic metadata, at making sure the data was 'meaningful' by making sure that the vocabs resolved, had definitions, etc. [P.S. We're now working on publishing our Registry vocabularies.]

By Diane Hillmann, July (Linked Data, RDA, Vocabularies). Comment (show inline).

Fresh from ALA, what's new?

In the old days, when I was on MARBI as liaison for AALL, I used to write a fairly detailed report, and afterwards wrote it up for my Cornell colleagues. The gist of those reports was to describe what happened, and whether there might be implications to consider from the decisions. I don't propose to do that here, but it does feel as if I'm acting in a familiar 'reporting' mode. In an early Saturday presentation sponsored by the Linked Library Data IG, we heard about BIBFRAME and VIVO. I was very interested to see how VIVO has grown (having seen it as an infant), but was puzzled by the suggestion that it or FOAF could substitute for the functionality embedded in authority records. For one thing, authority records are about disambiguating names, not describing people, much as some believe that's where authority control should be going. Even when we stop using text strings as identifiers, we'll still need that function, and we should think carefully about whether adding other functions makes good sense. Later on Saturday, at the Cataloging Norms IG meeting, Nancy Fallgren spoke on the NLM collaboration with Zepheira, GW, and others on BIBFRAME Lite. They're now testing the Kuali OLE cataloging module for use with BF Lite, which will include a triple store.
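Jon's closing point in the conversation above, that published vocabulary terms should resolve and carry definitions, amounts to a small checklist. A sketch over an in-memory term registry follows; the terms and fields are hypothetical stand-ins for what a real resolution check (HTTP request plus a look at the returned description) would gather.

```python
# A toy registry: what we know about each published term.
terms = {
    "ex:hasPart": {
        "resolves": True,
        "label": "has part",
        "definition": "A related resource included in the described resource.",
    },
    # A term that was minted but never properly published.
    "ex:mystery": {"resolves": False, "label": "", "definition": ""},
}

def meaningful(term, registry):
    """Minimum bar for a 'meaningful' vocabulary term:
    it resolves, and it carries both a label and a definition."""
    t = registry.get(term, {})
    return bool(t.get("resolves") and t.get("label") and t.get("definition"))

print([t for t in terms if not meaningful(t, terms)])  # ['ex:mystery']
```

Running this kind of audit over a vocabulary is cheap; as the conversation admits, actually doing it is the part publishers tend to skip.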
An important quote from Nancy: "Legacy data should not drive development." So true, but neither should we be starting over, or discarding data, just to simplify data creation, thus losing the ability to respond to the more complex needs in cataloging, which aren't going away (a point demonstrated usefully in the recent Jane-athons). I was the last speaker on that program, and spoke on the topic of "What can we do about our legacy data?" I was primarily asking questions and discussing options, not providing answers. The one thing I am adamant about is that nobody should be throwing away their MARC records. I even came up with a simple rule: "Park the MARC." After all, storage is cheap, and nobody really knows how the current situation will settle out. Data is easy to dumb down, but not so easy to smarten up, and there may be do-overs in store for some down the road, after the experimentation is done and the tradeoffs are clearer. I also attended the BIBFRAME update, and noted that there's still no open discussion about the 'classic' (as in 'Classic Coke') BIBFRAME version used by LC, and the 'new' (as in 'New Coke') BIBFRAME Lite version being developed by Zepheira, which is apparently the vocabulary they're using in their projects and training. It seems like it could be a useful discussion, but somebody's got to start it. It's not gonna be me. The most interesting part of that update, from my point of view, was hearing Sally McCallum talk about the testing of BIBFRAME by LC's catalogers. The tool they're planning on using (in development, I believe) will use RDA labels and include rule numbers from the RDA Toolkit. Now, there's a test I really want to hear about at Midwinter!
But of course all of that RDA 'testing' they insisted on several years ago, to determine whether the RDA rules could be applied to MARC, doesn't (and can't) apply to BIBFRAME Classic, so ... will there be a new round of much-publicized and eagerly anticipated shared institutional testing of this new tool and its assumptions? Just askin'.

By Diane Hillmann, July (ALA Conferences, BIBFRAME, RDA, Vocabularies). Post a comment.

What's up with this Jane-athon stuff?

The RDA development team started talking about developing training for the 'new' RDA, with a focus on the vocabularies, in the fall. We had some notion of what we didn't want to do: we didn't want yet another 'sage on the stage' event; we wanted to re-purpose the 'hackathon' model from a software focus to data creation (including a major hands-on aspect); and we wanted to demonstrate what RDA looked like (and could do) in a native RDA environment, without reference to MARC. This was a tall order. Using RIMMF for the data creation was a no-brainer: the developers had been using the RDA Registry to feed new vocabulary elements into their software (effectively becoming the RDA Registry's first client), and were fully committed to FRBR. Deborah Fritz had been training librarians and others on RIMMF for years, gathering feedback and building enthusiasm. It was Deborah who came up with the Jane-athon idea, and the RDA development group took it and ran with it. Using the Jane Austen theme was a brilliant part of Deborah's idea. Everybody knows about JA, and the number of spin-offs, rip-offs, and re-tellings of the novels (in many media formats) made her work a natural for examining why RDA and FRBR make sense. One goal stated everywhere in the marketing materials for our first Jane outing was that we wanted people to have fun.
All of us have been part of the audience and on the dais for many information sessions, for RDA and other issues, and neither position has ever been much fun, useful as the sessions might have been. The same goes for webinars, which, as they've developed in library-land, tend to be dry, boring, and completely bereft of human interaction. And there was a lot of fun at that first Jane-athon; I venture to say that the vast majority of the folks in the room left with smiles and thanks. We got an amazing response to our evaluation survey, and the preponderance of responses were expansive, positive, and clearly designed to help the organizers do better the next time. The various folks from ALA Publishing who stood at the back and watched the fun were absolutely amazed at the noise, the laughter, and the collaboration in evidence. No small part of the success of the Jane-athon rested with the team leaders at each table, and with the coaches going from table to table helping out with puzzling issues, ensuring that participants were able to create data using RIMMF that could be aggregated for examination later in the day. From the beginning we thought of Jane as the first of many. In the first flush of success, as participants signed up and enthusiasm built, we talked publicly about making it possible to do local Jane-athons, but we realized that our small group would have difficulty doing smaller events, with less expertise on site, to the same standard we set at the first Jane-athon. We had to do a better job of thinking through the local expansion, and of ensuring that local participants get the same (or similar) value from the experience, before responding to requests. As a step in that direction, CILIP in the UK is planning an Ag-athon in May, which will add much to the collective experience as well as to the data store that began with the first Jane-athon, and which will be an increasingly important factor as we work through the issues of sharing data.
The collection and storage of the Jane-athon data was envisioned prior to the first event, and the r-balls site was designed as a place to store and share RIMMF-based information. Though a valuable step towards shareable RDA data, r-balls have their limits. The data itself can be curated by human experts or made available warts and all, depending on the needs of the user of the data. For the longer term, RIMMF can output RDF statements based on the r-ball info, and a triple store is in development for experimentation and exploration. There are plans to improve the visualization of this data and demonstrate its use at the Jane-athon in San Francisco, which will include more about RDA and linked data, as well as what the created data can be used for, in particular for new and improved services. So, what are the implications of the first Jane-athon's success for libraries interested in linked data? One of the biggest misunderstandings floating around libraryland in linked data conversations is that it's necessary to make one and only one choice of format, and eschew all others (kind of like saying that everyone has to speak English to participate in LOD). This is not just incorrect, it's also dangerous. In the MARC era, there was truly no choice for libraries: to participate in record sharing, they had to use MARC. But the technology has changed, and rapidly evolving semantic mapping strategies [see: dcpapers.dublincore.org/pubs/article/view/ ] will enable libraries to use the most appropriate schemas and tools for creating data to be used in their local context, and others for distributing that data to partners, collaborators, or the larger world. Another widely circulated meme is that RDA/FRBR is 'too complicated' for what libraries need; we're encouraged to 'simplify, simplify' and assured that we'll still be able to do what we need.
Hmm, well, simplification is an attractive idea, until one remembers that the environment we work in, with evolving carriers, versions, and creative ideas for marketing materials to libraries, is getting more complex than ever. Without the specificity to describe what we have (or have access to), we push the problem out to our users to figure out on their own. Libraries have always tried to be smarter than that, and that requires "smart", not "dumb", metadata. Of course, behind the 'too complicated' argument lies the notion that a) we're not smart enough to figure out how to do RDA and FRBR right, and b) complex means more expensive. I refuse to give space to a), but b) is an important consideration. I urge you to take a look at the Jane-athon data and consider the fact that Jane Austen wrote very few novels, but they've been re-published, with various editions, versions, and commentaries, for almost two centuries. Once you add the 'based on', the 'inspired by', and the enormous trail created by those trying to use Jane's popularity to sell stuff ("Sense and Sensibility and Sea Monsters" is a favorite of mine), you can see the problem. Think of a pyramid with a very expansive base and a very sharp point, and consider that the works that everything at the bottom wants to link to don't require repeating the description of each novel every time in RDA. And we're not adding notes to descriptions based on the outdated notion that the only use for information about the relationship between "Sense and Sensibility and Sea Monsters" and Jane's "Sense and Sensibility" is a human being who looks far enough into the description to read the note. One of the big revelations for most Jane-athon participants was seeing how well RIMMF translated legacy MARC records into RDA, with links between the WEM levels, and others to the named agents in the record. It's very slick and, most importantly, not lossy.
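The pyramid argument above can be sketched in miniature: in an RDA/FRBR-shaped graph, the Work "Sense and Sensibility" is described once, and each spin-off links to it, rather than every record repeating (or burying in a note) the relationship. The identifiers and the simplified one-level "basedOn" link below are hypothetical stand-ins for the richer RDA relationship elements.

```python
# A toy graph: one heavily-derived Work and two of its derivatives.
graph = {
    "w:sense": {"type": "Work", "title": "Sense and Sensibility"},
    "w:monsters": {"type": "Work",
                   "title": "Sense and Sensibility and Sea Monsters",
                   "basedOn": "w:sense"},
    "w:film": {"type": "Work",
               "title": "Sense and Sensibility (motion picture)",
               "basedOn": "w:sense"},
}

def derivatives(g, work_id):
    """Everything at the base of the pyramid that points at one Work.
    Machine-actionable, with no note for a human to dig out."""
    return sorted(i for i, node in g.items() if node.get("basedOn") == work_id)

print(derivatives(graph, "w:sense"))  # ['w:film', 'w:monsters']
```

Add a derivative and the original Work's description never changes; that is the economy the pyramid shape buys.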
Consider that RIMMF also outputs in both MARC and RDF, and you see something of a missing link (if not the Golden Gate Bridge :-). Not to say there aren't issues to be considered with RDA, as with other options. There certainly are, and they'll be discussed at the Jane-In in San Francisco, as well as at the RDA Forum on the following day, which will focus on current RDA upgrades and the future of RDA and cataloging. (More detailed information on the Forum will be available shortly.) Don't miss the fun: take a look at the details and then go ahead and register. And catalogers, try your best to entice your developers to come too. We'll set up a table for them, and you'll improve the conversation level at home considerably!

By Diane Hillmann, May (Linked Data, RDA, Uncategorized). Comment (show inline).
Hanging Together, the OCLC Research blog

A research roadmap for building a national finding aid network (NAFAN). I had the opportunity to develop and lead the OCLC project team in the user research design phase for the IMLS-funded project (grant number LG- -OLS- ), Building a National Finding Aid Network ...

How well does EAD tag usage support finding aid discovery? In November, we shared information with you about the Building a National Finding Aid Network project (NAFAN). This is a two-year research and demonstration project to build the foundation for a (US) national ...

Social interoperability: getting to know all about you. Building and sustaining productive cross-campus partnerships in support of the university research enterprise is both necessary and fraught with challenges. Social interoperability, the creation and maintenance of working relationships ...

Reimagine Descriptive Workflows: meeting the challenges of inclusive description in shared infrastructure. In a previous blog post, I told you about our Reimagine Descriptive Workflows project, and the path we took to get there.
In that post, I shared the three objectives ...

Fill your social interoperability toolbox. Social interoperability, the creation and maintenance of working relationships across individuals and organizational units that promote collaboration, communication, and mutual understanding, is important. But it can be hard ...

Filling our cups at the home office water cooler. A few weeks ago I learned that Crystal, an OCLC colleague I barely knew, loves to play board games, and that she and her husband just became empty nesters (like my ...)

Working across campus is like herding flaming cats. Do you work at a university? If so, did you know that you work in a complex, adaptive system? And did you know that that makes it hard to build ...

The REALM project: what we've learned and what's next. When the REALM project began in April, little was known about the COVID-19 virus. Now, more than a year later, REALM has completed eight laboratory tests, synthesized emerging research ...

'Reimagine Descriptive Workflows' in libraries and archives. In the OCLC Research Library Partnership, we have a practice of "learning together": we listen for issues that form a shared challenge space within our global partnership, then we ...

Join us for the Total Cost of Stewardship / #oclc_tcos Twitter chat!
In May, we are hosting a Twitter chat inspired by our recent report, Total Cost of Stewardship: Responsible Collection Building ...

The Code4Lib Journal

Editorial. With the publication of this issue, the Code4Lib Journal is now closer to a milestone issue number than to issue 1. Also, we are developing a name change policy.

Adaptive digital library services: emergency access digitization at the University of Illinois at Urbana-Champaign during the COVID-19 pandemic. This paper describes how the University of Illinois at Urbana-Champaign Library provided access to circulating library materials during the COVID-19 pandemic. Specifically, it details how the Library adapted existing staff roles and digital library infrastructure to offer on-demand digitization of, and limited online access to, library collection items requested by patrons working in a remote teaching and learning environment. The paper also provides an overview of the technology used, details how dedicated staff with strong local control of technology were able to scale up a university-wide solution, reflects on lessons learned, and analyzes nine months of usage data to shed light on library patrons' changing needs during the pandemic.

Assessing high-volume transfers from optical media at NYPL. NYPL's workflow for transferring optical media to long-term storage was met with a challenge: an acquisition of a collection containing thousands of recordable CDs and DVDs. Many programs take a disk-by-disk approach to imaging or transferring optical media, but to deal with a collection of this size, NYPL developed a workflow using a Nimbie autoloader and a customized version of KBNL's open-source Iromlab software to batch disks for transfer.
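The accuracy question that batch transfers like NYPL's raise is commonly answered with fixity checks: hash each disc image after transfer and compare it against a second read or a stored manifest. A minimal sketch with the stdlib `hashlib` follows; the byte strings stand in for real disc images, and this is a general illustration, not NYPL's actual tooling.

```python
import hashlib

def sha256(data: bytes) -> str:
    """Checksum used to verify that two reads produced identical bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins for two reads of the same disc, and for one corrupted transfer.
first_read = b"disc-image-bytes"
second_read = b"disc-image-bytes"
bad_read = b"disc-image-byte\x00"

print(sha256(first_read) == sha256(second_read))  # True  -> transfer verified
print(sha256(first_read) == sha256(bad_read))     # False -> flag for retry
```

In a high-volume workflow the mismatch branch would queue the disc for re-imaging rather than silently accepting the transfer.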
This workflow prioritized quantity, but at the outset it was difficult to tell whether every transfer was as accurate as it could be. We discuss the process of evaluating the success of the mass transfer workflow, and the improvements we made to identify and troubleshoot errors that could occur during the transfer. A background of the institution, and of other institutions' approaches to similar projects, is given, followed by an in-depth discussion of the process of gathering and analyzing data. We finish with a discussion of our takeaways from the project.

Better together: improving the lives of metadata creators with natural language processing. DC Public Library has long held digital copies of the full run of the local alternative weekly, Washington City Paper, but had no official status as a rights grantor to enable use. That recently changed due to a full agreement being reached with the publisher. One condition of that agreement, however, was that issues become available, with usable descriptive metadata and subject access, in time to celebrate an upcoming anniversary of the publication, which at that time was six months away. One of the most time-intensive tasks our metadata specialists work on is assigning description to digital objects. This paper details how we applied Python's Natural Language Toolkit and OpenRefine's reconciliation functions to the collection's OCR text to simplify subject selection for staff with no background in programming.

Choose your own educational resource: developing an interactive OER using the ink scripting language. Learning games are games created with the purpose of educating, as well as entertaining, players. This article describes the potential of interactive fiction (IF), a type of text-based game, to serve as learning games.
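Returning to the DCPL paper above: it pairs NLTK with OpenRefine reconciliation, but the general shape of the NLP step (tokenize the OCR text, drop stopwords, surface frequent candidate terms for a cataloger to confirm) can be sketched with the stdlib alone. The sample text, stopword list, and thresholds below are all hypothetical.

```python
import re
from collections import Counter

# A deliberately tiny stopword list; a real pipeline would use a fuller one.
STOP = {"the", "a", "of", "in", "and", "to", "on", "for", "at", "is"}

def candidate_terms(ocr_text, n=3):
    """Rank frequent content words as candidate subject terms
    for a human to review (and, downstream, reconcile against
    a controlled vocabulary)."""
    words = re.findall(r"[a-z]+", ocr_text.lower())
    counts = Counter(w for w in words if w not in STOP and len(w) > 3)
    return [w for w, _ in counts.most_common(n)]

sample = ("The council debated the metro budget. Metro riders protested "
          "the budget cuts at the council meeting.")
print(candidate_terms(sample))  # ['council', 'metro', 'budget']
```

The output is only a suggestion list; as in the DCPL workflow, a metadata specialist still makes the final subject assignment.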
After summarizing the basic concepts of interactive fiction and learning games, the article describes common interactive fiction programming languages and tools, including ink, a simple markup language that can be used to create choice-based text games that play in a web browser. The final section of the article includes code putting the concepts of ink, interactive fiction, and learning games into action, using part of an interactive OER created by the author in December.

Enhancing print journal analysis for shared print collections. The Western Regional Storage Trust (WEST) is a distributed shared print journal repository program serving research libraries, college and university libraries, and library consortia in the western region of the United States. WEST solicits serial bibliographic records and related holdings biennially, which are evaluated and identified as candidates for shared print archiving using a complex collection analysis process. California Digital Library's Discovery & Delivery WEST operations team (WEST-Ops) supports the functionality behind this collection analysis process used by WEST program staff (WEST-Staff) and members. For WEST, proposals for shared print archiving have historically been predicated on what is known as an Ulrich's journal family, which pulls together related serial titles, for example succeeding and preceding serial titles, their supplements, and foreign-language parallel titles. Ulrich's, while it has been invaluable, proves problematic in several ways, resulting in the approximate omission of half of the journal titles submitted for collection analysis. Part of WEST's effectiveness in archiving hinges upon its ability to analyze local serials data across its membership as holistically as possible.
The process that enables this analysis, and the subsequent archiving proposals, is dependent on the Ulrich's journal family, for which ISSN has traditionally been used to match and cluster all related titles within a particular family. As such, the process is limited, in that many journals have never been assigned ISSNs, especially older publications, or member bibliographic records may lack an ISSN, though the ISSN may exist in an OCLC primary record. Building a mechanism for matching on ISSNs that goes beyond the base set of primary, former, and succeeding titles expands the number of eligible ISSNs that facilitate Ulrich's journal family matching. Furthermore, when no matches in Ulrich's can be made based on ISSN, other types of control numbers within a bibliographic record may be used to match with records that have previously been matched to an Ulrich's journal family via ISSN, resulting in a significant increase in the number of titles eligible for collection analysis. This paper will discuss problems in Ulrich's journal family matching, improved functional methodologies developed to address those problems, and potential strategies for improving serial title clustering in the future.

How we built a spatial subject classification based on Wikidata. From the fall through the beginning of the following year, a project was carried out to upgrade spatial subject indexing in the North Rhine-Westphalian Bibliography (NWBib) from uncontrolled strings to controlled values. For this purpose, a spatial classification with thousands of entries was created from Wikidata and published as a SKOS (Simple Knowledge Organization System) vocabulary. The article gives an overview of the initial problem and outlines the different implementation steps.

Institutional data repository development, a moving target. At the end of the year, the Research Data Service (RDS) at the University of Illinois at Urbana-Champaign (UIUC) completed its fifth year as a campus-wide service.
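The ISSN-first, control-number-fallback matching that the WEST paper describes can be reduced to a sketch: try to place a record in an Ulrich's-style journal family by any of its ISSNs, then fall back to other control numbers already seen in records matched to that family. All identifiers below are hypothetical.

```python
# One toy journal family, with the ISSNs and control numbers
# already associated with it through earlier matching.
families = {
    "fam1": {"issns": {"1234-5678", "8765-4321"},
             "control_nos": {"ocm111"}},
}

def match_family(record, fams):
    """Return (family id, method) for a bibliographic record,
    preferring ISSN matches over control-number fallbacks."""
    for fid, fam in fams.items():
        if record.get("issns", set()) & fam["issns"]:
            return fid, "issn"
    for fid, fam in fams.items():
        if record.get("control_nos", set()) & fam["control_nos"]:
            return fid, "control-number fallback"
    return None, "unmatched"

print(match_family({"issns": {"8765-4321"}}, families))
print(match_family({"issns": set(), "control_nos": {"ocm111"}}, families))
print(match_family({"issns": {"0000-0000"}}, families))
```

The second call is the interesting case: a record with no usable ISSN still lands in the right family, which is exactly the expansion in eligible titles the paper reports.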
In order to gauge the effectiveness of the RDS in meeting the needs of Illinois researchers, RDS staff developed a five-year review consisting of a survey and a series of in-depth focus group interviews. As a result, our institutional data repository, Illinois Data Bank, developed in-house by University Library IT staff, was recognized as the most useful service offering by our unit. When launched, storage resources and web servers for Illinois Data Bank and supporting systems were hosted on-premises at UIUC. As anticipated, researchers increasingly need to share large and complex datasets. In a responsive effort to leverage the potentially more reliable, highly available, cost-effective, and scalable storage accessible to computation resources, we migrated our item bitstreams and web services to the cloud. Our efforts have met with success, but also with painful bumps along the way. This article describes how we supported data curation workflows through the transition from on-premises to cloud resource hosting. It details our approaches to ingesting, curating, and offering access to terabyte-scale dataset files, which may be archive-type files (e.g., .zip or .tar) containing complex directory structures.

On the nature of extreme close-range photogrammetry: visualization and measurement of North African stone points. Image acquisition, visualization, and measurement are examined in the context of extreme close-range photogrammetric data analysis. Manual measurements commonly used in traditional stone artifact investigation are used as a starting point to better gauge the usefulness of high-resolution 3D surrogates and the flexible digital tool sets that can work with them. The potential of various visualization techniques is also explored in the context of future teaching, learning, and research in virtual environments.
Optimizing the Elasticsearch search experience using a thesaurus. The Belgian Art Links and Tools (BALaT) (http://balat.kikirpa.be/) is the continuously expanding online documentary platform of the Royal Institute for Cultural Heritage (KIK-IRPA), Brussels (Belgium). BALaT contains hundreds of thousands of images from KIK-IRPA's unique collection of photo negatives on the cultural heritage of Belgium, but also the library catalogue, PDFs of articles from KIK-IRPA's Bulletin and other publications, an extensive persons-and-institutions authority list, and several specialized thematic websites, each of those collections being multilingual, as Belgium has three official languages. All of these are interlinked to give the user easy access to freely available information on the Belgian cultural heritage. In recent years, KIK-IRPA has been working on a detailed and inclusive data management plan. Through this data management plan, a new project, HESCIDA (Heritage Science Data Archive), will upgrade BALaT to BALaT+, enabling access to searchable registries of KIK-IRPA datasets and data interoperability. BALaT+ will be a building block of DIGILAB, one of the future pillars of the European Research Infrastructure for Heritage Science (E-RIHS), which will provide online access to scientific data concerning tangible heritage, following the FAIR principles (Findable, Accessible, Interoperable, Reusable). It will include and enable access to searchable registries of specialized digital resources (datasets, reference collections, thesauri, ontologies, etc.). In the context of this project, Elasticsearch has been chosen as the technology empowering the search component of BALaT+. An essential feature of this search functionality of BALaT+ is the need for linguistic equivalencies: a term query in French should also return the matching results containing the equivalent term in Dutch.
Another important feature is a mechanism to broaden the search with elements of more precise terminology: a term like "furniture" should also match records containing chairs, tables, etc. This article will explain how a thesaurus developed in-house at KIK-IRPA was used to obtain these functionalities, from the processing of that thesaurus to the production of the configuration needed by Elasticsearch.

Pythagoras: discovering and visualizing musical relationships using computer analysis. This paper presents an introduction to Pythagoras, an in-progress digital humanities project using Python to parse and analyze XML-encoded music scores. The goal of the project is to use recurring patterns of notes to explore existing relationships among musical works and composers. An intended outcome of this project is to give music performers, scholars, librarians, and anyone else interested in digital humanities new insights into musical relationships, as well as new methods of data analysis in the arts.

Editorial: resuming our publication schedule.

Managing an institutional repository workflow with GitLab and a folder-based deposit system. Institutional repositories (IRs) exist in a variety of configurations and in various states of development across the country. Each organization with an IR has a workflow that can range from explicitly documented and codified sets of software and human workflows to ad hoc assortments of methods for working with faculty to acquire, process, and load items into a repository. The University of North Texas (UNT) Libraries has managed an IR called UNT Scholarly Works for the past decade, but has until recently relied on ad hoc workflows. Over the past six months, we have worked to improve our processes in a way that is extensible and flexible, while also providing a clear workflow for our staff to process submitted and harvested content.
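Back to the BALaT+ article for a moment: the two search features it names (cross-language equivalence and thesaurus-driven broadening) map naturally onto an Elasticsearch synonym token filter. The sketch below builds an index-settings body as a Python dict; the filter and analyzer names and the sample rules are hypothetical, and in the KIK-IRPA case the rules would be generated from the in-house thesaurus rather than written by hand.

```python
index_settings = {
    "settings": {
        "analysis": {
            "filter": {
                "thesaurus_synonyms": {
                    # synonym_graph handles multi-word rules at search time
                    "type": "synonym_graph",
                    "synonyms": [
                        # language equivalence: querying any form matches all
                        "furniture, meubilair, mobilier",
                        # hierarchy expansion: the broad term also
                        # matches narrower thesaurus terms
                        "furniture => furniture, chair, table",
                    ],
                }
            },
            "analyzer": {
                "thesaurus_analyzer": {
                    "tokenizer": "standard",
                    "filter": ["lowercase", "thesaurus_synonyms"],
                }
            },
        }
    }
}

print("thesaurus_synonyms" in index_settings["settings"]["analysis"]["filter"])
```

This body would be sent when creating the index; fields that should get the behavior then declare `thesaurus_analyzer` as their search analyzer.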
our approach makes use of gitlab and its associated tools to track and communicate priorities for a multi-user team processing resources. we paired this web-based management with a folder-based system for moving the deposited resources through a sequential set of processes that are necessary to describe, upload, and preserve the resource. this strategy can be used in a number of different applications and can serve as a set of building blocks that can be configured in different ways. this article will discuss which components of gitlab are used together as tools for tracking deposits from faculty as they move through different steps in the workflow. likewise, the folder-based workflow queue will be presented and described as implemented at unt, and examples for how we have used it in different situations will be presented.

customizing alma and primo for home & locker delivery

like many ex libris libraries in fall , our library at california state university, northridge (csun) was not physically open to the public during the - academic year, but we wanted to continue to support the research and study needs of our over , university students and , faculty and staff. this article will explain our alma and primo implementation to allow for home mail delivery of physical items, including policy decisions, workflow changes, customization of request forms through labels and delivery skins, customization of alma letters, a python solution to add the “home” address type to patron addresses to make it all work, and will include relevant code samples in python, xsl, css, xml, and json. in spring , we will add the on-site locker delivery option in addition to home delivery, and this article will include new system changes made for that option.

ganch: using linked open data for georgia’s natural, cultural and historic organizations’ disaster response

in june , the atlanta university center robert w.
woodruff library received a lyrasis catalyst fund grant to support the creation of a publicly editable directory of georgia’s natural, cultural and historical organizations (nchs), allowing for quick retrieval of location and contact information for disaster response. by the end of the project, over , entries for nch organizations in georgia were compiled, updated, and uploaded to wikidata, the linked open data database from the wikimedia foundation. these entries included directory contact information and gis coordinates that appear on a map presented on the ganch project website (https://ganch.auctr.edu/), allowing emergency responders to quickly search for nchs by region and county in the event of a disaster. in this article we discuss the design principles, methods, and challenges encountered in building and implementing this tool, including the impact the tool has had on statewide disaster response after implementation.

archive this moment d.c.: a case study of participatory collecting during covid-

when the covid- pandemic brought life in washington, d.c. to a standstill in march , staff at dc public library began looking for ways to document how this historic event was affecting everyday life. recognizing the value of first-person accounts for historical research, staff launched archive this moment d.c. to preserve the story of daily life in the district during the stay-at-home order. materials were collected from public instagram and twitter posts submitted through the hashtag #archivethismomentdc. in addition to social media, creators also submitted materials using an airtable webform set up for the project and through email. over , digital files were collected. this article will discuss the planning, professional collaboration, promotion, selection, access, and lessons learned from the project; as well as the technical setup, collection strategies, and metadata requirements.
in particular, this article will include a discussion of the evolving collection scope of the project and the need for clear ethical guidelines surrounding privacy when collecting materials in real-time.

advancing arks in the historical ontology space

this paper presents the application of archival resource keys (arks) for persistent identification and resolution of concepts in historical ontologies. our use case is the library of congress subject headings (lcsh), which we have converted to the simple knowledge organization system (skos) format and will use for representing a corpus of historical encyclopedia britannica articles. we report on the steps taken to assign arks in support of the nineteenth-century knowledge project, where we are using the hive vocabulary tool to automatically assign subject metadata from both the lcsh and the contemporary lcsh faceted, topical vocabulary to enable the study of the evolution of knowledge.

considered content: a design system for equity, accessibility, and sustainability

the university of minnesota libraries developed and applied a principles-based design system to their health sciences library website. with the design system at its center, the revised site was able to achieve accessible, ethical, inclusive, sustainable, responsible, and universal design. the final site was built with elegantly accessible semantic html-focused code on drupal with highly curated and considered content, meeting and exceeding wcag . aa guidance and addressing cognitive and learning considerations through the use of plain language, templated pages for consistent page-level organization, and no hidden content. as a result, the site better supports all users regardless of their abilities, attention level, mental status, reading level, and reliability of their internet connection, all of which are especially critical now as an elevated number of people experience crises, anxieties, and depression.
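for a sense of what an ark-identified skos concept of the kind described in the arks paper above looks like, here is a minimal turtle fragment; every identifier in it (the naan 99999 and the lcsh-style ids) is a placeholder rather than one actually assigned by the project:

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

# all identifiers below are placeholders, not the project's actual ones
<https://n2t.net/ark:/99999/lcsh1234> a skos:Concept ;
    skos:prefLabel "railroads"@en ;
    skos:broader <https://n2t.net/ark:/99999/lcsh1200> ;
    skos:exactMatch <http://id.loc.gov/authorities/subjects/sh0000000> .
```

the point of the pattern is that the ark, not the vocabulary's own uri, is the persistent handle: the skos:exactMatch link ties each historical concept back to its contemporary counterpart without making resolution depend on it.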
robustifying links to combat reference rot

links to web resources frequently break, and linked content can change at unpredictable rates. these dynamics of the web are detrimental when references to web resources provide evidence or supporting information. in this paper, we highlight the significance of reference rot, provide an overview of existing techniques and their characteristics to address it, and introduce our robust links approach, including its web service and underlying api. robustifying links offers a proactive, uniform, and machine-actionable way to combat reference rot. in addition, we discuss our reasoning and approach aimed at keeping the approach functional for the long term. to showcase our approach, we have robustified all links in this article.

machine learning based chat analysis

the byu library implemented a machine learning-based tool to perform various text analysis tasks on transcripts of chat-based interactions between patrons and librarians. these text analysis tasks included estimating patron satisfaction and classifying queries into various categories such as research/reference, directional, tech/troubleshooting, policy/procedure, and others. an accuracy of % or better was achieved for each category. this paper details the implementation and explores potential applications for the text analysis tool.

always be migrating

at the university of california, los angeles, the digital library program is in the midst of a large, multi-faceted migration project. this article presents a narrative of migration and a new mindset for technology and library staff in their ever-changing infrastructure and systems. this article posits that migration from system to system should be integrated into normal activities so that it is not a singular event or major project, but so that it is a process built into the core activities of a unit.

editorial: for pandemic times such as this

a pandemic changes the world and changes libraries.
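to make the chat-classification task above concrete (the byu tool is machine-learning based; the keyword scorer and keyword lists below are invented stand-ins that only show the input/output shape of the problem), a rule-based sketch might look like:

```python
import re

# illustrative keyword sets; category names come from the article,
# the keywords themselves are invented
CATEGORIES = {
    "research/reference": {"research", "sources", "database", "articles", "cite"},
    "directional": {"where", "floor", "room", "located", "hours"},
    "tech/troubleshooting": {"error", "login", "password", "wifi", "printer"},
    "policy/procedure": {"renew", "fine", "overdue", "checkout", "policy"},
}

def classify(transcript):
    """Return the best-matching category, or "other" if nothing matches."""
    words = set(re.findall(r"[a-z]+", transcript.lower()))
    scores = {cat: len(words & keywords) for cat, keywords in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"
```

a trained model replaces the hand-written keyword sets with weights learned from labeled transcripts, but the surrounding plumbing (normalize text, score each category, pick the top score or fall back to "other") stays much the same.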
open source tools for scaling data curation at qdr

this paper describes the development of services and tools for scaling data curation services at the qualitative data repository (qdr). through a set of open-source tools, semi-automated workflows, and extensions to the dataverse platform, our team has built services for curators to efficiently and effectively publish collections of qualitatively derived data. the contributions we seek to make in this paper are as follows: (1) we describe ‘human-in-the-loop’ curation and the tools that facilitate this model at qdr; (2) we provide an in-depth discussion of the design and implementation of these tools, including applications specific to the dataverse software repository, as well as standalone archiving tools written in r; and (3) we highlight the role of providing a service layer for data discovery and accessibility of qualitative data. keywords: data curation; open-source; qualitative data

from text to map: combining named entity recognition and geographic information systems

this tutorial shows readers how to leverage the power of named entity recognition (ner) and geographic information systems (gis) to extract place names from text, geocode them, and create a public-facing map. this process is highly useful across disciplines. for example, it can be used to generate maps from historical primary sources, works of literature set in the real world, and corpora of academic scholarship. in order to lead the reader through this process, the authors work with a article sample of the covid- open research dataset challenge (cord- ) dataset. as of the date of writing, cord- includes , full-text articles with metadata. using this sample, the authors demonstrate how to extract locations from the full-text with the spacy library in python, highlight methods to clean up the extracted data with the pandas library, and finally teach the reader how to create an interactive map of the places using arcgis online.
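the extract-geocode-map pipeline can be made concrete without requiring spacy: the sketch below substitutes a tiny hand-made gazetteer for spacy's ner model (the gazetteer, its coordinates, and the sample text are all invented for illustration), and emits geojson, the kind of file that can be uploaded to arcgis online or any web map:

```python
import re

# stand-in for the spaCy NER step: spaCy tags spans like "Wuhan" as
# GPE entities; here a small gazetteer plays that role so the shape
# of the pipeline (place -> count -> geocode -> map) stays visible
GAZETTEER = {"wuhan": (30.5928, 114.3055), "milan": (45.4642, 9.1900)}

def extract_places(text):
    """Count mentions of known places in a text."""
    counts = {}
    for token in re.findall(r"[A-Za-z]+", text):
        key = token.lower()
        if key in GAZETTEER:
            counts[key] = counts.get(key, 0) + 1
    return counts

def to_geojson(counts):
    """Build a GeoJSON FeatureCollection from place counts; note that
    GeoJSON coordinates are [longitude, latitude]."""
    return {
        "type": "FeatureCollection",
        "features": [
            {"type": "Feature",
             "geometry": {"type": "Point",
                          "coordinates": [GAZETTEER[p][1], GAZETTEER[p][0]]},
             "properties": {"place": p, "mentions": n}}
            for p, n in sorted(counts.items())],
    }
```

in the tutorial's real pipeline, spacy supplies the entity spans and a geocoding service supplies the coordinates; the gazetteer here collapses both steps so the data flow fits in one screen.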
the processes and code are described in a manner that is reusable for any corpus of text.

using integrated library systems and open data to analyze library cardholders

the harrison public library in westchester county, new york operates two library buildings in harrison: the richard e. halperin memorial library building (the library’s main building, located in downtown harrison) and a west harrison branch location. as part of its latest three-year strategic plan, the library sought to use existing resources to improve understanding of its cardholders at both locations. to do so, we needed to link the circulation data in our integrated library system, evergreen, to geographic data and demographic data. we decided to build a geodemographic heatmap that incorporated all three aforementioned types of data. using evergreen, american community survey (acs) data, and google maps, we plotted each cardholder’s residence on a map, added census boundaries (called tracts) and our town’s borders to the map, and produced summary statistics for each tract detailing its demographics and the library card usage of its residents. in this article, we describe how we acquired the necessary data and built the heatmap. we also touch on how we safeguarded the data while building the heatmap, which is an internal tool available only to select authorized staff members. finally, we discuss what we learned from the heatmap and how libraries can use open data to benefit their communities.

update oclc holdings without paying additional fees: a patchwork approach

accurate oclc holdings are vital for interlibrary loan transactions. however, over time, weeding projects, replacing lost or damaged materials, and human error can leave a library with a catalog that is no longer reflected through oclc. while oclc offers reclamation services to bring poorly maintained collections up-to-date, the associated fee may be cost prohibitive for libraries with limited budgets.
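the comparison step at the heart of such a patchwork reclamation can be sketched as a set difference over oclc numbers; the `oclc_number` column name below is hypothetical, and real marcedit/excel exports may label the field differently:

```python
def holdings_diff(catalog_rows, oclc_rows, key="oclc_number"):
    """Compare OCLC numbers in a local catalog export against an OCLC
    Collection Manager report (e.g. rows from csv.DictReader).

    Returns (to_set, to_unset): numbers in the catalog but missing from
    OCLC holdings, and numbers held in OCLC but no longer in the
    catalog (weeded or lost titles).
    """
    local = {row[key].strip() for row in catalog_rows if row.get(key)}
    remote = {row[key].strip() for row in oclc_rows if row.get(key)}
    return sorted(local - remote), sorted(remote - local)
```

the two resulting lists then drive the actual updates, whether that is batch-setting holdings in connexion or feeding a delete file back through collection manager.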
this article will describe the process used at austin peay state university to identify, isolate, and update holdings using oclc collection manager queries, marcedit, excel, and python. some portions of this process are completed using basic coding; however, troubleshooting techniques will be included for those with limited previous experience.

data reuse in linked data projects: a comparison of alma and share-vde bibframe networks

this article presents an analysis of the enrichment, transformation, and clustering used by vendors casalini libri/@cult and ex libris for their respective conversions of marc data to bibframe. the analysis considers the source marc data used by alma, then the enrichment and transformation of marc data from share-vde partner libraries. the clustering of linked data into a bibframe network is a key outcome of data reuse in linked data projects and fundamental to the improvement of the discovery of library collections on the web and within search systems.

collectionbuilder-contentdm: developing a static web ‘skin’ for contentdm-based digital collections

unsatisfied with customization options for contentdm, librarians at university of idaho library have been using a modern static web approach to creating digital exhibit websites that sit in front of the digital repository. this "skin" is designed to provide users with new pathways to discover and explore collection content and context. this article describes the concepts behind the approach and how it has developed into an open source, data-driven tool called collectionbuilder-contentdm. the authors outline the design decisions and principles guiding the development of collectionbuilder, and detail how a version is used at the university of idaho library to collaboratively build digital collections and digital scholarship projects.
automated collections workflows in gobi: using python to scrape for purchase options

the nc state university libraries has developed a tool for querying gobi, our print and ebook ordering vendor platform, to automate monthly collections reports. these reports detail purchase options for missing or long-overdue items, as well as popular items with multiple holds. gobi does not offer an api, forcing staff to conduct manual title-by-title searches that previously took up to hours per month. to make this process more efficient, we wrote a python script that automates title searches and the extraction of key data (price, date of publication, binding type) from gobi. this tool can gather data for hundreds of titles in half an hour or less, freeing up time for other projects. this article will describe the process of creating this script, as well as how it finds and selects data in gobi. it will also discuss how these results are paired with nc state’s holdings data to create reports for collection managers. lastly, the article will examine obstacles that were experienced in the creation of the tool and offer recommendations for other organizations seeking to automate collections workflows.

testing remote access to e-resources with codeceptjs

at the badische landesbibliothek karlsruhe (blb) we offer a variety of e-resources with different access requirements. on the one hand, there is free access to open access material, no matter where you are. on the other hand, there are e-resources that you can only access when you are in the rooms of the blb. we also offer e-resources that you can access from anywhere, but you must have a library account for authentication to gain access. to test the functionality of these access methods, we have created a project to automatically test the entire process from searching our catalogue, selecting a hit, logging in to the provider's site and checking the results. for this we use the end-to-end testing framework codeceptjs.
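since gobi offers no api, a scraper like the one described above has to parse result pages. the stdlib sketch below shows only the extraction step, against invented markup: the css class names are placeholders, and gobi's real html differs and can change without notice.

```python
from html.parser import HTMLParser

class ResultParser(HTMLParser):
    """Pull price, publication date, and binding out of a search-result
    page. The class names ("price", "pub-date", "binding") are invented
    for illustration only."""
    FIELDS = {"price", "pub-date", "binding"}

    def __init__(self):
        super().__init__()
        self.data = {}
        self._current = None  # field whose text we are waiting for

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls in self.FIELDS:
            self._current = cls

    def handle_data(self, data):
        if self._current:
            self.data[self._current] = data.strip()
            self._current = None

def parse_result(html):
    parser = ResultParser()
    parser.feed(html)
    return parser.data
```

keeping the parsing isolated in one small function like this also makes the inevitable breakage easier to fix: when the vendor's markup changes, only the field-to-selector mapping has to be updated, not the reporting logic that pairs results with holdings data.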
events – data science by design

overview of dsxd events

dsxd creator conference may & , - pacific, - eastern apply to attend: here. applications now closed.

about

do you have a data story to tell but need help broadening your audience? do you wish you had some time and support to exercise the creative side of your data-loving brain? on may and th, we are hosting a conference spread over two half days ( hours each day), where we will come together to get over that activation energy to start (or finish) creative data-related projects. we’ll learn about others’ creative processes and storytelling techniques and work through design exercises that will push us to get creating. we’ll hear from expert storytellers, creators, and designers about how they: brainstorm and find inspiration, get started bringing an idea to life, continue to make progress and refine an idea, pitch ideas to get buy-in from others, and share their final products with a broad audience. as we learn, we’ll be coming up with our own ideas and thinking about what resources and support we need from this creator community to bring them to fruition. and we’ll be doodling all the while. this conference is part of our data science by design (dsxd) initiative aimed to bring together data enthusiasts of all kinds to use creative mediums to communicate data-related work and establish new collaborations across domains. the creator conference’s role is to kick off the eventual creation of personal essays, drawings, explainers, or how-to guides on research best practices, findings, methodology, or even work culture.

featuring

joining us in the sessions will be researchers, educators, artists, computer scientists, and more! below is a sneak peek into the amazing crew of organizers, panelists, session leads, and speakers who will be joining us this year. full schedule here.
ijeamaka anyene (she/her) data analyst and data viz
rebecca barter (she/her) researcher, data scientist, and data viz
anna e. cook (she/her) accessibility designer
erin davis (she/her) data visualization and analysis
frank elavsky (he/him) accessibility and data experience engineer
julia evans (she/her) programmer and maker
sharla gelfand (they/them) data wrangler, developer, and data viz
allison horst (she/her) professor, artist, and researcher
patricia kambitsch (she/her) visual storyteller and artist
kevin koy (he/him) human-centered design x data science at ideo
sean kross (he/him) researcher, educator, and developer
giorgia lupi (she/her) information designer and partner at pentagram
ciera martinez (she/her) researcher, biologist, and data scientist
sarah mirk (she/her) writer, editor, and good ideas
grishma rao (she/her) artist, writer, emerging technology designer
tara robertson (she/her) equity and inclusion advocate and rogue librarian
sara stoudt (she/her) professor, stats communicator
chris walker (he/him) data x design
valeri vasquez (she/her) researcher and data scientist
peter winter (he/him) data as a creative medium for design
shirley wu (she/her) data visualization designer, developer, and author

we want you to attend!

maybe you identify as a student, an educator, a researcher, a designer, an artist, an analyst, an engineer, or as something else entirely. all career stages - everyone is welcome! as organizers of data science by design, we are committed to fostering a supportive community among participants. one of our priorities for this event is to increase the number of people who see themselves in data-related fields. therefore, we strongly encourage applications from women and other underrepresented genders, people of color, people who are lgbtq, people with disabilities or any other underrepresented minorities in data-related fields. to ensure an inclusive experience for everyone who participates, we will follow a code of conduct.
apply to attend

apply here: https://forms.gle/keclxc zaclyt js

apply by may to ensure your application is reviewed. we want attendees to come with ideas. each applicant is asked to submit a short application that responds to at least two of the following. the application can feature creative mediums (e.g., a page of illustrations) but we ask that you do write out in english (1) your full name, (2) your email address, and (3) what items below you are focusing on:

pitch us. what project are you already working on or would you love to work on, given the time and resources, at the intersection of data science and the creative arts? see examples below!

share with us. are there two examples of work at the intersection of data science and the creative arts that inspire/motivate/excite you? please include links or references, where applicable.

think with us. what would your goals be coming out of this experience? what are the contributions you feel you’re best placed to make to enrich the experience of other participants?

application help: project examples and scope

the scope for the creator conf application includes anything relevant to communicating about data or working with data (e.g., best practices) using creative mediums. we are looking for people who want to work on projects that they are passionate about, including (but definitely not limited to!) data visualizations. below are some examples of project ideas that are within scope:

format examples
short essays or stories
interviews
websites
tutorials
data story/presentations (data viz, sonification, sculpture, etc.)
zine
software library or package
or something else!
topic examples
exploration of a dataset
how to create community in data, computer science or related fields
personal narratives about working in or wanting to work in data, computer science or related fields
data science workflows
concepts explained/taught
diversity and inclusion
computing challenges
algorithmic bias
accessibility
burn out
best practices
or something else!

questions? contact us at datasciencebydesign@gmail.com

schedule (sessions for thursday, may and friday, may; times in pacific)

: am [all zones] welcome to dsxd creator conf! dsxd team
: am [all zones] day opener dsxd team
: am [all zones] future of data science discussion with live sketching. how do we envision the field of data science moving forward? in this conversation we invite everyone to share what we want more of (or less of!) in the field of data and computational sciences. featuring live sketching from artist patricia kambitsch, with advice on how she approaches effective and creative notetaking. ciera martinez and patricia kambitsch
: am [all zones] coffee and collabs. grab a coffee and (1) introduce yourself, (2) talk about your dream project, and (3) get help and maybe even find some summer collaborators! if you don't want to chat on camera go to the #projects slack channel and post / react to others' projects. dsxd team
: am [all zones] break
: am [all zones] break
: am [all zones] visual storytellers panel. conversation on the practice and process of communicating complex information visually.
moderator: sharla gelfand; panelists: julia evans, giorgia lupi, and shirley wu
: am [all zones] the pitching process (w/ activity). going from “thinking about thinking about it” to “thinking about it” to actually “doing it”. speakers: erin davis; lead: sara stoudt
: am [all zones] break
: pm [all zones] break
: pm [all zones] creativity to learn, creativity to teach (w/ activity). how can we engage students (and ourselves) to get a broader audience excited about data tools and techniques? leads: sean kross, allison horst, and sara stoudt
: pm [all zones] self publishing and the power of zines (w/ activity). you don't need permission to publish. learn about an old-school approach to open source information sharing: zines! we'll learn how to make a one-page zine, admire zines from around the world, and do a short creative exercise to each create a data-driven zine. lead: sara mirk
: pm [all zones] break
: pm [all zones] break
: pm [all zones] designing for accessibility and inclusivity. short talks and discussion on how to instill inclusion and accessibility practices into how we work. lead: valeri vasquez; speakers: anna cook, frank elavsky, and tara robertson
: pm [all zones] dsxd summer to create. next steps, future projects, and funding opportunities with dsxd. dsxd team
: pm [all zones] day closing dsxd team

free range librarian

k.g.
schneider's blog on librarianship, writing, and everything else

(dis)association

i have been reflecting on the future of a national association i belong to that has struggled with relevancy and with closing the distance between itself and its members, has distinct factions that differ on fundamental matters of values, faces declining national and chapter membership, needs to catch up on the technology curve, has sometimes […]

i have measured out my life in doodle polls

you know that song? the one you really liked the first time you heard it? and even the fifth or fifteenth? but now your skin crawls when you hear it? that’s me and doodle. in the last three months i have filled out at least a dozen doodle polls for various meetings outside my organization. […]

memento dmv

this morning i spent minutes in the appointment line at the santa rosa dmv to get my license renewed and converted to real id, but was told i was “too early” to renew my license, which expires in september, so i have to return after i receive my renewal notice. i could have converted […]

an old-skool blog post

i get up early these days and get stuff done – banking and other elder-care tasks for my mother, leftover work from the previous day, association or service work. a lot of this is writing, but it’s not writing. i have a half-dozen unfinished blog posts in wordpress, and even more in my mind. i […]

keeping council

editorial note: over half of this post was composed in july . at the time, this post could have been seen as politically neutral (where ala is the political landscape i’m referring to) but tilted toward change and reform. since then, events have transpired. i revised this post in november, but at the time hesitated […]

what burns away

we are among the lucky ones. we did not lose our home. we did not spend day after day evacuated, waiting to learn the fate of where we live. we never lost power or internet.
we had three or four days where we were mildly inconvenienced because pg&e wisely turned off gas to many neighborhoods, […]

neutrality is anything but

“we watch people dragged away and sucker-punched at rallies as they clumsily try to be an early-warning system for what they fear lies ahead.” – unwittingly prophetic me, march, . sometime after last november, i realized something very strange was happening with my clothes. my slacks had suddenly shrunk, even if i hadn’t washed them. after […]

mpow in the here and now

i have coined a few biblioneologisms in my day, but the one that has had the longest legs is mpow (my place of work), a convenient, mildly-masking shorthand for one’s institution. for the last four years i haven’t had the bandwidth to coin neologisms, let alone write about mpow*. this silence could be misconstrued. i […]

questions i have been asked about doctoral programs

about six months ago i was visiting another institution when someone said to me, “oh, i used to read your blog, back in the day.” ah yes, back in the day, that pleistocene era when i wasn’t working on a phd while holding down a big job and dealing with the rest of life’s shenanigans. […]

a scholar’s pool of tears, part : the pre in preprint means not done yet

note, for two more days, january and , you (as in all of you) have free access to my article, to be real: antecedents and consequences of sexual identity disclosure by academic library directors. then it drops behind a paywall and sits there for a year. when i wrote part of this blog […]

theodore dreiser papers, circa - (bulk dates - )

university of pennsylvania finding aids
theodore dreiser papers ms. coll.

this is a finding aid. it is a description of archival material held at the university of pennsylvania. unless otherwise noted, the materials described below are physically available in our reading room, and not digitally available through the web.

summary information

repository: university of pennsylvania: kislak center for special collections, rare books and manuscripts
creator: dreiser, theodore, -
title: theodore dreiser papers
date: circa - (bulk dates - )
call number: ms. coll.
extent: linear feet ( boxes)
language: english
abstract: contains series, including correspondence ( boxes); legal matters ( boxes); writings ( boxes), comprising books, essays, short stories, poems, plays, screenplays, radio scripts, addresses, lectures, interviews, introductions, and prefaces; journals edited by dreiser ( boxes); notes ( boxes); diaries ( boxes); biographical material ( box); memorabilia ( boxes), comprising scrapbooks, photographs (many of which are available online), art work, promotional material, postcards, and miscellanea; financial records ( boxes); clippings ( boxes); works by others ( boxes); and oversize materials ( boxes). also includes materials regarding various family members: brother paul dresser ( boxes of correspondence, sheet music and lyric sheets, clippings and memorabilia, and two plays written by dresser); second wife helen dreiser ( boxes of diaries and other writings); and niece vera dreiser ( boxes of correspondence).
cite as: theodore dreiser papers, kislak center for special collections, rare books and manuscripts, university of pennsylvania
finding aid's permanent url: http://hdl.library.upenn.edu/ /d/ead/upenn_rbml_mscoll

biography/history

during the congress on literature at the chicago world's fair of , hamlin garland expressed america's need for a new kind of literature. garland called this new literature "veritism" and "local color"—something authentically american rather than derivative of europe. at the same time, twenty-two-year-old theodore dreiser was in chicago covering the world's fair as a reporter for the st. louis republic. although dreiser did not attend the congress on literature, he was to play a principal role in the fulfillment of garland's dream for american literature in the decades that followed. (herman) theodore dreiser ( - ) was born in terre haute, indiana on august .
he was a sickly child, the ninth in a family of ten surviving children (three older boys had died in infancy). theodore's mother, sarah maria schänäb, of czech ancestry, was reared in the mennonite faith on a farm near dayton, ohio. his father, john paul dreiser, was a german immigrant, who left mayen in at the age of twenty-three to avoid conscription. he eventually traveled to america to follow his trade as a weaver, ending up at a mill in dayton, ohio, where he met the then seventeen-year-old sarah. john paul dreiser was a devout catholic, sarah schänäb, somewhat protestant and decidedly pagan in her approach to the world—she was extremely superstitious and romantic. the couple ran off together and married in , sarah not quite eighteen, john paul then twenty-nine. sarah was immediately disowned by her family, militant anti-catholics. the couple settled first in fort wayne, indiana and then in terre haute, where john paul became quite successful in the woolen business. there were six children in the family in when the dreisers moved to sullivan, indiana and john paul borrowed significantly in the hopes of becoming an independent wool manufacturer. these hopes were destroyed in when his factory burned to the ground. john paul was injured severely by falling timber as he tried to save his dream. by the time he recovered and moved his family back to terre haute, the dreisers were deep in debt, for john paul insisted on paying back every dollar that he owed. discouraged to the point of despair, he abandoned his career and became obsessed with religion and the salvation of his family. when theodore dreiser was born in , his family was settled firmly in the depths of poverty. there were eight older siblings: paul, marcus romanus (known as rome), mary frances (mame), emma, theresa, sylvia, al, and claire. younger brother ed would follow two years later. dreiser's father was only sporadically employed. 
the older children were out of the home, picking up what work they could, mostly getting into trouble. the family had a reputation in terre haute for being behind in their bills with wild sons and flirty daughters. each morning they knelt around the father as he asked for a blessing for the day, and there was a similar blessing each night. despite these prayers and stern punishments at the hand of john paul, it was too late. the older boys ran away from home; the older girls were involved in affairs. the dreiser family was out of control, abetted by sarah's leniency toward her children. young theodore dreiser grew up in this environment of uncertainty. he often went to bed hungry. there was no money for coal, and theodore would go with his older brother al to pick some up along the tracks of the railroad. his mother took in washing and worked at scrubbing and cleaning. always sensitive, theodore was humiliated to wear ragged clothing and to sneak coal from the tracks. he stuttered; he cried easily; he was a homely child, with protruding teeth and a cast in one eye. thin, pale, bullied by other boys, he spent his days alone for the most part. yet dreiser was also intensely curious about life, watching sunrises, observing birds in flight, exploring the indiana countryside. he hated his father's world of censored joy and authority and loved his mother's romantic dreams. dreiser realized that his family was poor and that they were looked down upon; he dreamed of having a home like those of the wealthy on wabash avenue, of having money and fine clothing. within theodore dreiser's harsh world of poverty there was always a contrasting element of the fantastic. first it was his mother's world of fancy—the family constantly moved at her whim, for she was always certain that something better was just over the horizon. as he grew older, the world of the wealthy town became his fantasy. then there was the fantastic success of his oldest brother, paul dreiser. 
paul had left home, joined a minstrel troupe, and achieved much success with his musical talents. writing, singing, and performing in minstrel shows, he even changed his name to paul dresser, which he felt would be more memorable to his public. when theodore was twelve he moved with his mother to chicago where his older sisters had secured an apartment. again there was the fantastic contrast of his old life in a small indiana town to the city, with its size, its activity, and its color. the ways of the city would continue to fascinate dreiser throughout his life. when the venture in chicago failed, theodore's mother moved him to warsaw, indiana, near where she had some land that had been left to her by her father. it was in warsaw that theodore first attended a non-catholic school. instead of the fear and trepidation of his earlier education, he found encouragement, first in the person of twenty-one-year-old may calvert, his seventh grade teacher. miss calvert took an interest in theodore, encouraging him to use the local library and his imagination. she remained his life-long friend and confidant. at the age of seventeen, in a hardware store in chicago where theodore had found work, he met up with a former teacher, mildred fielding, now principal of a chicago high school. miss fielding had seen promise in him as well, thought him deserving, and wanted to send him to indiana university at her own expense. in the fall of dreiser arrived at the bloomington campus. dreiser spent only a year at indiana university. the experience showed him a world of possibilities, but he felt socially outcast and unsuccessful and was not really stimulated by any of his courses. theodore returned home, now almost nineteen years old, and found a job in a real estate office. he enjoyed some success in this field and gained a bit of confidence. that fall, however, his mother became ill. on november , theodore came home for lunch to find her in bed.
as he helped her sit up, she went limp: sarah dreiser died in her son's arms at the age of fifty-seven. theodore, always his mother's favorite because he was so slight and sensitive, felt alone in the world. the dreiser family, only held together at this point by sarah's love for all, fell irreparably apart. theodore drifted into one job after another: driver for a laundry; collector for a furniture store. while these jobs provided him with an income, none allowed for the expression of ambition and artistic ability that he felt within. in his memoirs dreiser stated that it occurred to him at that time that newspaper reporters were men of importance and dignity, who by dint of interviewing the great were perceived as their equals. it was now and theodore had returned to chicago, which was preparing for the upcoming world's fair and the democratic national convention. dreiser was curious enough about these events to write his own news stories about them, finding his to be as good as those published in the papers. in june of —after much determined footwork on his part—theodore dreiser landed a job on the chicago globe. dreiser's intense curiosity about life was well-suited to work as an investigative journalist. in chicago and later, in when he went to st. louis to work for the globe-democrat and the  republic, dreiser became known for his human interest pieces and "on-the-scene" reporting style: his articles were written in a manner that put the reader at the tragedy of a local fire or the action of a public debate. it was at the republic in that dreiser was given the job of escorting twenty female st. louis school teachers to the chicago world's fair and to write about their activities on the journey. one of these was sara osborne white, twenty-four and two years older than dreiser. she came from montgomery city, seventy-five miles west of st. louis.
dreiser fell in love with her figure, dark eyes, and thick red hair (it was this last feature which led her friends and family to call her by the nickname "jug," for her hair was so thick around her face that it was said to resemble a red jug). dreiser, desiring her and aching for a chance to fulfill his always pressing sexual needs, took little time to propose. dreiser, however, was also driven by a desire for fame. his brother paul showed up in st. louis, and his talk of new york was alluring. theodore was ready for a change. a young reporter friend on the republic told him of a country weekly in his home town of grand rapids, ohio, which could be purchased for very little. dreiser thought that he could have great success on his own. in , with promises to send for jug soon, dreiser boarded a train for ohio. he arrived to find that the paper was small, with a subscribership of less than five hundred. the office was a shambles. there wasn't enough to it even to attempt to make a go of it, dreiser thought. he moved on to toledo, where he asked for a job from the city editor of the toledo blade, twenty-six-year-old arthur henry. the two men got along quite well, and henry found a few reporting assignments for dreiser. henry was an aspiring poet and novelist; dreiser was aspiring to be a playwright. the men spent hours in talk about their literary dreams. unfortunately, no permanent opening materialized at the  blade, and dreiser moved on to cleveland to look for work. after doing some feature work for the  leader, he moved to pittsburgh in the same year, where he immersed himself in research and articles concerning labor disputes that had culminated in the great strike of at homestead. from there he went to new york and received a job at pulitzer's paper,  the world, which was leading the fight in the yellow journalism war against hearst's  journal.
he covered a streetcar strike in brooklyn by actually going out and riding the rails during the strike to see angry workers confronting scab drivers. he later incorporated these impressions into his first novel,  sister carrie. dreiser was drawn to the contrasts between the wealthy and the poverty stricken in new york. he quit his job at the world after only a few months, because he wasn't being allowed to produce the type of human interest stories that he thought should be told. he then lived, partly by choice and partly by necessity, on the streets of new york, where he took in the life of the downcast. at last he turned up at the new york offices of howley, haviland & company, the music publishing firm run by his brother paul and associates. he proposed to the men the idea of selling a magazine of popular songs, stories, and pictures. he would edit the magazine and it would help sell the company's songs. thus, in dreiser became "editor-arranger" for  ev'ry month, "the woman's magazine of literature and music." in addition to writing his own "reflections" column for each issue—in which he set forth his philosophies on such varied topics as the possibility of life on mars, working conditions in the sweat shops, yellow journalism, and the plight of new york's poor—dreiser also solicited syndicated stories by the better known american writers of his day, such as stephen crane and bret harte. after ev'ry month turned into a losing venture in , dreiser freelanced articles for various magazines. he was one of the original contributors to  success magazine, for which he interviewed the successful men of his time: andrew carnegie, marshall field, philip d. armour, thomas edison, and robert todd lincoln. 
as the twentieth century approached, dreiser wrote articles on the advances of technology, with titles like "the horseless age" and "the harlem river speedway" for some of the most popular magazines of the day, such as  leslie's,  munsey's,  ainslee's,  metropolitan,  cosmopolitan, and  demorest's. he compiled the first article ever written about alfred stieglitz, who seemed to combine in one person dreiser's interests in art and technology. this writing stood him in good stead financially. he now could afford to marry jug, a marriage that, in spite of second thoughts on his part, he undertook in a very small ceremony in washington, d.c., on december . the dreisers took up residence in new york, but in the summer of , at the request of arthur henry, made an extended visit to ohio. henry thought that it was time for dreiser to work on his fiction. together the two men spent the summer churning out articles and splitting the money that they earned fifty-fifty, thus giving each the time to work on his literary endeavors. it was here that dreiser began sister carrie. at the same time he became interested in the plight of workers in the south. he did a series of special articles for  pearson's magazine, which included investigations of a "model farm" in south carolina, delaware's "blue laws," and georgia's "chain gangs." all three dealt with society's punishment of those who transgressed, a theme that dreiser would investigate thoroughly in his novels. in addition, dreiser wrote six special articles on the inventor elmer gates, who had invested the money gained from his inventions in a facility for psychological research: it was called the elmer gates laboratory of psychology and psychurgy. gates's studies of learning, perception, the physiological effects of the emotions, and the will underlay the ways in which dreiser shaped hurstwood's actions in  sister carrie.
journalism remained a steady source of income for dreiser throughout his life and supported his literary endeavors—he became a top editor for butterick's delineator in , a silent publisher of the  bohemian in , and in the s an editor of  the american spectator. the events that led up to the publication of  sister carrie in , however, began a new phase in dreiser's career—that of the heavily-edited novelist. before the book was published, dreiser was forced to change all names that could be attached to any existing firms or corporations. all "swearing" was to be removed. frank doubleday demanded that the novel have a more romantic title, and on the original contract the work bears the name "the flesh and the spirit," with dreiser's "sister carrie" penciled in beside it. editing was performed even after dreiser returned the author's proofs to doubleday, page & co. when frank doubleday read the final draft (after, by the way, page had already signed the contract with dreiser), he pronounced the book "immoral" and "badly written" and wanted to back out of its publication. dreiser held doubleday, page to its word, however, and  sister carrie was printed; but only , copies rolled off the presses, and of these remained unbound. it was not listed in the doubleday, page catalogue. the firm refused to advertise the work in any way. a london edition of  sister carrie (published in ), however, did well and was favorably reviewed. the  london daily mail said: "at last a really strong novel has come from america." dreiser would spend his entire literary career struggling with editors, publishers, and various political agencies, all of whom desired to make his works "suitable for the public." 
although dreiser began his second novel, jennie gerhardt ( ), upon completion of  sister carrie, his intense dissatisfaction with the changes and complaints that the publishers had made, combined with the treatment that  sister carrie was receiving, caused him to lose his health and delayed completion of  jennie gerhardt for nearly ten years. in dreiser, along with h. l. mencken, fought against the new york society for the suppression of vice when its president, john sumner, forced withdrawal of  the "genius" (published in ) from bookstore shelves. the fight dragged on through , and  the "genius" remained in storerooms until , when it was re-issued by horace liveright. in liveright was to become involved in dreiser's biggest battle for freedom of literary expression, when dreiser's an american tragedy ( ), the story of the chester gillette-grace brown murder case, was banned in boston. clarence darrow was a witness for the defense. the case lingered in the courts, at great expense to both dreiser and the liveright firm. between beginning the writing of the "genius" and publishing  an american tragedy, dreiser was prolific. he published the first two novels in his cowperwood trilogy,  the financier ( ) and  the titan ( ); a book of travel articles entitled  a traveler at forty ( ); a collection of plays,  plays of the natural and supernatural ( ); and a travelogue of his experiences on a car trip through his home state of indiana,  a hoosier holiday ( ). these were followed with  free and other stories in ;  twelve men in ;  the hand of the potter (a tragedy in four acts) also in ;  hey rub-a-dub-dub in ;  a book about myself, ; and  the color of a great city in . in the meantime, dreiser was beginning a third phase in his career, champion of freedom in all aspects of life. he made his first trip to europe in , and in london he picked up a prostitute and cross-examined her about life. he visited the house of commons and was sickened by the slums of the east end.
this experience, combined with a seeming inferiority complex on his part at the self-assurance apparently inborn in the british, caused dreiser to develop a life-long hatred of the british and may have had something to do with his sympathy for germany during world war i. back home in the united states he tried to organize a society to subsidize art and championed the causes of oppressed artists like himself. after the publication of an american tragedy, dreiser was more highly sought after by political organizations than before. in , while visiting europe, he commented on the events occurring in germany: "can one indict an entire people?" the answer, he felt, was yes. in dreiser was invited to the u.s.s.r. by the soviet government. the soviets thought that dreiser's opinion of their nation would have weight in america and that he would be favorable to their system of government (dreiser's books sold well in the soviet union). during the visit dreiser met with soviet heads of state, russian literary critics, movie directors, and even bill haywood, former american labor leader. dreiser kept extensive journals of the trip. he approved of the divorce of religion from the state, praised new schools and hospitals, but was repelled by the condition of hundreds of stray children scattered about the country. in dreiser visited london, where he met with winston churchill, with whom he discussed russia's social and military importance. he also took time to criticize the working conditions of mill workers in england. dreiser escalated these political involvements throughout his life. he helped bring former hungarian premier count michael károlyi to the united states after the communist takeover in . during the s he addressed protest rallies on behalf of tom mooney, whom he visited in san quentin, where mooney was serving a term for his alleged participation in a bombing incident in san francisco.
dreiser met with sir rabindranath tagore in to discuss the success of the soviet government and the hopes of india. in dreiser cooperated with the international labor defense organization and took an active part in the social reform program of the american writers' league, of which he would later become president. in , as chairman of the national committee for the defense of political prisoners, dreiser organized a special committee to infiltrate kentucky's harlan coal mines to investigate allegations of crimes and abuses against striking miners. dreiser's life was threatened for calling attention to the matter. dreiser, john dos passos, and others on the "dreiser committee," as it was called, were indicted by the bell county grand jury for criminal syndicalism, and a warrant was issued for dreiser's arrest. franklin d. roosevelt, governor of new york at the time, said he would grant dreiser an open hearing, and john w. davis agreed to defend the committee. due to widespread publicity and public sentiment, however, all formal charges against dreiser and the committee were dropped. dreiser became even more involved with social reform after this incident. in he met with members of the communist party in the united states. dreiser criticized the u. s. communist party for being too disorganized. that year he was invited to write for a new literary magazine that would be free of advertising, the american spectator. dreiser became and remained associate editor of the paper until other editors agreed to accept advertising, at which point he resigned. in dreiser attended an international peace conference in paris, because he was interested in the outcome of the spanish civil war. when he returned from europe, he visited with president roosevelt to discuss the problem and to try to influence him to send aid to spain. in dreiser again traveled to washington, d.c. and to new york to lecture for the committee for soviet friendship and american peace mobilization.
he published pamphlets at his own expense and delivered radio addresses. he published  america is worth saving, a work concerning economics and intended to convince americans to avoid involvement in world war ii. in , just before his death, dreiser joined the communist party to signify his protest against america's involvement in the war. during these years, dreiser was still publishing—articles, poems, pamphlets, leaflets, and novels. in he brought out an edition of poetry, moods: cadenced and declaimed.  chains followed in , a book of short stories and "lesser novels." other works include:  dreiser looks at russia ( );  the carnegie works at pittsburgh ( );  a gallery of women ( );  my city ( );  fine furniture ( );  dawn ( );  tragic america ( ); and  america is worth saving ( ). in addition, dreiser was working on several things at the time of his death, some of which were published posthumously:  the bulwark ( );  the stoic ( ); and a philosophical and scientific treatise that would later be edited and published by marguerite tjader and john j. mcaleer and titled  notes on life ( ). there were many sides to theodore dreiser, beyond his literary and political efforts. he was greatly interested in scientific research and development; he collected a great many books and much information on the latest scientific concerns. in he met jacques loeb of the rockefeller institute and visited the marine biological laboratory in woods hole, massachusetts. later visits to the mt. wilson observatory in california and the california institute of technology would impress him greatly. he had a longstanding correspondence with dr. a. a. brill, psychologist, who was largely responsible for introducing jungian and freudian analysis to new york.
he also championed the works of charles fort, a "free-thinker" who was determined to establish that science was "unscientific" and that his own vision of the universe as a place where "anything could happen and did" (swanberg, ) was the correct one. dreiser was particularly fascinated with genetics, which he felt explored the true "mysteries of life." in , he attended the century of progress exposition in chicago, specifically with the intent of working on a number of scientific essays, which he continued to compile over his lifetime (and which would later find their way into notes on life). another area of special interest for dreiser was philosophy, a subject that he explored in great detail and about which he collected and wrote extensively. his tastes ranged from spencer to loeb and from social darwinism to marxism. his published and unpublished writings indicate that dreiser drew heavily on such philosophers and philosophies to confirm his own views of the nature of man and life. no biography of theodore dreiser would be complete, however, if it did not touch upon his personal life: as one friend put it, it is hard to understand how dreiser could be so concerned about humanity and at the same time so utterly cruel to an individual human being. his marriage to sara osborne white was on shaky ground from the start: he never seemed able to devote himself to one woman. as sara herself put it: "all his life [theo] has had an uncontrollable urge when near a woman to lay his hand upon her and stroke her or otherwise come into contact with her" (swanberg, ). the two separated in , with sara returning to missouri for a time (she would later move to new york on her own) and dreiser moving on to other women. in , helen patges richardson, a second cousin to dreiser (her grandmother and dreiser's mother were sisters), showed up at his doorstep, making the long journey from her home state of oregon to meet her new york cousins. 
she would become dreiser's companion for the rest of his life; they eventually married in . their relationship was stormy at best: dreiser never changing his ways with regard to other women, helen persisting—perhaps beyond all reason—in her devotion to his genius. as she phrased it: "he expected his complete freedom, in which he could indulge to the fullest, at the same time expecting my undivided devotion to him" (swanberg, ). in november helen had the first of several strokes that would eventually incapacitate her; she moved to oregon to live with her sister, myrtle butcher, and died in . in addition to his infidelities with regard to women, dreiser's professional relationships were periodically marred by scandal. he was in the habit of lifting material directly from sources and including it, for the most part, unchanged in his works. many readers of an american tragedy, for example, who lived in the herkimer county area (where the chester gillette-grace brown incident had occurred), wrote to dreiser concerned that his book contained sentences lifted directly from court documents or local newspapers. in it was announced by a knowing reader that dreiser's poem "the beautiful," published in the october issue of  vanity fair, was a plagiarism of sherwood anderson's poem "tandy." since dreiser and anderson were friends, the incident blew over rather quickly. such was not the case, however, in , when dorothy thompson accused dreiser of plagiarizing her serialized newspaper articles regarding her trip to russia (she and dreiser had been there together) in his book dreiser looks at russia (ms. thompson had published these articles in her own collected work,  the new russia, two months prior to dreiser's publication). ms. thompson filed suit against dreiser, and the press took dreiser to task on this and earlier cribs. although dorothy thompson eventually dropped her suit, it colored the opinion of some of dreiser's colleagues towards his works.
it also led to another ugly incident in , when at a dinner at the metropolitan club honoring the visiting russian novelist boris pilnyak, sinclair lewis (dorothy thompson's husband and that year's winner of the nobel prize in literature) stood up to speak to the gathered literary notables and, after stating his pleasure at meeting mr. pilnyak, added: "but i do not care to speak in the presence of one man who has plagiarized , words from my wife's book on russia" (swanberg, ). at the end of the reception that followed, dreiser walked over to lewis and demanded an explanation. lewis repeated his accusation, at which point dreiser slapped his face. lewis, undaunted, repeated the accusation a third time and received a second slap. again, the incident was widely publicized in the papers and fueled an aversion on the part of many for dreiser's private self. yet despite his personal and public scandals, dreiser's achievements in establishing a truly american literature and his one-man crusade for social justice set standards for those of his time and those who would follow. sherwood anderson, john dos passos, james t. farrell, edgar lee masters, h. l. mencken, upton sinclair—these and many others—acknowledged publicly or privately a debt owed to the example of dreiser. in a final tribute to dreiser, upon his death in , h. l. mencken wrote: … no other american of his generation left so wide and handsome a mark upon the national letters. american writing, before and after his time, differed almost as much as biology before and after darwin. he was a man of large originality, of profound feeling, and of unshakeable courage. all of us who write are better off because he lived, worked and hoped. (swanberg, ) scope and contents the theodore dreiser collection at the university of pennsylvania library is the principal repository for books and documents concerning dreiser's personal and literary life.
the collection at large includes dreiser's own library and comprehensive holdings in both american and foreign editions of his writings, as well as secondary works. at the heart of the collection, however, are the theodore dreiser papers. they comprise boxes and include correspondence; manuscripts of published and unpublished writings; notes; diaries; journals edited by dreiser; biographical material; memorabilia, including scrapbooks, photographs, postcards, promotional material, art, and personal possessions; financial and legal records; clippings covering dreiser's literary life, beginning with his career as a newspaper reporter in the s; and microfilms of material housed in this and other collections. also contained in the papers are correspondence, works, and memorabilia of dreiser's brother, paul dresser; his second wife, helen patges (richardson) dreiser; and his niece, vera dreiser scott. finally, the papers include works of fiction, nonfiction, and poetry that were sent to dreiser, as well as works that were written about him. although the papers contain documents dated as early as and as late as , the bulk of the materials falls between the years and . dreiser's initial bequest of materials to the university of pennsylvania occurred in ; shipments continued until , the last following helen dreiser's death. gifts and purchases have enriched penn's dreiser collection, including the papers, to such an extent that little of significance regarding dreiser's life and work is unavailable to the researcher working at penn. it is no accident that the university of pennsylvania became the home for theodore dreiser's papers. historically, the study of american literature was undervalued by english literature departments, which often exhibited a provincial subservience to english letters.[ ] at the university of pennsylvania, however, pioneers like arthur hobson quinn began teaching courses in the american novel in and in american drama in . dr. 
quinn believed that one reason for the neglect of american writing in colleges was that "the literature had been approached as though it were in a vacuum, divorced from unique historical and economic conditions which had produced it." [ ] emphasizing the necessity for an historical approach to the subject, he was instrumental in the adoption in of a curriculum in american studies by the graduate school of the university of pennsylvania and in by the undergraduate school. other penn faculty, such as e. sculley bradley and robert spiller, shared dr. quinn's devotion to and assessment of american studies. they actively sought to acquire the research materials that they deemed essential to an historical approach. in the late s, robert elias, a graduate student in the english department at penn, sought out dreiser in order to use dreiser's papers for his doctoral dissertation. penn faculty then approached dreiser about depositing his collection with the university. dreiser was aware of his place in the evolution of american literature and of the value of his papers to scholars and collectors. his first literary bequest was the manuscript of sister carrie, which was a gift to his friend h. l. mencken. dreiser and mencken often discussed the final disposition of their papers and agreed that settling on one institution for an entire collection was better than dividing it among several. unfortunately, during periods of financial insecurity throughout his lifetime, dreiser offered various pieces of his literary legacy to collectors or auctioneers in return for ready cash. some of the manuscripts that were sold have found their way back to his own collection at penn through donations or purchases, but writings not accounted for here or in other collections are presumed to be in private hands or lost. it is unlikely that dreiser himself destroyed them, although others close to him may have done so to protect their privacy.
he blamed his first wife, sara white dreiser, for the destruction of the first manuscript of the "genius" and it is known that she and her relatives destroyed some of his letters to her and bowdlerized others that are held by the university of indiana. although the university of pennsylvania has the largest and most comprehensive collection of dreiser's papers, there are some gaps in its coverage. over the years, penn has acquired photocopies and microfilms of some holdings from other collections, which are mentioned either in the container list or in an appendix. a study of the series description and the container list confirms that, with few exceptions, even those writing projects for which gaps exist are represented by enough material to give the researcher a sense of dreiser's plan for the work and its evolution as he worked it out from manuscript to publication. an annotated list of institutions with significant holdings on dreiser can be found in theodore dreiser: a primary bibliography and reference guide ( nd ed.), by donald pizer, richard w. dowell, and frederic e. rusch (boston: g. k. hall & co., ). dreiser was a prolific writer and correspondent and one who saved almost everything he wrote, from the initial notes for a piece of writing to the discarded pages from revised manuscripts. in addition to preserving his manuscripts, dreiser saved incoming personal and business correspondence and made carbons of outgoing correspondence, especially after he began to have regular secretarial help in the s. he was a compulsive rewriter of his own work and enlisted the aid of friends, associates, and professional editors in the work of revision. after a manuscript was transformed into a typescript, carbons of it were often circulated among his associates for their editorial suggestions.
many of these copies, in addition to the drafts dreiser revised himself, are housed in this collection, so it is possible to determine some of the influences on dreiser's work and to better understand the way dreiser carried out the process of writing. correspondence is arranged alphabetically by correspondent and then chronologically within each correspondent's file. items of incoming and outgoing correspondence are interfiled. care should be taken by researchers not to remove or misplace the white interleaving sheets found in many folders; this paper is acting as a barrier to keep carbons of outgoing correspondence from acid-staining original letters housed next to them. unidentified correspondence is housed immediately after the alphabetical correspondence files. following the "unidentified correspondence" are two additional series of correspondence, one entitled  "miscellaneous correspondence," the other  "legal matters."  "miscellaneous correspondence" comprises two case files, one of materials relating to or collected by estelle kubitz williams, the other of correspondence relating to exhibitions or the collecting of dreiser's works by the los angeles public library.  "legal matters" consists of six distinct files pertaining to various legal matters involving dreiser. the governing criterion for separating correspondence from the alphabetical correspondence file was whether the material in a file was collected primarily by theodore or helen dreiser or by someone else. this rule explains why two other series, entitled  "paul dresser materials" and  "vera dreiser correspondence" have been separated from the alphabetical correspondence files and housed later in the collection under the general title  "family members."
(it should be noted that, while  "paul dresser materials" contains a large addition of materials from outside sources, many items in it were indeed collected by theodore and helen dreiser; this file became so large, however, and contained so much material that was not correspondence that the decision was made to separate it from the main body of correspondence.) in organizing the manuscripts in this collection, consideration was given to dreiser's habits of writing, his own presumed plan or arrangement of his papers, the scope of penn's actual holdings, and the needs of researchers. the fact that the bulk of this collection has been at the university of pennsylvania since the late s and was opened to scholars before being completely processed makes dreiser's own organizational schema difficult to determine. it is known that even before his papers were shipped to the university of pennsylvania they were reordered several times by his wife or assistants. it is also known that during the preliminary sorting at penn related items that had arrived clipped together were separated, and no record was kept of their original arrangement. over the years users of the collection have rearranged files and papers to suit the purposes of their own research and have neglected to restore what they moved to its original order. most unfortunately, some papers that arrived with the collection in the s have disappeared. how did dreiser's habits of research and writing influence the final arrangement of the papers? it is important to remember that he was an extremely productive writer in many genres: novels, essays, short stories, poetry, play scripts, and screenplays. because his funds were often low, he wanted to recycle his publications so that they generated more than one income.
for example, he wrote novel-length works but hoped to sell to the periodicals short pieces adapted from these longer works and thus to collect a book royalty as well as a payment for the extracted piece. he followed this process in reverse: manuscripts originally sold and published as essays, poems, or short stories were often combined later and sold as book-length units. some books, such as an american tragedy, were adapted into play scripts and motion picture screenplays and thus could be marketed again. how to order these related writings both to preserve their integrity as particular genres and to show their relationship to one another was an important consideration in processing dreiser's papers. because many of dreiser's essays, short stories, poems, and play scripts were published both individually in periodicals and later as parts of collections of similar works, they could have been filed with others of the same genre or collected under the book title dreiser eventually chose for them. researchers should check the container list under td writings: books and the appendices for other relevant genres because sometimes a piece of writing, or versions of it, will be found in both locations. for example, the stories that comprise free and other stories and  chains are filed alphabetically in td writings: short stories because the university of pennsylvania dreiser papers lacks the "book manuscript" for these stories that is known to have existed at one time. by contrast, penn does have manuscripts, typescripts, and typesetting copy for the studies that were published in  a gallery of women, and dreiser's lists and correspondence indicate that he wanted these studies to be published as a unit even though he published some of them first in periodicals.
thus, the researcher will find some of these essays in two places: tearsheets from the periodical publication of the essay filed alphabetically in td writings: essays and manuscripts and typescripts of the essays labeled by dreiser  a gallery of women housed under that title in td writings: books. in addition to recycling published works into other publications, dreiser sometimes used the same title for writings in two different genres. for example, an essay and a short story are both entitled "kismet"; "the factory" is the title for both an essay and a poem; "credo" is an essay but "the credo" is a short story; three poems bear the title "love" and two "life." using the same story line, dreiser wrote a playscript and a screenplay called "the choice." he wrote a playscript "solution" based on his short story of the same title. the appendices for all the genres should be consulted for titles so that the researcher does not overlook any relevant adaptations. the autobiographical character of much of dreiser's writing occasionally makes the distinction between an essay and a short story a problematic one. unless dreiser specified directly, his intent is impossible to recover at this point because the policy followed for distinguishing between the two when the collection underwent its preliminary sorting in the s is unknown. with the exception of a few obvious misfilings, the stories and essays have been left in their pre-processing genre. researchers should check both td writings: essays and td writings: short stories for titles. dreiser's work habits and filing practices also meant that some flexibility was required in defining authorship of the papers in this collection. sometimes dreiser developed an idea or a theme for a series of articles, whereupon he would contact lesser-known writers and ask them to compose essays on this theme, with the understanding that he would edit and perhaps rewrite the essays and have the series published under his name.
occasionally the original writer of these pieces cannot be determined because dreiser had the essay retyped under his name before submitting it to a publisher. because dreiser was the author of the idea for the series, as well as the author of one or more of the essays, all manuscripts in the series are housed in td writings: essays under the name of the series, with the name of the actual author of the essay (if known) noted on the folder. the same policy was followed for other works inspired by dreiser's ideas or writings. dreiser's own identifying terminology is used to describe the contents of a folder unless it is clearly incorrect. most of the manuscript material from the dreisers was wrapped in brown paper or manila envelopes with a notation by dreiser or helen dreiser describing the contents. unfortunately, when the papers arrived at penn and were rehoused in the preliminary sort, some sources of identification were not documented on the folders. sources of identification that are questionable for any reason are so indicated on the folders. if the item was not identified originally or was identified incorrectly, a descriptive term has been supplied. in processing the theodore dreiser papers, extensive use was made of the biographies dreiser ( ), by w. a. swanberg, and the two-volume study  theodore dreiser: at the gates of the city, - ( ) and  theodore dreiser: an american journey, - ( ), by richard lingeman; the biographical study  forgotten frontiers: dreiser and the land of the free ( ), by dorothy dudley; the memoirs  my life with dreiser, by helen dreiser ( ),  theodore dreiser: a new dimension, by marguerite tjader ( ), and  my uncle theodore, by vera dreiser with brett howard ( ); the collections  letters of theodore dreiser: a selection ( vols.), edited by robert h. elias ( ),  dreiser-mencken letters: the correspondence of theodore dreiser & h. l. mencken - ( vols.), edited by thomas p.
riggio ( ), and  theodore dreiser: american diaries - , edited by thomas p. riggio ( ); and the reference work  theodore dreiser: a primary bibliography and reference guide ( nd ed.), by donald pizer, richard w. dowell, and frederic e. rusch ( ). the last-mentioned work comprises not only a primary bibliography of the works of theodore dreiser, but also an annotated bibliography of writings about dreiser from to . endnotes [ ] in american literature and the academy, kermit vanderbilt reviews in depth  "the embattled campaign to build respect for america's authors and create standards of excellence in the study and teaching of our own literature." his book was published by the university of pennsylvania press. [ ] neda m. westlake, "arthur hobson quinn, son of pennsylvania,"  the university of mississippi studies in english, volume , , p. . administrative information publication information university of pennsylvania: kislak center for special collections, rare books and manuscripts. finding aid author: finding aid prepared by julie a. reahard and lee ann draud. sponsor: the processing of the theodore dreiser papers and the preparation of this register were made possible in part by a grant from the national endowment for the humanities and by the financial support of the walter j. miller trust. use restrictions: copyright restrictions may exist. for most library holdings, the trustees of the university of pennsylvania do not hold copyright. it is the responsibility of the requester to seek permission from the holder of the copyright to reproduce material from the kislak center for special collections, rare books and manuscripts. source of acquisition: gift of theodore and helen dreiser with additional donations from myrtle butcher; louise campbell; harold j. dies; ralph fabri; mrs. william white gleason [dreiser-e. h. smith correspondence]; hazel mack godwin; paul d. gormley; marguerite tjader harris; r.
sturgis ingersoll [manuscript for jennie gerhardt]; los angeles public library; f. o. matthiessen; vera dreiser scott; lorna d. smith; robert spiller [galleys for  the bulwark]; and estelle kubitz williams plus purchased additions, - . controlled access headings form/genre(s) clippings (information artifacts) contracts correspondence diaries essays financial records manuscripts, american-- th century memorabilia plays (performed works) poems short stories, american-- th century speeches writings (documents) personal name(s) dreiser, helen patges, - dresser, paul, - subject(s) authors authors, american authors, american-- th century families literature other finding aids for a complete listing of correspondents, do the following title search in franklin: theodore dreiser papers collection inventory i.  correspondence. series description this first extensive series contains letters written to and from theodore and helen dreiser, arranged alphabetically by correspondent, of which there are approximately , . within each correspondence file, letters are arranged chronologically. incoming and outgoing correspondence has been interfiled. the researcher should keep in mind that letters may have crossed in the mail, especially in the case of foreign correspondence; a given letter may not have been received by dreiser or his correspondent when one of a later date was sent. at the end of the alphabetical correspondence files is the unidentified correspondence, arranged in chronological order where possible. the majority of dreiser's correspondence is work-related, pertaining to the various projects that he was working on at any given time. still, the list of names of those having significant personal correspondence with dreiser reads like a who's who among writers, artists, publishers, social critics, and notables of his time, for example, sherwood anderson, harry elmer barnes, jerome blum, franklin booth, a. a.
brill, pearl buck, bruce crawford, floyd dell, ben dodge, john dos passos, angna enters, wharton esherick, ralph fabri, james t. farrell, ford madox ford, charles fort, waldo frank, hutchins hapgood, dorothy dudley harvey, ripley hitchcock, b. w. huebsch, otto kyllmann, william c. lengel, horace liveright, edgar lee masters, h. l. mencken, frank norris, john cowper and llewelyn powys, grant richards, kathryn d. sayre, hans stengel, george sterling, dorothy thompson, carl van vechten, and charles yost. helen dreiser's correspondence appears in the files with theodore dreiser's, because she often served as principal contact for dreiser's friends and business associates: dreiser was often either ill or busy attempting to complete book projects (especially in the later years of his life, to ). while the larger correspondence files relating to dreiser's brother, paul dresser, and his niece, vera dreiser, have been moved to another section of the papers, the alphabetical correspondence series does contain family correspondence and some significant correspondence with personal friends of dreiser, such as that with his teacher, may calvert baker, and friends lillian rosedale goodman and kirah markham. the department of special collections has obtained some photocopies of dreiser letters housed in other repositories: these are filed just as if they were original documents. all such photocopies are so marked. receipts, canceled checks, and income tax returns are housed as series filed later in the papers. while some royalty statements do reside in the alphabetical correspondence section (when they came enclosed in letters from various publishing firms), the bulk is housed in the series titled "financial records." box folder a & c black, ltd. - alleman, marta. - allen, ben - american federation of labor ( - july ). - american federation of labor ( july - ) - american society of composers, authors and publishers. - american spectator - anderson, sherwood.
- andrea, leonardo - austrian, delia. - author's and writer's who's who - baker & taylor co. - balch, jean allen - beard, lina. - beck, clyde - bicknell, george. - big brothers of america - bland, h. raymond. - blau, perlman & polakoff - boni & liveright ( - ). - boni & liveright, - . - boni & liveright ( - ) - bowdoin college. - bowen, croswell - brandt & brandt. - brandt theatres - brodsky, nauda auslien. - brody, paul a. - burns, lee. - burnside, l. brooks - campbell, louise ( - ). - campbell, louise, - , undated. - campbell, mary - chadwick productions. - chalian, edward - church management: journal of parish administration. - churchill, judith chase - cluett, peabody & co. - coakley, elizabeth - commonwealth college (mena, ark.). - communist party of the united states of america - constable & company ( - ). - constable & company ( - ) - cotton, mother emma. - coulter, ernest kent - the crusaders. - crutcher, ernest - curtis brown, ltd. ( - ). - curtis brown, ltd. ( - ) - davidson, jo. - davies, marion - delteil, caroline dudley. - demille, cecil b. - dimock & fink company. - dinamov, sergei - doty, douglas zabriskie. - doubleday, doran & company - dreier, thomas. - dreiser, albert j. - dreiser, helen patges. - dreiser, henry - dyer, francis john. - e. p. dutton - emeline fairbanks memorial library, terre haute, ind. - emergency committee for southern political prisoners - ettelson, samuel a. - ettinge, james a. - fabri, ralph ( - ). - fabri, ralph, - . - fabri, ralph ( - ) - fasola, f. b. - fassett, lillian - fischl, george. - fischler, joseph - ford hall forum (boston, mass.). - foreign policy association - freedman, may brandstone. - freeman, helen - geisel, k. - gelfand, hyman a. - goldberg, isaac. - golden, john - graham, marcus. - grand army of the republic - gunther, ferdinand. - guthrie, william norman - hampshire county progressive club. - hampton, david b. - harper & brothers ( - ). - harper & brothers ( - ) - hartwell stafford, publisher. 
- hartwick, harry - hedrick, t. k. (tubman k.). - heilbrunn, l. v. (lewis victor) - herdan, gerald s. - hergesheimer, joseph - hoffmann, w. - hofschulte, frank - howe, l. v. - howell, e. l. - hume, cameron & paseltiner ( - ). - hume, cameron & paseltiner ( - ) - ilhardt, emil, mrs. - illes, bela - international league of leavers of footprints in the sands of time. - international literary bureau - isbey, h. e. f. - isham, frederic stewart - jenkins, william w. - jenks, george c. - johns hopkins university. - johnson, a. d. - juggler (notre dame, ind.). - jules c. goldstone agency - kelley, f. f. - kelly, fred c. (fred charters) - kerpel, eugen ( ). - kerpel, eugen ( - ) - the knoxville news-sentinel. - knudsen, paol - labor research association (u.s.). - labor temple school (new york, n.y.) - larrimer, mary. - larsh, theodora - lemon, willis s. - lengel, william c., - . - lenitz, josephine h. - liesee, edith m. - life (new york, n.y.) - livraria garnier. - llona, victor - lyons & carnahan. - m. witmark & sons - mccoy, esther ( - ). - mccoy, esther ( - ) - mack, hazel ( - , april). - mack, hazel ( may- ) - malmin, lucius j. m. - management ernest briggs (firm) - mason, walt. - masseck, c. j. - masters, edgar lee. - masters, marcia lee - meltzer, e., mrs. - mencken, h. l. (henry louis), - . - mencken, h. l. (henry louis), - . - mencken, h. l. (henry louis), - , undated. - mendelson, edna g. - milwaukee writers union. - mind, inc. - monahan, yvette. - monatshefte für deutschen unterricht - motuby, betty. - mount, richard - national committee for the defense of political prisoners ( ). - national committee for the defense of political prisoners ( - ) - nervous and mental disease publishing co. - nesbit, wilbur d. - new york library association. - new york mirror (new york, n.y.) - norstedts tryckeri. - the north american - washington place west holding corp. - o'neil, james - oxford university press. - p.e.n. czechoslovakia - patterson, william morrison.
- pauker, edmond - pennsylvania railroad. - people's forum of philadelphia - piwonka, hubert. - plantin press - powys, john cowper. - powys, llewelyn - quintanilla, luis. - r - revue internationale des questions politiques diplomatiques et economiques. - rey, john b. - roberts, william. - robertson, john wooster - rossman, carl. - the rotarian - salzman, maurice. - sampson, emma - schilling, theodore. - schindler, h. - seldes, george. - seldon, lynde - simon, nelly. - simon and schuster, inc. - sinclair, elsie. - sinclair, upton - smith, edward h. ( - ). - smith, edward h. ( - ) - smith book company. - smyser, william leon - stalin, joseph. - stanchfield & levy - stoddart, dayton. - stokely, james - swarthmore college. - sweeney, ben - telephone subscribers protective league. - temple university woman's club - tomas, d. - toner, williams mcculloch - united press international. - united states. assistant secretary of state - university of iowa. - university of michigan - veritas press. - verlag j. engelhorns nachf. stuttgart - wake, b. h. - walburn, nancy - weiss, rudolph. - weissenberger, m. c. - whitlock, douglas. - whitman, charles sidney - willson, bob william. - wilson, charles morrow - wood, robert scofield. - woodbourne correctional facility - woythaler, erich. - wrenn, charles i. - youngblood, jean. - your life. - zweiger, william l. & unidentified. - ii.  miscellaneous correspondence. series description this series is divided into two sections: estelle kubitz williams materials and materials relating to the los angeles public library's exhibitions and acquisitions of dreiser materials. estelle kubitz williams materials include correspondence between ms. williams and her sister marion; her husband arthur p. williams; and harold hersey. each of these is housed in a separate folder, organized chronologically. other titles in this series (all collected by ms.
williams) are: recipes; jokes; typed facts about european history; excerpts from books; poetry; lists of names; travel notes on jews and jerusalem; proverbs from different countries; and miscellaneous materials. the los angeles public library correspondence is housed in two folders arranged chronologically. one folder contains correspondence between the library and helen dreiser, the other between the library and lorna d. smith. box folder materials collected by or related to estelle kubitz williams. - files relating to the los angeles public library concerning dreiser exhibition and acquisitions, - . - iii.  legal matters. series description this series divides as follows: theodore dreiser's will, / box; publishers contracts, arranged alphabetically by publisher name, and copyrights arranged by book title, / boxes; foreign language contracts, box; dreiser's legal dealings with horace liveright theatrical productions, box; dreiser's legal battles with erwin piscator, box; dreiser's lawyers' files concerning various cases (including: dreiser v. dreiser; the "genius"; the paramount cases regarding  an american tragedy; and south american lawsuits pertaining to the publishing of  america is worth saving and  jennie gerhardt), box. finally, legal papers involving the trial of the book  an american tragedy in boston and  the "genius" protest, box. box folder theodore dreiser's last will and testament. contracts: horace liveright, inc., - . contracts: g. p. putnam's sons, - . contracts: simon & schuster, inc., - . - contracts: world publishing company, - . contracts: university of pennsylvania, - . copyrights: "an address to caliban" -  "epitaph" . - copyrights: the financier -  "you, the phantom" . - contracts: argentina. contracts: austria. contracts: canada. contracts: czechoslovakia. contracts: denmark. contracts: england. contracts: finland. contracts: france. contracts: germany. contracts: holland. contracts: hungary. contracts: italy.
contracts: japan. contracts: norway. contracts: poland. contracts: portugal. contracts: russia. contracts: south america. contracts: sweden. contracts: switzerland. contracts & correspondence: horace liveright theatrical productions, - . - correspondence & accounts: piscator-bühne (dramaturgie), - . - lawyers' files: dreiser v. dreiser, . lawyers' files: "the genius" , . lawyers' files: paramount publix corp. cases, - . - notes & clippings: paramount publix corp./ an american tragedy case, - . - south american lawsuits: america is worth saving & jennie gerhardt, - . - an american tragedy: trial of the book in boston, commonwealth of mass. v. donald s. friede, . - the "genius" : protest, . - the "genius" : lawsuit, theodore dreiser v. john lane co., . - the "genius" : memorandum of law re proposed moving picture production, . iv.  td writings: books. series description this series includes everything dreiser himself labeled a book manuscript, all works that were adapted by dreiser or someone else from one of his books, and secondary material used to promote his books or related works. the order of arrangement for each title is chronological, following the process of writing from initial planning to publication: notes and outlines, pamphlets, and other research materials; manuscripts; typescripts; printers' proofs; book jackets, dummies, and advertising copy; discarded manuscript fragments; and adaptations from the book. thus, under an american tragedy, researchers will find not only all manuscripts, typescripts, proofs, and dust jackets for the book, but also a tabloid and a condensed version of the novel; all the playscripts in english and other languages, plus playbills and programs from any of these versions that were actually produced; a scenario for an opera; and movie scripts from  an american tragedy and  a place in the sun. this series also includes all the material that dreiser filed under "philosophical notes."
he intended to publish a book that clarified his philosophy of the meaning of life and the workings of the universe: these notes represent his research and efforts thereon. dreiser, however, died before finishing all the manuscripts for the project. because these materials ultimately did form the basis of a published book, notes on life ( ), they are located in this series.  notes on life represents a selection of the material found here and was edited by marguerite tjader. her papers for this work follow dreiser's notes. not included in this series, however, are a few "false starts" or beginnings of fictional works that dreiser may have intended to expand into novels but that remained unfinished, e.g., "mea culpa," "our neighborhood," and "the rake." these titles are located in the series notes written and compiled by td in boxes and under the heading "novels, unfinished." also not included in this series are published reviews of dreiser's books. reviews can be found in several locations. box contains miscellaneous clippings of reviews organized chronologically by title, but researchers should note the location of other reviews in the container list under the respective book titles. the amount of material listed for each title varies. penn's dreiser papers does not contain all of dreiser's book manuscripts in their original form, but the collection does include photocopies of some manuscript materials held by other institutions or individuals. such material is noted on the container list. as mentioned in the scope and content note, some books that contain previously published essays or stories (e.g., free and other stories) are not included in td writings: books, because penn's collection does not have an actual book manuscript as identified by dreiser. manuscripts for these shorter pieces are housed under their respective genre titles (e.g., short stories, plays). 
when dreiser's manuscripts were typed, he usually asked for an original and several carbons, which he then distributed to his friends for their comments and editorial suggestions. thus, some typescripts in the dreiser papers may contain revisions in a hand other than dreiser's; when this handwriting could be identified, the information was noted on the folder. the manuscripts, typescripts, and proofs are given dreiser's term of identification unless it is obviously incorrect. if no identifying term was assigned by dreiser, an arbitrary term has been supplied, based on the item's chronological position within penn's holdings for that book. therefore, if several typescripts of a book were unidentified or were all identified as "revised typescripts," they have been arranged chronologically and given designations such as "typescript a, b, c…" if they are different typescripts or "typescript a," "typescript a, revised," and so forth, if they are revised versions of the same typescript. a.  sister carrie. note for reviews of sister carrie, see box box folder sister carrie: st typescript (chaps. i-xlvii). chaps. i-xlvii. - sister carrie: book jackets. sister carrie (pa. ed.): emendations in the copy-text by james l. w. west iii (chaps. i-xxix). description letter from west to neda westlake; note on comparison of handwriting of arthur henry and sara white dreiser on the typescript. sister carrie (pa. ed.): emendations in the copy-text by west (chaps. xxx-l). sister carrie (pa. ed.): rejected proof alterations and sample historical collation. sister carrie: two outlines by?. sister carrie: dramatization by h. s. kraft (dramatic outline; acts i, ii, iii). - sister carrie: dramatization by h. s. kraft (?) (acts i, ii, iii). - sister carrie: dramatization by john howard. sister carrie: dramatization by kathryn sayre (synopsis of scenes; prologue, acts i, , ). - sister carrie: dramatization by kathryn sayre (prologue, acts , , ). - sister carrie: synopsis by elizabeth kearney.
sister carrie: screen adaptation by helen richardson. b.  jennie gerhardt. box folder jennie gerhardt ("the transgressor"). description sample front cover and title page; typeset pages; ms from which typeset pages were made; note from james l. w. west iii; note about sale of ms. jennie gerhardt: early ms (chaps. i-x). - jennie gerhardt: early ms (chap. x-xii). jennie gerhardt: early ms (chap. xii (conc.); chap. xiii; earlier version of chap. xii; fragment of early version of chap. xii). jennie gerhardt: early ms (chaps. xiv-xxv). - jennie gerhardt: early ms (chaps. xxvi; xviii; another version of xxvi?). jennie gerhardt: early ms (unnumbered chap. that follows chap. xxvi). jennie gerhardt: early ms (chaps. xxvii-xxix). - jennie gerhardt: early ms (chap. xxx; also other chaps.?). jennie gerhardt: ms (chaps. xiv-xxxvi). - jennie gerhardt: ms (chaps. xxxvii-lx). - jennie gerhardt: annotated typescript (chaps. i-xiii). - jennie gerhardt: typescript (chaps. i-xxx). - jennie gerhardt: book jackets. jennie gerhardt: lists of people to receive complimentary copies. jennie gerhardt: outline for a play?. "the story of jennie," playscript by? (acts i, ii). - c.  the financier, the titan, and  the stoic. box folder dates td worked on the financier, the titan, and  the stoic. notes on characters in the financier. notes on characters in the titan. notes for the financier and  the titan. - notes for the financier and  the titan. - the financier: original ms. (chaps. i-xliii), . - the financier: original ms. (chaps. xliv-li), . - the financier: original ms. (chaps. - ), . - the financier: original ms. (chaps. - ), . - the financier: original ms. (chaps. lxxi- ), . - the financier: typescript carbon (chaps. i-xxxviii), . - the financier: page proofs, . the financier: typescript carbon (chaps. i-lxx), . - the financier: st galleys, . the financier: revised galleys, . "the cowperwood story," a streamlined plot synopsis of  the financier, the titan, and  the stoic, version .
"the cowperwood story," version . - the financier and  the titan: synopses by?. - the financier: synopsis by alvin g. manuel, annotated by td. the financier: synopsis by lorna d. smith. the financier and  the titan: synopses by elizabeth kearney. - the financier: book jackets. the financier: advertising copy, with additions by anna tatum. the financier: dramatization by rella abell armstrong of  the financier & the titan, annotated by td. - the financier: dramatization by rella abell armstrong of  the financier and  the titan. - the financier: scenario by rella abell armstrong. d.  a traveler at forty. note for reviews of a traveler at forty, see box . box folder a traveler at forty: diary notes, nov. - jan. . - a traveler at forty: diary notes, jan. -march . - a traveler at forty: drawings made for td by other travelers. a traveler at forty: diary notes, march - april . - a traveler at forty: newspaper clippings re the sinking of  the titanic, april - . a traveler at forty: typescript (chaps. i-xlvi). - a traveler at forty: typescript (chaps. xlvii- ). - a traveler at forty: revised typescript (chaps. -xi). - a traveler at forty: revised typescript (chaps. - ). - a traveler at forty: revised typescript,  "the quest for my ancestral home" . a traveler at forty: revised typescript,  "the berlin public service" . a traveler at forty: revised typescript,  "night-life in berlin" . a traveler at forty: revised typescript. - a traveler at forty: excerpts for advertising purposes?. a traveler at forty: advertising or review copy?. e.  the titan. box folder the titan: ms (chaps. i- ). - the titan: ms (chaps. xxvii-l). - the titan: ms (chaps. li-lxxiv). - the titan: ms (chaps. lxxv-xc). - the titan: ms (chaps. - ). - the titan: ms (chaps. - ). - the titan: ms (chaps. xci-xcii). - the titan: ms (chaps. cii-ciii). - the titan: typescript carbon (chaps. i- ); with editing by anna tatum (typed from ms in boxes and ). - the titan: chap.
; revised typescript and retyped version, with editing by anna tatum. - the titan: chap. (ms); chap. (typescript typed from ms chap. ). - the titan: chap. (ms); chap. (typescript typed from ms chap. , pages missing). - the titan: chap. (ms); chap. (typescript typed from ms chap. ). - the titan: chap. (ms); chap. (typescript typed from ms chap. ). - the titan: chap. (ms); chap. (typescript typed from ms chap. ). - the titan: chap. . the titan: chaps. - . - the titan: chaps. cii, ciii. the titan: st revised galleys. the titan: nd revised galleys. the titan: ms and typescript fragments from various versions. - the titan: book jacket. "law and lawyers," written for  the titan?. the titan: scenes to make a play. f.  the "genius" . note for reviews of the "genius", see box . box folder the "genius": ms (chaps. i-xxx). - the "genius": ms (chaps. xxxi-lx). - the "genius": ms (chaps. lxi-xc). - the "genius": ms (chaps. xci-cv). - the "genius": 1st typescript a (chaps. i-lxxix [ st typescripts a and b begin to diverge at chap. lxxviii]). description st typescripts a and b begin to diverge at chap. lxxviii. - the "genius": st typescript a (chaps. lxxx-ciii). - the "genius": revised typescript (chap. civ). the "genius": st typescript a (chap. cv). the "genius": st typescript b (chaps. i-xlvi). - the "genius": st typescript b (chaps. xlvii-civ). - the "genius": revised typescript. - the "genius": book jackets. the "genius": st german printing. the "genius": galley proofs. the "genius": long and short résumés of the book by lorna d. smith; synopsis of a screen adaptation by?. the "genius": ideas for dramatization. the "genius": letter to louise campbell with versions of dramatizations. the "genius": proposals by td for a play or movie version; newspaper clipping. "the stuff of dreams" ( the "genius") play: st draft. - the "genius": summary of a play version by td. the "genius": proposal for a play version by td; prologue. - the "genius": play version by td. 
- the "genius": dramatic adaptation by?. - the "genius": dramatization by?. - the "genius": a play based on td's novel by odin gregory. - the "genius": discarded fragments and versions from acts i and ii of typescripts in boxes and . - the "genius": discarded fragments and versions from acts iii and iv and final scene. - the "genius": criticism and comments on the novel. the "genius": pages from a scrapbook with clippings of reviews. the "genius": documents pertaining to the book's suppression. the "genius": miscellaneous. the "genius": magazine version, published in  metropolitan magazine, . - g.  a hoosier holiday. note see box for the postcards that td collected on his trip to indiana, which was the basis of a hoosier holiday. box folder a hoosier holiday: diary notes. - a hoosier holiday: maps and schedules re trip to indiana. note see box , folder for oversize map. a hoosier holiday: ms. - a hoosier holiday: ms. - a hoosier holiday: typescript with additions by td and?. - a hoosier holiday: sample copy of jacket; corrections for galleys. a hoosier holiday: book jacket. a hoosier holiday: miscellaneous. "from , by theodore dreiser," printed version of article in  the hoosier, . a hoosier holiday: st galleys (?). a hoosier holiday: revised galleys (?). h.  twelve men. note for reviews of twelve men, see box . box folder twelve men:  "my brother paul," printed version. twelve men: notes and essays relating to  "the country doctor" . - twelve men:  "heart bowed down" (  "the village feudists" ). twelve men:  "the village feudists,"  reprint published in famous story magazine. twelve men:  "sonntag-a record" (  "w.l.s." ). twelve men:  "w.l.s.," printed version. twelve men: notes and clippings on the robin case used for  "vanity, vanity saith the preacher" . - twelve men: book jackets. twelve men: corrected page proofs. i.  hey rub-a-dub-dub. box folder hey rub-a-dub-dub: notes. hey rub-a-dub-dub:  "hey rub-a-dub-dub" . 
- hey rub-a-dub-dub:  "change," version published in  new york call ( ). hey rub-a-dub-dub:  "change" . - hey rub-a-dub-dub:  "some aspects of our national character" . hey rub-a-dub-dub:  "the dream" . hey rub-a-dub-dub:  "the american financier" . - hey rub-a-dub-dub: (  "the toil of the laboring man" ). hey rub-a-dub-dub:  "the toil of the laborer" (  "the toil of the laboring man" ). hey rub-a-dub-dub:  "personality" . - hey rub-a-dub-dub:  "secrecy" . hey rub-a-dub-dub:  "neurotic america and the sex impulse" . hey rub-a-dub-dub:  "ideals, morals, and the daily newspaper" . - hey rub-a-dub-dub:  "equation inevitable" . - hey rub-a-dub-dub:  "ashtoreth" . - hey rub-a-dub-dub:  "the reformer" . hey rub-a-dub-dub:  "marriage and divorce: an interview" . - hey rub-a-dub-dub: (  "more democracy or less? an inquiry" ). hey rub-a-dub-dub:  "more democracy or less? an inquiry" . - hey rub-a-dub-dub:  "the essential tragedy of life" . - hey rub-a-dub-dub:  "life, art, and america" . hey rub-a-dub-dub:  "the court of progress" . hey rub-a-dub-dub:  "neurotic america and the sex impulse" and  "some aspects of our national character," printed versions. j.  newspaper days. note for reviews of newspaper days, see box . box folder newspaper days: topics to be covered; notes for catalog copy. newspaper days: miscellaneous. newspaper days: ms. - newspaper days: ms. - newspaper days: st typescript. - newspaper days: typescript a with td's corrections. - newspaper days: "yellow manuscript". - newspaper days: nd typescript. - newspaper days: unrevised nd typescript. - newspaper days: copy of typesetting copy (chaps. i-xlv). - newspaper days: copy of typesetting copy (chaps. xlvi-lxxx). - newspaper days: index to st edition of  a book about myself (  newspaper days) edited by t. d. nostwich, . newspaper days: book jackets for  a book about myself (  newspaper days). newspaper days: foreword and author's note to edition, . newspaper days: corrected galley proofs and note. 
newspaper days: uncorrected galley proofs, with missing pages from chap. xxxvi included. newspaper days: bound vol. of corrected page proofs. newspaper days: bound vol. of corrected page proofs. k.  the color of a great city. note for reviews of the color of a great city, see box . box folder the color of a great city: proposed chapter order. the color of a great city: foreword by td. the color of a great city: "a week with ocean pilots" (version of "log of a harbor pilot"). the color of a great city: "bums". the color of a great city: "the car yard". the color of a great city: "the flight of pidgeons". the color of a great city: "on being poor". the color of a great city: "six o'clock". the color of a great city: "the toilers of the tenements" ("the inspector"). the color of a great city: "the inspector". the color of a great city: ("the end of a vacation"). the color of a great city: "the track walker". the color of a great city: "the realization of an ideal". - the color of a great city: "the pushcart man". - the color of a great city: "the bread line". - the color of a great city: "our red slayer". - the color of a great city: "whence the song". the color of a great city: "characters". - the color of a great city: "the beauty of life". - the color of a great city: "the way place of the fallen". the color of a great city: "a way place of the fallen". the color of a great city: "bayonne" (a version of "a certain oil refinery"). the color of a great city: "the bowery mission". - the color of a great city: "the wonder of the water". the color of a great city: "the man on the bench". - the color of a great city: "the men in the dark". - the color of a great city: "the men in the snow". the color of a great city: "the freshness of the universe". the color of a great city: "the freshness of the universe". the color of a great city: "the cradle of tears". the color of a great city: "the sandwich man". the color of a great city: "the sandwich man". 
the color of a great city: "the love affairs of little italy". the color of a great city: "christmas in the tenements". the color of a great city: "christmas in the tenements". the color of a great city: "the rivers of the nameless dead". the color of a great city: "the rivers of the nameless dead". the color of a great city: foreword by td. the color of a great city: "the city of my dreams". the color of a great city: "the city awakes". the color of a great city: "the waterfront". the color of a great city: "the log of a harbor pilot". the color of a great city: "bums". - the color of a great city: "the michael j. powers association". the color of a great city: "the fire". the color of a great city: "the flight of pigeons". the color of a great city: "on being poor". the color of a great city: "six o'clock". the color of a great city: "the toilers of the tenements". the color of a great city: "the end of a vacation". the color of a great city: "the track walker". the color of a great city: "the realization of an ideal". the color of a great city: "the pushcart man". the color of a great city: "manhattan beach" ("a vanished seaside resort"). the color of a great city: "the bread line". the color of a great city: "our red slayer". the color of a great city: "when the sails are furled". the color of a great city: "characters". the color of a great city: "the beauty of life". the color of a great city: "the way place of the fallen". the color of a great city: "hell's kitchen". the color of a great city: "a certain oil works refinery". the color of a great city: "the bowery mission". the color of a great city: "the wonder of the water". the color of a great city: "the man on the bench". the color of a great city: "the men in the dark". the color of a great city: "the men in the storm". the color of a great city: "the men in the snow". the color of a great city: "the freshness of the universe". the color of a great city: "the cradle of tears". 
the color of a great city: "the sandwich man". the color of a great city: "the love affairs of little italy". the color of a great city: "christmas in the tenements". the color of a great city: "the rivers of the nameless dead". the color of a great city: typesetting version; note from td. - the color of a great city: book jacket. the color of a great city: early galleys, with illustrations attached by td, oct. the color of a great city: early galleys, proofreader's copy(?). the color of a great city: early galleys, with td's corrections. the color of a great city: rd revised galleys, with original and substituted preface, oct. the color of a great city: rd revised galleys, unmarked, missing p. of foreword and some pages from last essay. l.  an american tragedy. box folder an american tragedy: original ms (chaps. iv-xx), - . - an american tragedy: typescript of ms (chaps. i-xx), - . - an american tragedy: book i, ms (chaps. i- ). - an american tragedy: book ii, ms (chaps. i- ). - an american tragedy: book ii, ms (chaps. - ). - an american tragedy: book ii, ms (chaps. - ). - an american tragedy: book ii, ms (chaps. - ). - an american tragedy: book iii, ms (chaps. - ). - an american tragedy: book iii, ms (chaps. - ). - an american tragedy: book iii, ms (chaps. - ). - an american tragedy: book ii, typescript b (chaps. xxx-liv). - an american tragedy: book ii, typescript b (fragments). description although chapter numbering is not continuous, events discussed in typescript b follow immediately the events discussed in typescript a in box ; some editing of typescript b by sally kussell. an american tragedy: book ii, revised typescript a (chaps. i-xxi) revised by louise campbell; few additions by td. - an american tragedy: book iii, typescript c (chaps. i-ii). description some revisions of chaps. in this box by louise campbell and ?. - an american tragedy: book iii, revised typescript c (chap. ii). 
an american tragedy: book iii, revised typescript c, with corrections (chap. ii and a fragment). an american tragedy: book iii, typescript c (chaps. -xxi). - an american tragedy: book iii, typescript c (chaps. xxii-xxxv). - an american tragedy: book i, st typescript (chaps. i, ii). an american tragedy: book i, final revised typescript? (chaps. i-xxix). - an american tragedy: book ii, final revised typescript? (chaps. i-xlix) revisions by td, louise campbell, helen dreiser, t. r. smith, and?. - an american tragedy: book iii, revised typescript c (chaps. i-xxxiv). - an american tragedy: front matter pages for typesetting. an american tragedy: book i, typesetting copy (chaps. i-xix). - an american tragedy: book ii, typesetting copy (chaps. i-xxxiv). - an american tragedy: book ii, typesetting copy (chaps. xxxv-xlviii). - an american tragedy: book iii, typesetting copy (chaps. i-xxxv). description gap in chapter numbering, but nothing missing. - an american tragedy: book jackets and hard cover. an american tragedy: condensed version, published in  bestsellers, oct. . an american tragedy: book ii, revised typesetting carbon (chaps. i-xi, xiii-xlv, xlvii-xlix). - an american tragedy: book i, author's galleys. an american tragedy: book ii, author's galleys. an american tragedy: book iii, author's galleys. an american tragedy: book i, revised pages. an american tragedy: book ii, st pages. an american tragedy: book ii, revised pages. an american tragedy: book iii, st pages. an american tragedy: dramatization by frederick thon. - an american tragedy: dramatization by patrick kearney. - an american tragedy: dramatization by georges jamin and jean servais. - an american tragedy: tabloid version. an american tragedy: dezso d'antalffy scenario for an opera. an american tragedy: dramatization by erwin piscator. - an american tragedy: dramatization by erwin piscator and lina goldschmidt. 
- case of clyde griffiths [  an american tragedy]: dramatization by piscator and goldschmidt. an american tragedy: dramatization by erwin piscator and lina goldschmidt. eine amerikanische tragödie: dramatization by erwin piscator. - the law of lycurgus (  an american tragedy): dramatization by h. basilewsky. - de tragedie van clyde griffiths (  an american tragedy): dutch-language dramatization. an american tragedy: film scenario by s. m. eisenstein, g. v. alexandrov, and ivor montagu. - an american tragedy: josef von sternberg-samuel h. hoffenstein film. description st yellow script, annotated by ?, jan. ; synopsis by eleanor mcgeary; sequences a-z, aa-hh. - an american tragedy: sternberg-hoffenstein film. description white script, feb. , sequences a-z, aa-ii. - an american tragedy: sternberg-hoffenstein film. description form # , release dialogue script, july , reels - . - a place in the sun (  an american tragedy): harry brown and michael wilson film final white film script with changes, sept. . - an american tragedy: miscellaneous notes. m.  moods. box folder moods: typesetting copy for and editions. - moods ( ed.): typesetting copy for poems added to this ed. - moods ( ed.): galley proofs, with revisions, of poems added to this ed. moods ( ed.): page proofs, with revisions, of poems added to this ed. moods ( ed.): typesetting copy, introduction by sulamith ish-kishor; contents pages. moods ( ed.): contents page. moods ( ed.): typesetting copy for poems. - moods ( ed.): poems rejected for this ed. (never published). n.  dreiser looks at russia. box folder dreiser looks at russia: diary kept by td in russia, and used in writing this work, - . dreiser looks at russia: contents page; "russia ", . dreiser looks at russia: "russia ", . dreiser looks at russia: "the tyranny of communism". dreiser looks at russia: "the capital of communism". - dreiser looks at russia: "moscow". - dreiser looks at russia: "communism theory and practice". 
dreiser looks at russia: "the tyranny of communism". dreiser looks at russia: "a former capital of tyranny". dreiser looks at russia: "some russian factories and industries". dreiser looks at russia: "religion in russia". dreiser looks at russia: "present day art in russia". dreiser looks at russia: "bolshevik art literature music (a)". dreiser looks at russia: "bolshevik art, literature, music (b)". dreiser looks at russia: "three russian restaurants". dreiser looks at russia: "russian restaurants—three". dreiser looks at russia: "propaganda plus". dreiser looks at russia: fragment of chap. on propaganda. dreiser looks at russia: fragment of chap. on peasant problem. dreiser looks at russia: "russian vignettes". dreiser looks at russia: "the russian versus the american spirit". dreiser looks at russia: "the russian versus the american temperament". dreiser looks at russia: "random reflections". dreiser looks at russia: "the current soviet economic plan". dreiser looks at russia: typesetting copy (chaps. i-xviii). - dreiser looks at russia: book jacket and hard cover. dreiser looks at russia: revised galley proofs. dreiser looks at russia: nd revised galley proofs. dreiser looks at russia: page proofs. o.  a gallery of women. box folder a gallery of women: proposed chapters. a gallery of women: "mary pyne" ("esther norn"). - a gallery of women: "m.t." ("regina c—"). a gallery of women: "yvonne (ellen) adams wrynn". - a gallery of women: "ida hauchawout". - a gallery of women: "gloom". a gallery of women: "lucia". a gallery of women: "ernita". - a gallery of women: "albertine". - a gallery of women: "dinan". a gallery of women: "m.j.c." ("emanuela"). - a gallery of women: "mrs. hevessy" ("bridget mullanphy"). - a gallery of women: "a daughter of the puritans". note not used in book; see also "this madness: the story of elizabeth," in td writings: essays. - a gallery of women: "ernestine". - a gallery of women: "mary pyne" ("esther norn"). a gallery of women: "esther norn". 
a gallery of women: "rella". - a gallery of women: "reina". - a gallery of women: "regina c—". - a gallery of women: "yvonne (ellen) adams wrynn". - a gallery of women: "ellen adams wrynn". a gallery of women: "a daughter of the puritans". - a gallery of women: "spaff" ("giff"). - a gallery of women: "giff". a gallery of women: "out of the city of the prophet" ("olive brand"). - a gallery of women: "olive brand". - a gallery of women: "lolita". - a gallery of women: "ida hauchawout". - a gallery of women: "gloom". a gallery of women: "loretta". - a gallery of women: notes on psychology of women, parts of which were used in "loretta". a gallery of women: "lucia". - a gallery of women: "ernita". - a gallery of women: "albertine". - a gallery of women: "emanuela". - a gallery of women: "mrs. mullanphy" ("bridget mullanphy"). a gallery of women: "bridget mullanphy". a gallery of women: "bridget mullanphy". a gallery of women: "rona murtha". - a gallery of women: st galley proofs with author's corrections. a gallery of women: nd galley proofs. a gallery of women: vol. i. - a gallery of women: vol. ii. - a gallery of women: book jackets. a gallery of women: hard covers for book. - a gallery of women: preface to the russian edition by sergey dinamov. "a gallery of women:" radio adaptation by william watters. "a gallery of women:" screen adapt. by helen mitchell, . p.  my city. box folder my city: clipping and xerox. my city: color proofs of etchings by max pollak used in book. q.  dawn. box folder dawn: xerox of ms at lilly library (chaps. i-xx), editing on ms by td and anna tatum. - dawn: xerox of ms at lilly library (chaps. xxi-xl). - dawn: xerox of ms at lilly library (chaps. xli-lx). - dawn: xerox of ms at lilly library (chaps. lxi-lxxvii). - dawn: xerox of ms at lilly library (chaps. lxxix-lxxx) and note from helen dreiser re chap. lxxviii. - dawn: xerox of ms at lilly library (chaps. lxxxi-xcvii). - dawn: xerox of ms at lilly library (chaps. xcviii-cvi). 
- dawn: xerox of st rough emended typescript at lilly library (chaps. i-iii). dawn: xerox of st rough emended typescript at lilly library (chap. iv). dawn: xerox of st rough emended typescript at lilly library (chap. v). dawn: xerox of st rough emended typescript at lilly library (chaps. vi-xxxii). - dawn: st typescript (chaps. xxx-[xciii]). arrangement the chapters in this box follow consecutively those in box even though the numbering system does not. - dawn: nd(?) typescript (chaps. i-xxxiv). - dawn: note from kathryn sayre, circa . dawn: sample pages, typeset. dawn: book jacket and book dummies. dawn: st bound copy. dawn: french translation (chaps. - and unnumbered). - dawn: french translation (unnumbered chaps.). - dawn: new french translation (chaps. i-xxix), . - r.  tragic america. box folder tragic america: plan(s) of book and partial outline of topics to be covered. tragic america: "preface". tragic america: "as america looks now" ("the american scene"). tragic america: "i visit an actual mill town" [part of "present day living conditions for many"]. tragic america: "exploitation—rule by force" ("exploitation—the american rule by force"). tragic america: "our banks and corporations as government (a)" (version ). - tragic america: "our banks and corporations as government (a)" (versions and ). - tragic america: "our banks and corporations as government (b)". tragic america: "the profits of our american railways from their inertia (a)" ("our american railways--their profits and greed"). tragic america: "the profits of our american railway from their inertia (b)" ("our american railways—their profits and greed"). tragic america: "government operation of the express companies for private profit". tragic america: "the supreme court as a corporation service station" ("the supreme court as a corporation-minded institution"). tragic america: "the constitution as a scrap of paper". tragic america: "the position of labor". 
tragic america: "the growth of police power". tragic america: "abuse to the individual" ("the abuse of the individual") (version ). - tragic america: "abuse to the individual" ("the abuse of the individual") (version ). tragic america: "charity and wealth in america" (version ). tragic america: "charity and wealth in america" (version ). - tragic america: "crime and why". tragic america: "why the ballot?". tragic america: "why government ownership?". tragic america: "analysis of statecraft for the future" ("suggestions toward a new statecraft"). - tragic america: "what the meaning of education should be". tragic america: correspondence re "a sample trust". description extra chap. meant for nd edition of tragic america. tragic america: "a sample trust". description chapter not used in book, written by kathryn sayre. - tragic america: "a sample trust". description by kathryn sayre, edited by anna tatum (typescript); xerox of tatum letter. tragic america: "a sample trust". description by kathryn sayre, jan. , with comments by evelyn light (typescript). tragic america: typesetting copy. - tragic america: translator's note comparing american wages with american living costs. tragic america: corrections to be made in future printings. tragic america: corrections sent to td by kathryn sayre. tragic america: book jackets. tragic america: miscellaneous. note see also box , folder , for excerpts of tragic america in italian in  ottobre. tragic america: translation into french of chap. ("who owns america?") and chap. ("is america dominant?"). tragic america: carbon of typesetting copy. - tragic america: st galley proofs, revised. tragic america: st galley proofs with corrections. tragic america: nd galley proofs. tragic america: nd galley proofs with corrections. tragic america: page proofs. s.  america is worth saving. box folder america is worth saving: letter and notes from oskar piest; plan of book and copies of piest's notes as revised by td. 
america is worth saving: "are the masses worth while". america is worth saving: notes and clippings for "will american democracy endure?". - america is worth saving: notes and clippings for "what should be the objectives of the american people?". america is worth saving: notes and clippings for "has america a `save the world' complex?". - america is worth saving: notes and clippings for "what are the defects of american democracy?". - america is worth saving: notes and clippings for "what is democracy?". america is worth saving: notes and clippings for "scarcity and plenty". america is worth saving: notes and clippings for "europe and its entanglements". america is worth saving: notes for "english critics of english imperialism". america is worth saving: notes for "can the british endure?". america is worth saving: notes and clippings for "has england democratized the peoples of its empire?". america is worth saving: "have english and american finance cooperated with hitler to destroy democracy?". america is worth saving: notes and clippings for "does england love us as we love england?". america is worth saving: notes for "how democratic is england?". america is worth saving: notes and clippings for chapters on england. - america is worth saving: notes and clippings for russia. - america is worth saving: notes and clippings for "the lesson of france". - america is worth saving: notes and clippings for "practical reasons for keeping out of war". - america is worth saving: notes and clippings for "a few kind words for your uncle samuel". america is worth saving: notes and clippings for chaps. on america. - america is worth saving: clippings on tom mooney case. america is worth saving: foreword. america is worth saving: contents and chap. , "does the world move?". america is worth saving: chap. , "scarcity and plenty". america is worth saving: chap. , "europe and its entanglements". america is worth saving: chap. , "has america a 'save the world' complex?". 
america is worth saving: chap. , "practical reasons for keeping out of war". america is worth saving: chap. , "does england love us as we love england?". america is worth saving: chap. , "how democratic is england?". america is worth saving: chap. , "has england democratized the peoples of its empire?". america is worth saving: chap. , "english critics on [of] english imperialism". america is worth saving: chap. , "has england done more for its people than nazism [fascism] or communism [socialism]?". america is worth saving: chap. , "what is democracy?". america is worth saving: chap. , "what are the defects of american democracy?". america is worth saving: chap. , "what are the objectives of american finance?". america is worth saving: chap. , "have english and american finance cooperated with hitler to destroy democracy?". america is worth saving: chap. , "can the british empire endure?" ("can the british endure?"). america is worth saving: chap. , "will american democracy endure?". america is worth saving: chap. , "the lesson of france". america is worth saving: chap. [ ], "what should be the objectives of the american people?". america is worth saving: chap. [ ], "a few kind words for your uncle samuel". america is worth saving: chap. [ ], "a few kind words for your uncle samuel". america is worth saving: typesetting copy of book revisions by td, helen dreiser, william lengel, and?. - america is worth saving: discarded typescript fragments. america is worth saving: lawyer's list of potentially libelous statements and td's responses. america is worth saving: st unrevised galley proofs containing material later omitted. america is worth saving: st page proofs. t.  the bulwark. box folder the bulwark: xerox of letter from louise campbell re origin of early ms; synopsis of characters. the bulwark: early ms (chaps. i, ii). - the bulwark: early ms (chap. iii). - the bulwark: early ms (chap. iv). - the bulwark: early ms (chap. v). the bulwark: early ms (chap. vi). 
- the bulwark: early ms (chap. vii). the bulwark: early ms (chap. viii). - the bulwark: early ms (chap. x). - the bulwark: early ms (chap. xi). - the bulwark: early ms (chap. xii). the bulwark: early ms (chap. xiii). the bulwark: early ms (chap. xiv). - the bulwark: early ms (chap. xv). - the bulwark: early ms (chap. xvi). - the bulwark: early ms (chap. xvii). the bulwark: early ms. - the bulwark: copy meant for publicity for publication. the bulwark: financial version (?) (chaps. i-iv); notes by td and marguerite tjader harris. description some chaps. incomplete; numbers at bottom of pages should be disregarded. - the bulwark: financial version(?) (chap. v). - the bulwark: financial version(?) (chap. vi?). the bulwark: financial version(?) (chaps. xi-xxiv). - the bulwark: financial version(?) (chaps. xxvi-xxvii). - the bulwark: financial version(?) (ms fragments [some written by estelle kubitz]). the bulwark: financial version(?) (chaps. i-xxvii). - the bulwark: green hard cover and pages found inside. - the bulwark: red hard cover; early typeset version of chap. i. the bulwark: papers found inside red hard cover. - the bulwark: notes and fragments on quakerism; some copied by helen dreiser. - the bulwark: ms (chaps. ii-xxxvii). - the bulwark: order and contents for chaps. for part ii; typed summary of end of part i. description includes chaps. that were originally marked for part ii. the bulwark: ms (part ii). description some chaps. incomplete; notes on ms by marguerite tjader harris; numbers on bottom of pages should be disregarded. - the bulwark: ms (part ii). - the bulwark: ms (part iii). - the bulwark: discarded ms fragments (part i). the bulwark: discarded ms fragments (part ii). the bulwark: discarded ms fragments (part iii), some dictated by td to marguerite tjader harris. the bulwark: early typescript (part i). - the bulwark: early typescript (part ii, chaps. - ). the bulwark: early typescript (part ii, chaps. - ). 
- the bulwark: early typescript (part iii, chaps. - , finis). - the bulwark: typescript, - . description dates td worked on this version after beginning again in the s [the - typescript extends into ; parts i and ii are divided differently in the final version; numbers on the bottom of pages should be disregarded.]. the bulwark: typescript, - . description sample chaps. i-iv sent to balch of g. p. putnam's sons, . - the bulwark: typescript (part i, chaps. i-xxxv), - . general note (multiple versions of some chaps.) [handwritten corrections on these chaps. by td, helen dreiser, marguerite tjader harris] - the bulwark: typescript (part ii, chaps. a (xxxvi)-e), - . - the bulwark: revised typescript (part i: chaps. i- ); corrections by td, helen dreiser, marguerite tjader harris, - . - the bulwark: outline of plots and chapters as planned with note about completion of  the bulwark, oct. . the bulwark: unedited typescript. description folder and note by marguerite tjader harris [part i typed by helen dreiser; parts ii and iii typed by marguerite tjader harris]. the bulwark: unedited typescript (part i: introduction, chaps. i-xxiv), . - the bulwark: unedited typescript (part ii: chaps. xxv-li), . - the bulwark: unedited typescript (part ii: chap. lii; part iii: chaps. liii-lvi), . the bulwark: unedited typescript (part iii: chaps. lvii-lxx, finis), . - the bulwark: edited typescript (part i: introduction, chaps. i-ii), . description note from marguerite tjader harris [corrections in edited typescript by helen dreiser, marguerite tjader harris, louise campbell; part i typed by helen dreiser; parts ii and iii typed by marguerite tjader harris]. the bulwark: edited typescript (part i: chaps. iv [iii]-xxi), . - the bulwark: edited typescript (part ii: chaps. xxii-xliv), . - the bulwark: edited typescript (part ii: chaps. xlv-lii(xlvii); part iii: liii(?)), . the bulwark: edited typescript (part iii: chaps. xlviii-lxiv, finis), . 
- the bulwark: typesetting version (front matter; reviewer's proof; note by marguerite tjader harris). the bulwark: typesetting version (introduction, part i: chaps. - ). - the bulwark: typesetting version (part ii: chaps. - ). - the bulwark: typesetting version (part iii: chaps. - , finis). - the bulwark: book jackets. "the bulwark": u.s. state department radio script, presented , as a book review, sept. . - the bulwark: condensed version, published in  omnibook, july. the bulwark: condensed version in french ("le rempart") in  omnibook (paris: edition française, mars ). the bulwark: st galley proofs. the bulwark: st galley proofs, uncorrected. the bulwark: discarded typescript fragments from all versions; corrections by td, louise campbell, marguerite tjader harris. - u.  the stoic. box folder the stoic: publisher's summary of  the stoic and "the trilogy of desire"; list of persons, businesses, and places mentioned, . the stoic: notes on cowperwood and london subway system. - the stoic: summary of cowperwood. - the stoic: summary of berenice and aileen. the stoic: summary of ethel yerkes and gladys unger. the stoic: summary of all characters. the stoic: summary of settlement of cowperwood's property and affairs. the stoic: queries, m.e.l. on typescript, june ; note. the stoic: notes and clippings on book's characters and events. - the stoic: notes and clippings on book's characters and events. - the stoic: notes and clippings on book's characters and events. - the stoic: typed versions of some original notes in other folders. - the stoic: court records relating to the will of charles yerkes. the stoic: notes on architecture, furniture, art, musicians, books, writers, actors (for  the stoic ?). the stoic: miscellaneous. - the stoic:  national geographic with article on norway marked by td, july. the stoic: notes on characters and surviving manuscripts and typescripts by evelyn light. the stoic: auction catalogue of the charles t. yerkes art collection, . 
the stoic: supreme court brief on behalf of louis owsley, executor of charles yerkes; note. the stoic: housman et al. v. owsley, brief for plaintiffs, . the stoic: housman et al. v. owsley, referee's opinion, . the stoic: early ms (chaps. i-x, versions each of chaps. , , ); some dictated by td to clara clark(?); see chaps. xvi (third version), xvii, xviii. - the stoic: st, nd, and rd early typescripts, revised (chap. x). - the stoic: ms (chap. xi). - the stoic: st and nd early typescripts, revised (chaps. xi, xii). - the stoic: ms (chap. xiv). the stoic: early typescript (chap. xv[xiv?]). the stoic: ms (chap. xv). the stoic: st and nd(?) early typescript (chap. xv). - the stoic: ms (chap. xvi). - the stoic: ms (chaps. xvii-xxv). - the stoic: early revised typescript (chap. xxxvi). the stoic: ms (chap. xxxvi). - the stoic: ms (chap. xxxvii). - the stoic: ms (chap. xxxviii). - the stoic: ms (chap. xxxix). the stoic: ms (chap. xl); note from td. the stoic: ms (chaps. xli, ). - the stoic: early revised typescript (chap. xliii). the stoic: ms (chaps. xliv-xlix). - the stoic: ms (chaps. li-liv). - the stoic: typescript a (chaps. i- , no chap. ) with corrections by td, helen dreiser, and louise campbell. - the stoic: typescript a carbon, with corrections (chaps. i- , no chap. ). - the stoic: typescript b (chaps. i- ) with corrections by td and helen dreiser. - the stoic: corrected typescript b (chaps. - ) p.s. concerning good and evil, with corrections by td and helen dreiser. - the stoic: typescript edited by anna tatum (chaps. i- , no chaps. , ). - the stoic: louise campbell typescript (chaps. - , no chap. ) p.s. concerning good and evil, with revisions by lc, helen dreiser, and?. - the stoic: (chap. ) prepared by helen dreiser from notes by td(?); chap. fragments. the stoic: revised louise campbell typescript, typed by her (chaps. - ). - the stoic: revised louise campbell typescript, typed by her (chaps. - ). - the stoic: typesetting copy (chaps. - , appendix). 
- the stoic: synopsis. the stoic: literary criticism written for publicity? (ms in helen dreiser's handwriting). the stoic: galley proofs, with corrections by helen dreiser, . the stoic: front matter and page proofs, with corrections by helen dreiser, . the stoic: discarded fragments and chaps. from various versions. - the stoic: early chaps. edited by louise campbell. - v.  philosophical notes. arrangement td's outline of categories for this material has been followed, but his original order of papers within the categories cannot be reconstructed, because the papers have been reorganized by at least two people since his death: sydney horovitz and marguerite tjader harris. some of the material in these folders has been typed and annotated by harris. the early folders within each category contain the material that she selected for use in her book notes on life (see boxes - ). td's long manuscripts in each category have been placed at the beginning of their respective categories, preceding the notes and clippings. box folder philosophical notes: notes and outlines by sydney horovitz, . philosophical notes: td's outlines. philosophical notes: introduction by john cowper powys. philosophical notes: early articles expressing td's philosophy: "the force of a great religion" and "what i believe," note by marguerite tjader harris. philosophical notes: i . mechanism called the universe, "mechanism called the universe". philosophical notes: i . mechanism called the universe, "the mighty atom". philosophical notes: i . mechanism called the universe, notes, clippings, mss. - philosophical notes: i . mechanism called the universe, notes, clippings, mss. - philosophical notes: i . mechanism called the universe, notes, clippings, mss. - philosophical notes: i . mechanism called life, notes, clippings, mss. - philosophical notes: i . mechanism called life, notes, clippings, mss. - philosophical notes: i . mechanism called life, notes, clippings, mss. - philosophical notes: i . 
necessity for repetition, notes, clippings, mss. philosophical notes: i . material base of form—"the problem of form". philosophical notes: i . material base of form, outline and notes for an essay on form; note from marguerite tjader harris. - philosophical notes: i . material base of form, notes, clippings. - philosophical notes: i . material base of form, notes, clippings, mss. - philosophical notes: i . the factor called time, notes, clippings, mss. - philosophical notes: i . the factor called chance, notes, clippings, mss. - philosophical notes: i . the factor called chance, notes, clippings, mss. - philosophical notes: i . weights and measures, notes, clippings, mss. - philosophical notes: i . mechanism called man, "you, the phantom," typescript, note, and printed version. philosophical notes: i . mechanism called man, notes, clippings, mss. - philosophical notes: i . mechanism called man, notes, clippings, mss. - philosophical notes: i . mechanism called man, notes, clippings, mss. - philosophical notes: i . physical and chemical character of his actions, "us". philosophical notes: i . physical and chemical character of his actions, notes, clippings, mss. - philosophical notes: i . mechanism called mind, notes, clippings, mss. - philosophical notes: i . mechanism called mind, notes, clippings, mss. - philosophical notes: i . mechanism called mind, notes, clippings, mss. - philosophical notes: i . the emotions, notes, clippings, mss. - philosophical notes: i . the emotions, notes, clippings, mss. - philosophical notes: i . the so-called progress of mind, notes, clippings, mss. - philosophical notes: i . mechanism called memory, notes, clippings, mss. - philosophical notes: i . myth of individuality—"the myth of individuality". philosophical notes: i . myth of individuality, notes, clippings, mss. - philosophical notes: i . myth of individual thinking, "it". philosophical notes: i . myth of individual thinking, notes, clippings, mss. - philosophical notes: i . 
myth of individual thinking, notes, clippings, mss. - philosophical notes: i . myth of free will—"suggesting the possible substructure of ethics," "old" typescript and "new" typescript. - philosophical notes: i . myth of free will, notes, clippings, mss. - philosophical notes: i . myth of individual creative power—"myth of the creative mind". - philosophical notes: i . myth of individual creative power, notes, clippings, mss. - philosophical notes: i . myth of individual creative power, notes, clippings, mss. - philosophical notes: i . myth of individual possession. - philosophical notes: i . myth of individual possession, notes, clippings, mss. - philosophical notes: i . myth of individual responsibility, "if man is free, so is all matter". philosophical notes: i . myth of individual responsibility, "kismet". philosophical notes: i . myth of individual responsibility, "responsibility". philosophical notes: i . myth of individual responsibility, notes, clippings, mss. - philosophical notes: i . myth of individual and race memory, notes, clippings, mss. - philosophical notes: i . the force called illusion, "concerning mycteroperca bonaci". philosophical notes: i . the force called illusion, "man and romance". philosophical notes: i . the force called illusion—"the myth of reality". - philosophical notes: i . the force called illusion, notes, clippings, mss. - philosophical notes: i . the force called illusion, notes, clippings, mss. - philosophical notes: i . varieties of force, "the force of a great religion". philosophical notes: i . varieties of force, "on the dreams of our childhood". philosophical notes: i . varieties of force, "some additional comments on the life force, or god". philosophical notes: i . varieties of force, notes, clippings, mss. - philosophical notes: i . varieties of force, notes, clippings, mss. - philosophical notes: i . transmutation of personality—"transmutation of personality". - philosophical notes: i . 
transmutation of personality, notes, clippings, mss. - philosophical notes: i . the problem of genius, notes, clippings, mss. - philosophical notes: ii . the theory that life is a game, notes, clippings, mss. - philosophical notes: ii . special and favoring phases of the solar system, notes, clippings, mss. philosophical notes: ii . necessity for contrast, "peace and war". philosophical notes: ii . necessity for contrast, notes, clippings, mss. - philosophical notes: ii . the necessity for limitation—"concerning the multiplicity of things". philosophical notes: ii . the necessity for limitation, notes, clippings, mss. - philosophical notes: ii . the necessity for change, "change". philosophical notes: ii . the necessity for change, notes, clippings, mss. - philosophical notes: ii . the necessity for interest and reward, notes, clippings, mss. - philosophical notes: ii . the necessity for ignorance, notes, clippings, mss. - philosophical notes: ii . the necessity for secrecy, notes, clippings, mss. - philosophical notes: ii . the necessity for youth and age, old and new, notes, clippings, mss. philosophical notes: ii . scarcity and plenty, notes, clippings, mss. - philosophical notes: ii . strength and weakness—"the strong and the weak". philosophical notes: ii . strength and weakness, notes, clippings, mss. - philosophical notes: ii . courage and fear, "courage and fear". - philosophical notes: ii . courage and fear, notes, clippings, mss. - philosophical notes: ii . mercy and cruelty, "the right to kill". philosophical notes: ii . mercy and cruelty, notes, clippings, mss. - philosophical notes: ii . beauty and ugliness, general plan, outline, notes, and partial early typescript for an essay on beauty. philosophical notes: ii . beauty and ugliness, "the problem of beauty". philosophical notes: ii . beauty and ugliness, "the problem of beauty". philosophical notes: ii . beauty and ugliness, "the value of beauty". philosophical notes: ii . 
beauty and ugliness, notes, clippings, mss. - philosophical notes: ii . order and disorder, notes, clippings, mss. - philosophical notes: ii . good and evil, "can there be good in evil". philosophical notes: ii . good and evil, "concerning good and evil". philosophical notes: ii . good and evil, "concerning good and evil," note from helen dreiser. philosophical notes: ii . good and evil, "good and evil". philosophical notes: ii . good and evil, "good and evil," typescript a. philosophical notes: ii . good and evil, "good and evil," typescript b. philosophical notes: ii . good and evil, "good and evil," typescript b revised [by william lengel?]. philosophical notes: ii . good and evil, "good and evil," typescripts c and d. - philosophical notes: ii . good and evil, "good and evil," typescript e. philosophical notes: ii . good and evil, notes, clippings, mss. - philosophical notes: ii . good and evil, notes, clippings, mss. - philosophical notes: ii . problem of knowledge—"education". philosophical notes: ii . problem of knowledge, notes, clippings, mss. - philosophical notes: ii . problem of knowledge, notes, clippings, mss. - philosophical notes: ii . problem of knowledge, notes, clippings, mss. - philosophical notes: ii . the equation called morality, notes, clippings, mss. - philosophical notes: ii . the equation called morality, notes, clippings, mss. - philosophical notes: ii . the compromise called justice—"the ultimate justice of life". - philosophical notes: ii . the compromise called justice, notes, clippings, mss. - philosophical notes: ii . the salve called religion—"religion—theory—dogma". philosophical notes: ii . the salve called religion—"saving the world". philosophical notes: ii . the salve called religion, notes, clippings, mss. - philosophical notes: ii . the salve called religion, notes, clippings, mss. - philosophical notes: ii . the problem of progress and purpose, notes, clippings, mss. - philosophical notes: ii . 
the problem of progress and purpose, notes, clippings, mss. - philosophical notes: ii . the problem of progress and purpose, notes, clippings, mss. - philosophical notes: ii . the myth of the perfect social order, notes, clippings, mss. - philosophical notes: ii . the myth of the perfect social order, notes, clippings, mss. - philosophical notes: ii . the essential tragedy of life—"a counsel to perfection". - philosophical notes: ii . the essential tragedy of life—"the essential tragedy of life". - philosophical notes: ii . the essential tragedy of life, notes, clippings, mss. philosophical notes: ii . the problem of death—"life after death". philosophical notes: ii . the problem of death, notes, clippings, mss. - philosophical notes: ii . equation inevitable—"equation inevitable" (parts , , v). - philosophical notes: ii . equation inevitable—"equation inevitable: a variant in philosophic viewpoint" (typescript a, typescript b, revised typescript b). - philosophical notes: ii . equation inevitable, notes, clippings, mss. - philosophical notes: ii . laughter, "an address all to electrons, protons, neutrons, deutrons, quantums". philosophical notes: ii . laughter, "an address all to electrons, protons, neutrons, deutrons, quantums". philosophical notes: ii . laughter, notes, clippings, mss. - philosophical notes: ii . music, notes, clippings, mss. - philosophical notes: "my creator", nov. . philosophical notes: "my creator", oct. philosophical notes: "my creator" inscribed by myrtle butcher, nov. ; corrections on typescript by helen dreiser. philosophical notes: td's notebook containing handwritten selections from many categories. philosophical notes: art and science, notes, clippings, mss. philosophical notes: medicine, notes, clippings, mss. - philosophical notes: the myth of complete understanding, notes. philosophical notes: the myth of pure reason, notes. philosophical notes: necessity for union, notes, clippings, mss. philosophical notes: on friendship, notes. 
philosophical notes: on the credibility of the senses, notes. philosophical notes: pleasure and pain, notes, clippings, mss. - philosophical notes: the wisdom of the unconscious, notes, clippings. philosophical notes: notes from the vedas and the upanishads. - philosophical notes: unclassified notes (menninger). - philosophical notes: unclassified notes (dr. wm. j. robinson). - philosophical notes: unclassified notes (wm. moulton marston, "monkey thinking"). philosophical notes: unclassified notes (henry thomas,  the story of the human race). - philosophical notes: unclassified notes (robert chambers,  the life of the cell). philosophical notes: unclassified notes (  riddle of the universe). - philosophical notes: unclassified notes (remy de gourmont). - philosophical notes: unclassified notes (  green laurels). philosophical notes: unclassified notes (loeb). - philosophical notes: unclassified notes ("lesson no. : the nature of the human animal"). philosophical notes: unclassified notes (  data of ethics). philosophical notes: unclassified notes (henry adams, "the rule of phase applied to history"). philosophical notes: unclassified notes (crile). - philosophical notes: unclassified notes (carrel). - philosophical notes: unclassified notes (william james,  a pluralistic universe). - philosophical notes: unclassified notes (townsend). philosophical notes: unclassified notes (jules de gaultier,  bovarism). - philosophical notes: unclassified notes (thomas henry huxley,  essays selected from lay sermons). - philosophical notes: unclassified notes (august strindberg,  zones of the spirit). philosophical notes: unclassified notes (gustave le bon,  the crowd). - philosophical notes: unclassified notes (oliver lodge,  ether and reality). - philosophical notes: unclassified notes (  man, the unknown). - philosophical notes: unclassified notes (  outposts of science). philosophical notes: unclassified notes (  march of science). 
philosophical notes: unclassified notes (schrodinger). philosophical notes: unclassified notes (clendening). - philosophical notes: unclassified notes (sigmund freud,  the future of an illusion). philosophical notes: unclassified notes (robert a. millikan,  time, matter, and values). philosophical notes: unclassified notes (lemon,  from galileo to cosmic rays). - philosophical notes: unclassified notes (p. w. bridgman,  the logic of modern physics). - philosophical notes: unclassified notes. - philosophical notes: unclassified notes. - philosophical notes: reprints by dr. albert f. blakeslee: "demonstration of differences between people in the sense of smell" and "a dinner demonstration of threshold differences in taste and smell", . philosophical notes: a. a. brill, "the psychopathology of noise," ; "the psychopathology of selections of vocations," . philosophical notes: c. l. christensen, "man and woman in prehistory," edwin g. conklin, "a generation's progress in the study of evolution," . philosophical notes: sigmund freud, "three contributions to the theory of sex", . philosophical notes: basil c. h. harvey, "the nature of vital processes according to rignano", . philosophical notes: purl holzer,  mind and consciousness, v. , . philosophical notes: jacques loeb, "the mechanistic conception of life", . philosophical notes: j. w. miller, "accidents will happen," and "the paradox of cause," thomas hunt morgan, "the relation of genetics to physiology and medicine," . philosophical notes: oscar riddle, "the confusion of tongues," and "the relative claims of natural science and of social studies to a core place in the secondary school curriculum: a.—for natural science," . philosophical notes: wm. seifriz, "the structure of protoplasm," h. riley spitler, "some circulatory changes caused by ocular fixation of selected light frequencies in the visible range," . philosophical notes: leonard thompson troland, "the chemical origin and regulation of life", . 
philosophical notes: arthur waley, "zen buddhism and its relation to art", . w.  notes on life. box folder notes on life: "memo on a project for editing dreiser's  notes on life, " by marguerite tjader harris, submitted to the university of pennsylvania dreiser committee, march . notes on life: report of the material taken from the university of pennsylvania library in by m. t. harris, aug. . notes on life: readers' reports. notes on life: td's outline, annotated by m. t. harris. notes on life: miscellaneous notes re contents of book and introductory statements by m. t. harris. notes on life: "editorial report," by m. tjader. notes on life: "editorial report," by m. tjader and john mcaleer. notes on life: notes by dr. frank muhlfeld; note to muhlfeld from m. t. harris. notes on life: editor's foreword by m. tjader, april. notes on life: end notes and letter to m. t. harris, dec. . notes on life: tentative rough draft and outline (part i); introductory material, mechanism called the universe, mechanism called life, summer-autumn. notes on life: necessity for repetition, material base of form, factor called time. notes on life: factor called chance, weights and measures, mechanism called man. notes on life: physical and chemical character of his actions, mechanism called mind. notes on life: the emotions, the so-called progress of mind, mechanism called memory. notes on life: myth of individuality, myth of individual thinking, myth of free will. notes on life: myth of individual creative power, myth of individual possession, myth of individual responsibility. notes on life: myth of individual and race memory, the force called illusion. notes on life: varieties of force. notes on life: transmutation of personality, the problem of genius. notes on life: part ii: theory that life is a game, special and favoring phases of the solar system. notes on life: necessity for contrast, necessity for limitation, necessity for change. 
notes on life: necessity for interest and reward; necessity for ignorance; necessity for secrecy; necessity for youth and age, old and new. notes on life: scarcity and plenty, strength and weakness, courage and fear, mercy and cruelty. notes on life: beauty and ugliness, order and disorder, good and evil. notes on life: problem of knowledge, equation called morality, compromise called justice. notes on life: salve called religion, problem of progress and purpose, myth of a perfect social order. notes on life: essential tragedy of life, problem of death. notes on life: equation inevitable. notes on life: laughter, music. notes on life: typescript sent to m. t. harris's agent. - notes on life: edited by marguerite tjader harris and john mcaleer. - notes on life, edited by marguerite tjader and john mcaleer. - x.  an amateur laborer. box folder an amateur laborer: note from td; fragment from chap. i. an amateur laborer: "the cruise of the idlewild". an amateur laborer: "the mighty burke". an amateur laborer: "the toil of the laborer". an amateur laborer (chaps. i-xxiii). - an amateur laborer: (chaps. xxiii-xxv). - an amateur laborer: ms fragments. - an amateur laborer (pa. ed.): the pennsylvania edition, contents, acknowledgments, preface. an amateur laborer (pa. ed.): introduction by richard w. dowell. an amateur laborer (pa. ed.): editorial principles by james l. w. west iii. an amateur laborer (pa. ed.): textual apparatus. an amateur laborer (pa. ed.) (chaps. i-xxv). - an amateur laborer (pa. ed.): fragments. - an amateur laborer (pa. ed.): explanatory notes. an amateur laborer (pa. ed.): illustration page, word division, design specifications. an amateur laborer (pa. ed.): fragments not used in book. - v.  td writings: essays. series description this series includes dreiser's published and unpublished essays, reviews, and letters to the editor. 
some photostats of articles that dreiser wrote as a newspaper reporter are filed here as well; printed versions of other dreiser newspaper articles are located in the clippings file or on microfilm. in addition, essays for series developed by dreiser, whether written by him or by someone else, are housed here. they are collected together under the series title (e.g., "baa! baa! black sheep," "i remember, i remember"). the essay title and author are listed on the folder. the order of filing the holdings for each essay is the same as that followed in td writings: books: notes, manuscripts, typescripts, proofs, and printed versions. for published essays, the journal and year of first publication are noted on the folder. the essays are filed alphabetically by the title on the first page of the essay; the title used for publication is also noted on the folder with the other publication information when it differs from the first-page title. if the publication title is radically different from the original title, researchers can find in appendix a a cross-reference under the publication title to the essay's title in the collection. some of dreiser's published essays were later included in his nonfiction book publications: a traveler at forty,  twelve men,  hey rub-a-dub-dub,  newspaper days (a book about myself),  the color of a great city,  dreiser looks at russia,  a gallery of women,  my city, and america is worth saving. researchers interested in some of these essays should check for holdings in both td writings: books and td writings: essays, because versions of the essay may be found in both locations. box folder a. - "baa! baa! black sheep" series for esquire. - bal - com. - con - el. - em - go. - gr - h. - i - "i find...". - "i remember! i remember!" series - is. - it - l. - ma. - me - on. - ou - p. - r. - s - "this florida...". - "this madness:" "aglaia"; "elizabeth". - "this madness:" "sidonie". - "this madness:" "camilla". 
- "this madness:" "aglaia," "elizabeth," "sidonie". - tho - "why help...". - "why i..." - z and untitled. - vi.  td writings: short stories. series description dreiser wrote many more short stories than were ever published and started many stories that he never completed. he often recorded and filed ideas for them: sometimes a title with a plot summary, sometimes only a title. friends and researchers that he employed would also send him newspaper clippings describing crimes with an unusual psychological twist and inexplicable events involving humans or phenomena in the natural world: he collected and filed such information under "ideas for stories." also included are clippings that describe crimes that dreiser considered using as the basis for what would later become an american tragedy. the first boxes contain all completed and unfinished short stories (arranged alphabetically), including those consisting only of a title and plot summary. [appendix b comprises an alphabetical list of the short stories.] filed next are two boxes of ideas for short stories; they contain lists of titles only or clippings that he collected or that were sent to him. as in the previous series, the order of arrangement for the manuscripts for each title is chronological: notes, manuscripts, typescripts, proofs, and printed version. first publication data are noted on the folder of published stories. box folder a - d. - e - hei. - her - lo. - ly - p. - r - s. - t - z and untitled. - ideas for short stories. - ideas for short stories (wynkoop murder case). - vii.  td writings: poems. series description because poems are filed in two locations in the dreiser papers, researchers should check both in this series and in td writings: books under " moods" ( boxes - ). copies or versions of some poems are found in both locations. dreiser began writing poetry in the s and continued throughout his lifetime; the collection contains poems from the entire period. 
in boxes through the poems are arranged alphabetically by title. this grouping includes poems written by dreiser but scored for music by someone else: they are filed under the title of the poem, with the name of the composer of the music listed on the folder. boxes and contain selections of dreiser's poems, chosen by dreiser and others, on particular themes or for specific purposes. [appendix c comprises an alphabetical list of the poems.] box folder a - for. - fou - l. - m - q. - r - y. - selected poems for a small book of poetry. - rhymed verse. - selection of poems by td for?. - "sonnets in recollection". verses, . selection of poems typed by?. description for inclusion in robert palmer saalbach, selected poems from moods  by theodore dreiser, ? poems by td translated into german by f. c. steinermayr and lind goldschmidt. - poems by td typed by estelle kubitz. - viii.  td writings: plays. series description one of dreiser's first pieces of creative writing was a playscript, jeremiah i, which is in this collection. dreiser enjoyed writing plays and often had ideas for playscripts, which he would briefly summarize with the intent of developing them later. sometimes he collaborated with another person in translating his idea into a playscript. this series contains both fully developed playscripts and dreiser's ideas for plays, arranged alphabetically. some of dreiser's plays were scored for music, in which case the play is filed under its title and the name of the composer is listed on the folder. in addition to the plays in this series, the researcher should see boxes - , which contain playscripts of  the "genius," some of which were written by dreiser. [appendix d comprises an alphabetical list of the plays.] box folder a - c. - d - j. - l - z and untitled fragments. - ix.  td writings: screenplays and radio scripts. 
series description even before his arrival in california in , dreiser had been impressed by the popularity of motion pictures and by the size of the potential audience for movies compared with that for books. he believed that screenwriting could boost his income dramatically. in addition to creating new screenplays, dreiser also saw possibilities for screen adaptations of his novels and short stories. during his lifetime, motion picture versions of an american tragedy, jennie gerhardt, and my gal sal were produced, although dreiser himself did not write any of these screenplays. dreiser encouraged other writers who wanted to adapt his novels and short stories. in fact, he often worked with other writers on screenplays: he presented an idea or a plot and his collaborator translated it into an actual screenplay. he followed a similar pattern with radio scripts. no screenplays written by dreiser were ever produced. this series includes ( ) screenplays and radio scripts written by dreiser, ( ) those written by a collaborator based on an idea by dreiser, and ( ) dreiser's ideas for screenplays that were never developed. the file on "revolt or tobacco" also includes notes and clippings on the tobacco industry and photographs from a field trip to tennessee that were used as background material in writing the script, as well as incorporation papers and bylaws for super pictures, inc., the company created to produce the movie. [appendix e comprises an alphabetical list of the screenplays and radio scripts.] box folder a - k. - l - p. - "revolt or tobacco". - "revolt or tobacco". - "revolt or tobacco". note see also box , folder for reviews of borden deal's book, the tobacco men, which was based on td's notes for this screenplay. - s - z and untitled. - x.  td writings: addresses, lectures, interviews. series description the writings in this series are filed chronologically. 
some addresses and interviews were published; thus, the holdings in this series range from notes to printed versions. dreiser received many requests for interviews and for answers to specific questions. after replying, he often filed these requests under "questions and answers" without indicating the source or the date. if the year can be determined or estimated approximately, the material is filed using that year; if not, the material is filed at the end of the chronologically arranged folders. box folder - . - miscellaneous questions and answers, - . - xi.  td writings: introductions, prefaces. series description writings in this series include everything from research notes to printed versions and range in length from a few paragraphs to a long essay. in addition to traditional introductions to books, dreiser wrote introductory material for catalogs of paintings, new literary journals, labor pamphlets, and film series. notes for the introductions of harlan miners speak and  the living thoughts of thoreau are extensive and varied in character; some of them were collected by others but annotated by dreiser. box folder - . - td's introduction to harlan miners speak, . - may . - nov.- . - xii.  journals edited by td. series description before his novel-writing career really took hold, dreiser was editor of ev'ry month,   smith's magazine,   broadway magazine,   the delineator , and  bohemian magazine. in the s, when he became more involved in political issues, he agreed to be an editor of  american spectator. holdings in this series include some notes, financial data, production material, and proposed articles for broadway magazine, bohemian magazine, and  american spectator; they also include some issues of  ev'ry month, broadway magazine, bohemian magazine, and  american spectator. 
researchers interested in dreiser's career at  the delineator should also see folder (box ) and box , which contains a scrapbook of clippings documenting dreiser's editorship of this journal. box folder notes: contents and cost sheets for the issues of broadway magazine, july and august. notes: production material and proposed articles for bohemian magazine. - notes: american spectator: new york times editorial, ; policy statements; potential contributors, oct. . notes: american spectator: ideas for articles. notes: american spectator: suggestions for articles. notes: american spectator: articles written and expected. notes: american spectator: comments re contributors or articles from evelyn light to td. notes: american spectator: "the editors believe" material. notes: american spectator: material submitted for publication. - notes: american spectator: information on distribution, advertising, printing, and financial matters supplied to td by evelyn light. notes: american spectator: radio broadcast, . notes: american spectator: miscellaneous. copies: ev'ry month, october. copies: ev'ry month, nov-dec. - copies: ev'ry month, jan. copies: ev'ry month, march-may . - copies: ev'ry month, nov-dec. - copies: ev'ry month, march . copies: ev'ry month, april- may. copies: ev'ry month, june- may. copies: broadway magazine, . copies: bohemian magazine, . copies: american spectator, nov.- oct. note these copies are very fragile. xiii.  notes written and compiled by td. series description dreiser's note-taking habits probably began during his days as a newspaper reporter. he took notes (or hired others to do so), kept diaries, and collected clippings as an aide-mémoire for his writing projects. dreiser's habit was to file the notes with the relevant manuscripts and typescripts for a piece of writing, and his practice has been followed in organizing this collection. 
notes on the life and career of charles yerkes, for example, are housed with the manuscripts for the financier, the titan, and the stoic, because they were an integral source of information for the writing of those works. the material filed in this series indicates the breadth of dreiser's interests and concerns and the kinds of sources that he consulted when doing research. the notes in this series may have been collected with particular projects in mind that were never written or published; they may represent information dreiser wanted for general purposes; they may have been kept by chance or for idiosyncratic reasons. they probably had multiple uses: what dreiser labeled "notes on the american scene" and "capital and labor" might have been used in any number of his political writings in the s and s, including his book tragic america. notes are filed alphabetically by subject, so researchers should check the container list for topics of interest. the quantity of notes on any subject varies from a paragraph to more than a box. because of the fragmentary nature of the holdings, the categories "novels, proposed" and "novels, unfinished" are housed in this series rather than in td writings: books. one of the unfinished novels, "the rake," was dreiser's early attempt to write what eventually became an american tragedy. dreiser collected clippings and notes and wrote a prologue and several chapters for this work but decided at some point that this was not the story that he wanted to write. a. notes: a - cap. box folder notes on the american scene: includes notes on political parties, corporations, charity, banks, revision of the new york constitution [many of these notes probably were collected for the writing of tragic america]. - notes on amnesia; idea for a story about an amnesia victim. notes on td's books. notes on capital and labor (many of these notes were probably collected for the writing of tragic america). - b. notes: cap.
box folder notes on capital and labor. - notes on capital and labor: united states v. haywood et al., aug. - . - c. notes on the catholic church. box folder "sex". "adultery, the church and law", after . "the catholic church and the labor movement," by david j. saposs. "catholics in education": outline and division into chapters by esther mccoy(?). "catholic's progress," by ?. miscellaneous notes on the catholic church. - "the church and double-quick time". version of "the church and wealth in america" in tragic america. "church support in the u.s.," from a thesis by michael n. kremer. "church support in the united states". "church support in the united states," by michael n. kremer. "concerning mr. guthrie's opinion on church and state in mexico," by charles c. marshall. "the holy roman church". letters re the catholic church. "my quarrel with the catholic church". "a roman catholic and the presidency," by charles c. marshall. "the roman catholic church as a business and political organization," by ?. "simony: an historical synopsis and commentary," by rev. raymond a. ryder. "the support of the catholic church": restatement of data from "church support in the united states," by michael n. kremer. d. notes: ce - l. box box folder notes on censorship. notes on dictatorship: european, central and south american countries, and u.s. notes on dreams: accounts of td's dreams. - notes and articles re the federal arts program. - notes on and by charles fort; autobiographical statement; list of his writings; reviews of his works; fort memorabilia. - notes on germany. notes on emma goldman. - notes on alexander hamilton, grover cleveland, and james g. blaine. notes on insurance by ?. notes on interdependence. notes on japan, - . notes on the jewish question. notes for an article on los angeles. - e. notes: m - n. box folder notes on the mechanics & traders-union bank scandal, brooklyn, - . - notes on music. lists of names and word substitutions.
novels, proposed: outlines. novels, unfinished: "mea culpa". - novels, unfinished: "our neighborhood: a book of present day life," by c. t. allison (written in td's hand: foreword; chaps. i, ii, iii). note see also "hollywood now," box . - f. notes: n. box folder novels, unfinished: "the rake": list of incidents; prologue; chaps. (some incomplete); notes; related clippings. - g. notes: o - p. box folder ouija board notes. notes on philosophers. notes on philosophy and science typed by estelle kubitz. - notes on production and machinery taken from howard scott of technocracy. h. notes: r - z. box folder td's notes on reading. - notes on realism and other literature. notes on russia, - . - notes on russian writers. notes on relief for spain; copies of the war in spain, ; copies of voice of spain, . miscellaneous notes. philadelphia diary: prescriptions, - . description xerox of originals at lilly library, univ. of indiana. philadelphia diary, oct. - feb. . - philadelphia diary: explanatory letters and transcription by neda westlake for entries for, oct. - feb. . - xiv. td diaries. series description dreiser kept two types of diaries at irregular intervals throughout his lifetime: the kind that noted his daily activities, thoughts, and contacts and the kind that recorded events, people, places, and reflections that he intended to use in a piece of writing. this series contains the former type of diary; examples of the latter are housed with the published work that they helped to generate. for example, the diaries from dreiser's european tour in - , used in writing a traveler at forty, are stored with the typescripts for that book; likewise, the diary that dreiser kept on his trip to russia in - is located with the typescripts for dreiser looks at russia. dreiser's private diaries contain more than pages of notes; he often pasted in postcards, prescriptions for medicine, letters, menus, and souvenirs.
sometimes he made drawings of certain architectural details or designs that he liked. at the end of the container list for this series is a note regarding the location of other diaries in the collection. box folder diary fragments, - . savannah diary, jan.- feb. - savannah diary: transcription by neda westlake for entries for, jan.- feb. greenwich village diary: xerox of letters establishing provenance of diary; entries for, may - march . - indiana diary, june -july . - diary of trip to grove and asbury park, new jersey, july - . helen diary, july - july . - florida diary, maps, bills, guides, telegrams, miscellaneous, - . florida diary, dec. - jan. . - florida diary: copy of sunland magazine, jan. florida diary: newspaper clippings re real estate development in florida, dec. , , ; jan. . - european diary, june -oct. . theodore dreiser: american diaries, (thomas p. riggio, editor; james l. w. west iii, textual editor) (philadelphia: university of pennsylvania press, ): suggested illustrations, - , . american diaries (pa. ed.): copies of correspondence re publication. american diaries (pa. ed.): front matter. american diaries (pa. ed.): introduction by riggio. - american diaries (pa. ed.): editorial principles by west. american diaries (pa. ed.): philadelphia diary; notes, oct. - feb. . - american diaries (pa. ed.): savannah diary; notes, . american diaries (pa. ed.): greenwich village diary; notes, may - march . - american diaries (pa. ed.): home to indiana; notes, . american diaries (pa. ed.): a trip to the jersey shore; notes, . american diaries (pa. ed.): helen, hollywood, and the tragedy; notes, july - july . - american diaries (pa. ed.): motoring to florida; notes, dec. - jan. . - american diaries (pa. ed.): appendix—diary fragments, - . american diaries (pa. ed.): textual apparatus.
note for other td diaries, see boxes , , (european diary, - , used in writing a traveler at forty); box (diary notes for a hoosier holiday); and box (russian diary, - , used in writing dreiser looks at russia). xv. biographical material. series description this material is difficult to categorize, as it ranges from pages from the dreiser family bible to a copy of dreiser's memorial service on january . housed here, for example, are some short autobiographical works; biographical summaries by others; lists of dreiser's writings, addresses, and places of employment; addresses of associates; papers and books stored in warehouses; personal manuscripts for sale; invitees to a simon & schuster reception at mt. kisco; and awards. the container list provides more details. box folder pages from dreiser family bible; title page from dawn. list of td domiciles and places of employment. "a dreiser chronology," by john g. moore, feb. . autobiographical sketch by td for household magazine, nov. td's account of his life for eric possell, march . list of td's writings in various forms and their owners as of (?); later lists of td manuscripts for auction, . list of td's magazine articles and other writings. writings by or about td in the state library, salem, oregon, after . accident reports: td hit by auto and auto accident involving td, helen richardson, and clara clark, , . list of invitees for simon & schuster reception for td at iroki, mt. kisco, n.y., oct. td address list. miscellaneous addresses of td associates. biographies of td in reference books. miscellaneous biographical data. press release announcing td's appointment as editor of the delineator. td's plan for making money after being fired from the delineator(?). td horoscopes. td's proposal for a society to help young authors, (?), jan. . "a literary apprenticeship," autobiographical ms (incomplete) and notes; notes for an autobiographical work, "literary experience".
architect's sketches of iroki [td's mt. kisco home], ; advertisement for sale of iroki; directions to iroki; furniture advertisement with note from evelyn light, march . note see box , folder , for map of mt. kisco. inventory of td's papers at mt. kisco and manhattan storage, . inventory of td's papers at mt. kisco and manhattan storage, revised later by td and helen dreiser, . inventory of td material at manhattan storage, annotated by helen dreiser and harriet bissell, . lists and receipts of transfers of material in storage at mt. kisco and manhattan storage, and other inventoried papers, - . miscellaneous lists. td awards; obituaries. memorial service for td, jan. . miscellaneous items re dreiser family members: edward dreiser, mary frances dreiser brennan, john paul dreiser. td notes and souvenirs from trips. note see box , folder , for souvenir map of big moose lake, new york. xvi. family members. a. paul dresser materials. description & arrangement this subseries begins with two boxes of theodore dreiser correspondence, which deals exclusively with business concerns related to the music of his brother, paul dresser. the first is correspondence between dreiser and several music publishing firms (i.e., paul dresser music, richmond music, edward b. marks, and paull-pioneer). the second houses correspondence with theodore and helen dreiser from many private and corporate correspondents concerning the making of the movie about paul dresser's life, my gal sal (this box is arranged chronologically).
the remainder of the material comprises: paul dresser sheet music, filed alphabetically by title, with miscellaneous sheet music and lyric sheets following ( boxes, a list of titles of these works may be found in appendix f); a scrapbook of articles related to paul dresser ( box); paul dresser memorabilia and clippings ( box); two plays written by paul dresser,  after many years and  timothy and clover ( / box); and dresser memorabilia collected by paul gormley, including photos, clippings and cards ( / box). box folder biographical information on paul dresser, written by td. td correspondence pertaining to paul dresser music. - td correspondence pertaining to my gal sal. - paul dresser sheet music: original board; "after the battle," "her tears drifted out with the tide". - paul dresser sheet music: "i long to hear from home," "the old flame flickers and i wonder why". - paul dresser sheet music: "on the banks of the wabash far away," "you're just a little nigger..." miscellaneous sheet music, lyric sheets. - paul dresser scrapbook. - paul dresser memorabilia and clippings. - - paul dresser material: paul gormley's collected memorabilia; plays: "after many years," "timothy and clover". - - b.  helen dreiser diaries and other writings. description & arrangement because the theodore dreiser papers contains so much material by and about helen, and because she and dreiser were associated for so many years in a business as well as a personal relationship, her writings have been gathered in a separate series. in addition to helen dreiser's daybooks, kept between and , this series contains typescripts and notes from her my life with dreiser ( ) and a movie script for a sequel to  my gal sal--"sal o' my heart." helen sometimes worked with dreiser on screenplays; her work is housed with dreiser's writings when she adapts one of his works. see, for example, her screen adaptation of  sister carrie in box , folder and her work on  my gal sal in box . 
box folder helen dreiser's daybooks, - , - . - helen dreiser's daybooks, - . - helen dreiser's daybooks, - . - genealogical chart of patges lineage. miscellaneous notes and clippings. "journey eternelle". my life with dreiser (chaps. i-li, epilogue). - my life with dreiser (fragments from chaps. - ). - my life with dreiser, miscellaneous notes and corrections. - my life with dreiser, promotional material. helen richardson [dreiser] and lucile nelson, "the blessed damozel," synopsis for a movie, . - "a few notes on the dream, manuscript which was inspired by charles fort's first full length manuscript 'x'". "sal o' my heart," movie script, . "sal o' my heart," movie script with songs by clare kummer, . c. vera dreiser correspondence. description & arrangement this material includes personal correspondence between vera dreiser and others, mainly concerning her two famous uncles, theodore dreiser and paul dresser. files are ordered alphabetically by correspondent and chronologically within each folder; incoming and outgoing letters are interfiled. following the correspondence are a few subject folders; they comprise: articles and information about dreiser; vera's diary concerning theodore; dreiser family history; notes concerning paul dresser; and memorabilia. box folder correspondents a - p. - correspondents r - z; miscellaneous notes; memorabilia. - xvii. memorabilia. a. scrapbooks. description & arrangement these scrapbooks were not all compiled by dreiser, but they all focus on his activities and interests. they are arranged chronologically, with the earliest scrapbook presenting reviews of sister carrie and the last one—kept by lorna smith between and —containing clippings and souvenirs of dreiser and helen. six scrapbooks hold reviews of dreiser's books.
in addition to the one for sister carrie, there are scrapbooks for a traveler at forty, the "genius", "twelve men," newspaper days (a book about myself), and the color of a great city. the last four are book dummies filled with blank pages, onto which clippings of book reviews are pasted. hazel godwin kept a scrapbook of clippings regarding dreiser's visit to toronto in . helen dreiser compiled six scrapbooks between and that contained christmas and other holiday cards sent to dreiser and herself; clippings about dreiser's activities and speeches and world events; programs and other souvenirs; reviews of and music from my gal sal; telegrams, cards, and letters that she received after dreiser's death; reviews of the bulwark and the stoic; and accounts of her speeches and activities. scrapbooks covering dreiser's career with the delineator, his activities between and and miscellaneous literary selections, and the all russian ballet project are also housed here. box folder sister carrie: scrapbook of reviews, - . sister carrie: folder of loose reviews found in scrapbook but not pasted in. first page of scrapbook of letters, - . miscellaneous clippings re td at the delineator. a traveler at forty: clippings of reviews, - . scrapbook kept by kirah markham of writings, some by or about td, circa - . loose items found in scrapbook. book dummies of the "genius", twelve men, newspaper days (a book about myself), and the color of a great city, each containing pasted-in reviews of the respective books, - . scrapbook kept by helen dreiser of clippings re td and current events, christmas cards, and souvenirs, - . all russian ballet, inc.: scrapbook empty except for letter to arthur carter hume, copy of woodcut of td, and a few items relating to its incorporation, nov. . scrapbook kept by helen dreiser of clippings re td and current events, reviews of my gal sal, souvenirs, and programs, - . scrapbook kept by hazel godwin re td's trip to toronto, canada, october .
scrapbook kept by helen dreiser of clippings re td and current events, music from and reviews of my gal sal, christmas and other holiday cards, programs, and souvenirs, - . scrapbook kept by helen dreiser of clippings re td and current events, programs, holiday cards, souvenirs, copies of her speeches about td, a few clippings re td's death, - . "the passing of theodore dreiser": scrapbook kept by helen dreiser, containing letters, telegrams, and cards from friends; clippings; and other memorabilia re the death of td. scrapbook kept by helen dreiser of clippings re td and his writings; some reviews of the bulwark and the stoic and of books written about td, - . scrapbook kept by lorna smith with clippings and souvenirs re td and helen dreiser, - . b. photographs. description the photographs (many of which may be viewed online) in this series range from informal snapshots to formal portraits and provide extensive documentation of the personal lives and careers of theodore and helen dreiser and vera dreiser scott (dreiser's niece). in addition to collecting individual photographs, helen compiled photograph albums that pictured her friends and relatives as well as her activities and travels with dreiser. all photographs in the collection are housed in this series with two exceptions: ( ) photographs that were enclosed with correspondence originally and that were still housed with that correspondence in and ( ) photographs that dreiser filed with research notes (these photographs have been left in place). theodore and helen dreiser, myrtle butcher (helen's sister), vera dreiser scott, and ralph fabri are the major donors of photographs to the dreiser papers.
this series comprises photographs of dreiser alone and with others; persons associated with dreiser; dreiser's parents and siblings; helen patges richardson dreiser, alone and with others; helen richardson's family album; photograph albums compiled by helen; dreiser residences; artistic representations of dreiser; edward dreiser, mai skelly dreiser, vera dreiser, and their friends and relatives; identifiable friends or associates of vera dreiser; and publicity photographs of associates of vera dreiser who were involved in musical or theatrical productions. in addition, there are photographs that have been used in publications about dreiser and to promote motion pictures based on his works. box folder photographs of td, - . photographs of td with others, - . photographs of persons associated with td. description does not include photographs of helen dreiser or of td's parents and siblings. photographs of td's parents and siblings. photographs of helen patges richardson dreiser, alone and with others, circa - . note photographs of helen with td can be found in boxes , , and . helen richardson family album, - . photo album compiled by helen richardson, , containing photographs of herself, td, friends, family, residences, and places visited, - . photograph albums compiled by helen richardson, , containing photos of herself, td, friends, family, residences, and places visited, - . photographs of dreiser residences, - . photographs of artistic representations of td. photographs that have been used in publications about td and to promote motion pictures based on his works. description illustrations from pennsylvania dreiser edition of sister carrie, an amateur laborer, theodore dreiser: american diaries, - , dreiser-mencken letters; motion picture stills from jennie gerhardt and my gal sal. photographs that have been used in periodical publications re td or his writings.
- photographs of edward dreiser, mai skelly dreiser, vera dreiser, and their friends and relatives, late s- . photographs of edward dreiser, mai skelly dreiser, vera dreiser, and their friends and relatives, - s. identifiable friends or associates of vera dreiser. publicity photographs of associates of vera dreiser who were involved in musical or theatrical productions, a - k. publicity photographs of associates of vera dreiser who were involved in musical or theatrical productions, l - z and unidentified. oversize photographs of td and his friends, relatives, and associates. arrangement arranged chronologically. - oversize photographs of vera dreiser and her family. arrangement arranged chronologically. - c. art work. description these boxes contain prints, drawings, and caricatures, some of which are originals, some copies. original prints by wharton esherick, some inscribed to dreiser, are housed here, as is the original of the bookplate made for dreiser by franklin booth. the container list outlines specific holdings. box folder adams, wayman: reproductions of second painting of td, . amick, robert: sketches of td, aug. . davis, hubert: "the essence of irony" and "the griffith family in kansas city". dürer, albrecht: "the arraignment of jesus before pilate" and "the resurrection". esherick, wharton, - . contents * "map showing good old barnegat bay and the happy ports for great sloop `kitnkat'" (annotated by esherick re td's visit june ) * "free" ( ) * "the lee rail" ( ) * "of a great city" ( ) (multiple copies, including ones inscribed to td, louise campbell, and burton rascoe, and metal plate used in printing) * "chick's ship" ( ) * illustration for tristram and iseult ( ) * august ( ) * "the bid" ( ) (lithographs) * "as i watched the ploughman ploughing" by walt whitman ( ) (woodcuts by esherick) king, alexander: caricature of td and sherwood anderson, circa . description inscribed "theodore dreiser and sherwood anderson peeping at misery." kelly, james e.
and john w. evans: drawings of thomas edison and oscar wilde by kelly, from engravings made by evans; letter from evans to td re wilde drawing. kolski, gan, - , undated. contents * "sunrise at provincetown" ( ) * "steam under bridge" ( ) * "after the storm" (undated) (lithographs) kubitz, estelle: cartoon drawing of td and estelle kubitz. lubbers, adrian: drawings, . contents * "brooklyn bridge" ( ) * "south ferry" ( ) * "times square from times building" ( ) miller, d.: marguerite tjader harris. reich, a.: prints, , undated. contents * "amberg, martinskirche u. schiffersteg" ( ) * "auf der landstrasse" (n.d.) * "aus dem oberpfälzer jura" ( ) * "aus neustadt a./waldnaab" ( ) * "die ruine" ( ) * "schloss prunn im altmühltal" ( ) rivera, diego: details of murals, . rivera, diego: mural and detail from mural, . siporin: illustration for "kismet". stengel, hans: caricature of td with women, . duddy, lynn: vera dreiser. ?, elaine: vera dreiser. drawing of a house by ?, spring . d. promotional material. description & arrangement dreiser saved advertisements, programs, and other types of promotional material for his books, political causes, activities of his friends, and items that he wanted to buy. the promotional material for dreiser's books has been filed alphabetically by publisher; other promotional material has been ordered chronologically. box folder promotional material for td's books by b. w. dodge & co., boni & liveright (later horace liveright), and cin (czechoslovakian publisher). promotional material for td's books by constable & co. promotional material for td's books by doubleday & co., ediciones hoy (spanish publisher), golden book news, g. p. putnam's, heron press. promotional material for td's books by john lane co. promotional material for td's books by limited editions club, longman's modern age, népszava könyvkereskedés (hungarian publisher).
promotional material for td's books by paul zsolnay verlag (german publisher), samuel french, world publishing co. promotional material for books of interest to or about td, - . promotional material for various products and causes of interest to td. promotional material: programs, - . promotional material: programs, - . promotional material: programs, - and undated. e. postcards. description & arrangement dreiser collected postcards during his travels in the united states, cuba, europe, turkey, and russia. most of them are unmarked, but some have annotations on the back by either theodore or helen dreiser. postcards of the united states are filed by state, and the others are filed by country of origin, with one exception. box contains the postcards that dreiser collected on his round trip from new york to indiana, the experiences from which were the basis of his book a hoosier holiday. he stored these postcards together as a group, as they remain in this collection. box folder postcards from "hoosier holiday" trip, arizona, new mexico, texas, georgia, florida. postcards from california, oregon, washington, yellowstone national park, montana, new jersey, pennsylvania, kentucky, maryland, virginia, west virginia, illinois, minnesota, north dakota, new york, miscellaneous united states, france, england. postcards from austria, czechoslovakia, denmark, scandinavia, germany, monaco, monte carlo, russia, switzerland. postcards from belgium, cuba, italy, the netherlands, turkey. f. miscellaneous. description various small personal items belonging to theodore and helen dreiser are stored here, including their passports, flowers from dreiser's memorial service, and the newspaper clipping announcing helen's first marriage to frank richardson. the memorabilia are arranged chronologically, with theodore's first, followed by helen's. in addition, there is a - / lp recording of an interview with dreiser. box folder td memorabilia: td's passport, may .
td memorabilia: souvenirs from trip to russia, - . td memorabilia: framed photograph of charles fort. td memorabilia: desk diary sent to td by john h. mackey, . td memorabilia: miscellaneous papers. td memorabilia: miscellaneous cards, including td-kirah markham "at home" card. td memorabilia: td signatures. helen dreiser memorabilia: newspaper account of double wedding of hazel patges (helen's sister) to david pettie and of helen patges to francis richardson; memorial booklet from funeral of hazel pettie, ?, . helen dreiser memorabilia: proposal to paint ida patges's (helen's mother's) house. helen dreiser memorabilia: promotional literature ("theodore dreiser: america's foremost novelist") given to helen by td on the day they met, sept. helen dreiser memorabilia: helen richardson's passport, june . helen dreiser memorabilia: bird feather from "hopsie," a one-legged bird. helen dreiser memorabilia: roses from the scarf covering td's casket, roses sent to helen on another occasion, jan. . helen dreiser memorabilia: helen's metropolitan museum of art (new york) lifetime membership certificate and card. helen dreiser memorabilia: program and tickets for première of a place in the sun, aug. . helen dreiser memorabilia: cards re flowers sent to memorial service for helen dreiser, september. interview with td, feb. . xviii. financial records. a. authors royalties/authors holding company. description this box contains statements of expenses for this company from october through october . there is also an account book covering the period june -december . box folder authors royalties/authors holding company statements, oct. - oct. . - authors royalties co., inc.: account book, june - dec. b. book sales statistics and reprint rights. description housed here are sales statistics for all of dreiser's books from to and sales statistics for his books in the united states from to . also filed here are miscellaneous notes about reprint rights.
box folder sales statistics on td's books, - . - sales statistics on td's books in the united states, - june. note see box , folder , for sales statistics for / / . reprint rights for td's writings, and undated. c. receipts. description & arrangement bills sent to and receipts received by dreiser are filed alphabetically in this box. box folder receipts. - d. taxes. description this box contains various state and federal tax forms for theodore dreiser for through , as well as , and for helen dreiser for through . bills, receipts, and lists of expenses and income accompany the forms for through . box folder td: u.s. individual income tax returns, - , . td: new york state income tax returns, - . authors royalties co., inc.: corporation income tax returns, - . brief for appellant: people of the state of new york, on relation of elmer l. rice v. mark graves et al. as tax commissioners (new york), court of appeals, . td and helen dreiser: u.s. and california individual income tax returns for ; u.s. estimated tax return for; statement of income and expenses, - . receipts, bills, and royalty statements used in preparing tax returns, . td (estate) and helen dreiser: u.s. and california individual income tax returns for ; u.s. partnership return; california fiduciary return; estimated income tax forms; statements of income, . receipts, bills, and royalty statements used in preparing tax returns, . - td (estate) and helen dreiser: u.s. and california individual, partnership, and fiduciary income tax returns; statements of income, . receipts, bills, and royalty statements used in preparing tax returns, . - td (estate) and helen dreiser: income and expenses, . e. canceled checks. description the checks in this box were written by dreiser during - and - . box folder td canceled checks, . td canceled checks, . - td canceled checks, . td canceled checks, . xix. clippings.
description & arrangement dreiser and helen saved clippings themselves but also subscribed to clipping services and received clippings from friends and associates. the largest group of these in the dreiser papers has been organized into categories and microfilmed. the clippings in the four boxes in this series duplicate some of those in the larger microfilmed collection. two of the boxes contain miscellaneous clippings from to that mention some aspect of dreiser's life or work. another box contains clippings of reviews of dreiser's books or books about dreiser, arranged chronologically. included in this box are reviews of borden deal's book the tobacco men, which was based on dreiser's notes for his screenplay, "revolt or tobacco." the final box contains clippings of reviews of motion pictures based on dreiser's works: the prince who was a thief, a place in the sun, and carrie. box folder clippings about td, - . - clippings about td, - . - clippings: reviews of sister carrie, jennie gerhardt, the financier, a traveler at forty, the titan, the "genius," the hand of the potter. - clippings: reviews of the color of a great city, newspaper days (a book about myself), an american tragedy, moods. - clippings: reviews of a gallery of women, tragic america, dawn, america is worth saving, best short stories of theodore dreiser. - clippings: reviews of the bulwark. - clippings: reviews of theodore dreiser: apostle of nature, by robert h. elias, and the letters of theodore dreiser, edited by robert h. elias. - clippings: reviews of theodore dreiser by f. o. matthiessen, and my life with dreiser by helen dreiser. - clippings: reviews of dreiser by w. a. swanberg, and letters to louise by louise campbell. - reviews of the tobacco men by borden deal, which was based on td's notes for his screenplay "revolt or tobacco", . - reviews or articles on the prince who was a thief, . - reviews or articles on carrie, . - reviews or articles on a place in the sun, .
- - return to top » xx. works by others. series description beginning during his career as a magazine editor and continuing throughout his lifetime, dreiser was a willing and helpful critic to writers who asked his advice about their work. this series consists of (1) manuscripts, typescripts, printer's proofs, and printed versions of writings that these aspiring writers, as well as dreiser's friends and associates, sent him during his lifetime and (2) writings about dreiser that the dreiser collection has received since his papers were deposited here. these writings are filed alphabetically, and researchers should check appendix g for specific authors and titles. box folder a - b. - c - d. - e - go. - gr - har. - harvey dudley, dorothy: galleys and book jacket for forgotten frontiers: dreiser and the land of the free, . haz - hu. - i - mcd. - mar - mo. - n - p. - powys, john cowper: bound page proofs for wolf solent, . r - s. - t - z and untitled. - cassette tape of lecture on td by fred c. harrison, and letter re lecture from harrison to myrtle butcher, nov. . videotape of "murder on big moose?" and note from trina carman, sept. . return to top » xxi. oversize. description & arrangement the first box in this series contains oversize periodical publications, arranged chronologically. some were owned by dreiser; some contain works by him. the second box includes oversize items from several different series in the theodore dreiser papers and is arranged in series order. researchers should consult the container list for specific holdings. box folder russian magazine on the building of the moscow metro, . ussr in construction, nos. - , . l'illustration, dec. . "the tithe of the lord" : printed version in esquire, july. "the story of harry bridges" : printed version in friday, oct. . brandt & brandt correspondence, ? dec. butcher, myrtle patges correspondence: christmas card from td, helen richardson, and ida patges, .
gredler correspondence: christmas card to td, undated. heinl, robert d. correspondence, : galleys for "bill," by paul dresser, . masters, edgar lee correspondence: galleys for "masters—on the mason county hills: butterfly hid in the room". paul zsolnay correspondence: foreign accounts, dec. map of automobile routes used by td on "hoosier holiday" trip to indiana, . issues of ottobre containing excerpts from  tragic america, . "concerning dives and lazarus": broadside, . "editor and publisher": broadside, . "humanitarianism in the scottsboro case": printed version in contempo, . "the pushcart man": printed version in new york call magazine, march . "the standard oil works at bayonne": printed version in new york call magazine, march . "toilers of the tenements": printed version in new york call magazine, aug. . "women can take it": reprint of "women are the realists" in new york journal-american, saturday home magazine, . "butcher rogaum's door" : printed version in reedy's mirror, dec. . "solution" : printed version in women's home companion, nov. map of td's property, mt. kisco, n.y. souvenir map of big moose lake, n.y. randolph bourne award, presented to td by american writers congress, june . sales statistics on td's books, march . lyon, harris merton: "the chorus girl". return to top » xxii.  clippings (originals for microfilm). description this series comprises clippings that theodore and helen dreiser collected, as well as those sent to them by their friends and by various clipping services that the dreisers used. these clippings are very fragile; some folders of clippings have disappeared, and many clippings are unreadable in their current condition. the entire clipping collection was microfilmed, and the microfilm is available to readers. box folder biographical: miscellaneous personal items. - biographical: newspaper photographs; caricatures; td trip to europe, - ; td trip to europe, - ; td trip to russia, - ; td tour of u.s., ; coal mine strikes, - , . 
- biographical: death notices, ; helen dreiser activities, - ; early periodical stories; interviews with td. - biographical: miscellaneous opinions; forewords, introductions; poems. literary criticism: in newspapers and periodicals; reviews and notices of books on td by burton rascoe, vrest orton, dorothy dudley, robert elias, f. o. matthiessen, helen dreiser. - literary criticism: general literary comment. - literary criticism: general literary comment (cont.). - literary criticism: poems; sister carrie; "the mighty burke;" jennie gerhardt; "the men in the dark". - literary criticism: the financier; a traveler at forty; "an episode;" "the first voyage over;" "an uncommercial traveler in london;" the girl in the coffin; "paris;" "impressions of the old world". - literary criticism: the titan; the "genius". - literary criticism: the "genius" (cont.); the blue sphere; in the dark; laughing gas; plays of the natural and the supernatural; the rag pickers; "epic of desire;" the light in the window; "the lost phoebe;" the bulwark. - literary criticism: the bulwark (cont.); a hoosier holiday. - literary criticism: "life, art and america;" "married;" "change;" free and other stories; "the right to kill;" "the country doctor;" the hand of the potter; twelve men; "the pushcart man;" "love;" "ashtoreth;" hey rub-a-dub-dub; "more democracy or less;" a book about myself; "indiana;" the color of a great city. - literary criticism: an american tragedy. - literary criticism: an american tragedy (cont.); chains; "mildred my mildred;" moods; a gallery of women; dreiser looks at russia; "this madness;" "epitaph;" dawn; newspaper days; tragic america. - literary criticism: the stoic; "winterton;" moods; the edwards case; the living thoughts of thoreau; america is worth saving; world publishing co. reprints; best short stories of theodore dreiser; "st. columba and the river;" "the prince of thieves." items of special interest to td: source material.
- items of special interest to td: source material (cont.). - items of special interest to td: "on the banks of the wabash;" john cowper powys lecture on td; h. l. mencken; edgar lee masters; windy mcpherson's son, by sherwood anderson;  contemporary portraits by frank harris;  american literature of the present by herman g. scheffauer;  my gal sal. foreign language and influence: foreign influence; british. - foreign language and influence: british (cont.); czechoslovakian; danish; dutch; french; german. - foreign language and influence: philippine; italian; mexican; russian; spanish; swedish; yiddish. sheri scott folder. - return to top » appendices. appendix a: location list of essays by theodore dreiser title (folders) "an address to caliban" ( - ) "ah! robert taylor" ( ) "all life is sacred. oh yes" ( ) "america" ( - ) "america: a chain of phylacteries" ( ) "america and the artist" ( ) "america—and war" ( ) "american democracy against fascism" ( ) "american restlessness" ( ) "american tragedies" ( ) "american tragedies" [book review] ( ) "america's foremost author protests against suppression of great books and art by self-constituted moral censors" ( ) "america's only genius—boosting" ( ) "and the greatest of these" ( ) "appearance and reality" ( ) "arbeitslose in new york" ( ) "are the masses worth saving" ( ) "armenia today" ( ) "the artistic temperament" ( ) "as if in old toledo" ( ) "ashtoreth" (see box , folders - ) "baa! baa! black sheep" series: "johnny" ( - ) "baa! baa! black sheep" series: "otie" ( - ) "baa! baa! black sheep" series: "bill brown" [by hazel godwin] ( - ) "baa! baa! black sheep" series: "ethelda" ( - ) "baa! baa! black sheep" series: "clarence" ( - ) "baa! baa! black sheep" series: "harrison barr" ( - ) "baa! baa! black sheep" series: "arthur baker" [not used] ( ) "baa! baa! black sheep" series: "artie and jean" [not used] ( ) "baa! baa! black sheep" series: "christine marsten" [not used] ( ) "baa! baa! 
black sheep" series: "george" [not used] ( ) "baa! baa! black sheep" series: "jimmy and the pituitary gland" [by marcia lee masters?; not used] ( ) "baa! baa! black sheep" series: "louisa" [not used] ( ) "baa! baa! black sheep" series: "the meanest man" [by marcia lee masters; not used] ( ) "baa! baa! black sheep" series: "orville signs the checks" [not used] ( - ) "baa! baa! black sheep" series: "our way of life" [not used] ( ) "baa! baa! black sheep" series: "this is ida" [not used] ( ) "baa! baa! black sheep" series: "uncle jeffry" [not used] ( ) "the balance for right" ( ) "the beauty of the tree" ( ) "berlin" ( ) "the best motion picture interview ever written" (see "mack sennett") [comment on] books in brief ( ) "the bread line" (see box , folders - ; box , folder ; box , folder ) "brown fell dead" ( ) "california committee against initiative proposition no. " ( ) "a call for a true relationship" ( ) "challenge to the creative man" ( - ) "change" (see box , folders - ) "chaos" ( - ) "charles fort" ( ) "chauncey m. depew" ( - ) "a certain oil refinery" (see "the standard oil works at bayonne") [chicago] ( ) "chile as a prey to american imperialism" ( ) [china] ( ) "christmas in the tenements" (see box , folders - ; box , folder ; box , folder ) [the church and wealth in america] ( ) "citizens of moscow" ( ) [see also box , folders - , ] "civilization where? what?" 
( - ) "the cliff dwellers" ( - ) "cold spring harbor" ( ) "the color of to-day" ( ) [see also "sonntag—a record," box , folders - ] "come all ye who are weary and heavy laden" ( ) "comment on experimental cinema" ( ) "commercial exploitation in america" ( - ) [communist party] ( ) "concerning dives and lazarus" (see box , folder ) "concerning our helping england again" ( ) "concerning the elemental" ( ) "concerning the joy of living and doing" ( ) "concerning religious charities" ( ) "a confession of faith" ( ) "the control of sex" ( ) "a conversation" [between td and john dos passos] ( ) [comment on] "co-op," by upton sinclair ( ) "the country doctor" ( ) [see also box , folders - ] "the cradle of tears" (see box , folder ; box , folder ; box , folder ) "credo" ( ) [review of] crime and punishment, by f. dostoievsky ( ) "crime and punishment here" ( ) "a cripple whose energy gives inspiration" (see "the noank boy") "the crowding of the cities" ( - ) "curious shifts of the poor" (see "the old captain") "daily news ears batted down by dreiser" ( ) "the dawn is in the east" ( - ) "the day of surfeit" ( ) "the democracy of the funny bone" ( ) "the descent of the horse" ( ) "a doer of the word" ( ) "down hill and up: part i—down" ( - ) "down hill and up: part ii—up" ( - ) "the dream" (see box , folder ) "dreiser defends norris on power" (see "reply to mr. paul s. 
clapp") "dreiser describes spain's tense air" ( ) "dreiser discusses dewey plan" ( ) "dreiser finds morale of barcelonians high" ( ) "dreiser on scottsboro" (see "public opinion and the negro") "dreiser sees no progress" ( ) "earl browder—july , " ( ) "earl browder—terre haute" ( ) [the early adventures of "sister carrie"] ( ) "editor and publisher" (see box , folder ) "editorial conference" ( ) "edmund clarence stedman at home" ( ) "education and civilization" ( ) "electricity in the household" ( ) [emergency unemployment relief committee] ( ) "the epic sinclair" ( - ) "epic technologists must plan" ( ) "the factory" ( - ) "fall river" ( ) "fifty million frenchmen" ( ) "flies and locusts" ( ) "the flight of pigeons" (see box , folder ; box , folder ; box , folder ) "fools of love" ( ) "the fools of love and the fools of success" ( - ) "'free the class war prisoners in boss jails'—dreiser" ( ) "freedom for the honest writer" ( - ) "fruit growing in america" ( - ) [review of] gandhi: the magic man ( ) "a garbled report" ( ) [the genesis of the peach crop] ( ) [george ade] ( ) [german temperament] ( ) "the god forgotten" ( - ) "good and evil" ( ) "the gordian knot" ( - ) "the great american novel" ( ) [comment on] the great hunger, by johan bojer ( ) "great problems of organization. iii. the chicago packing industry" ( ) "greenwich village" ( ) "greetings to the canadian workers in their struggle for freedom" ( ) "the harp" ( ) "the haunts of bayard taylor" ( ) "helen" ( ) "henry l.
mencken and myself" ( ) "hey, rub-a-dub-dub" ( ) [see also box , folders - ] "heywood broun" ( ) "the hidden god" ( ) "hitler, fascism and the jews" ( ) [hitler's invasion of russia, ] ( ) "hollywood: its morals and manners" [parts - ] ( - ) "hollywood now" ( - ) "the holy roman church" ( ) "hoover and the red cross: russia - " ( ) "how russia handles the sex question" ( ) "how the great corporations rule the united states" ( ) "humanitarianism in the scottsboro case" (see box , folder ) "hungary and the hungarians" ( ) "i am grateful to soviet russia" ( ) "i find the real american tragedy" ( - ) [i find the real american tragedy] [testimony of robert allan edwards on cross-examination from trial] ( - ) "i hope the war will blow our minds clear of the miasma of puritanism" (see "what the war should do for american literature") "i remember! i remember!" series: contributions by td, louise campbell, marcia masters, mary donovan, dagmar deering, lulla adler, and yvette szekely ( - ) "ida hauchawout" ( - ) [see also box , folders - ; box , folders - ] "if man is free, so is all matter" ( ) "illinois" ( - ) "in mizzouri" ( ) "incentive—a problem essay" ( ) "indiana" ( - ) "intellectual unemployment" ( ) "interdependence" ( ) "interview between theodore dreiser and harry bridges" ( - ) [see also box , folder ] "an interview with ty cobb" ( - ) "the irish section foreman who taught me how to live" ( ) "is american freedom of the press to end?" ( ) "is fascism coming to america?" ( ) "is there a future for american letters?" ( - ) "it is official lawlessness in america that makes government regulation or aid in any quarter wholly futile" ( ) "it is parallels that are deadly" ( - ) [see also "the coward" in td writings: short stories] "j. q. a. 
ward" ( - ) "john reed club answer" ( ) "judge jones, the harlan miners and myself" ( ) [comment on] judgment day, by elmer rice ( ) "just how our corporations work and rule" ( ) "keep moving [or starve]" ( - ) [kentucky coal miners and situation in harlan county] ( ) "kismet" ( - ) "the laziest man. a case of real idleness" ( ) "a lesson from the aquarium" ( - ) "lessons i learned from an old man" ( ) "let the dead bury the dead" ( ) "let us look honestly at the cause of sex crimes" ( ) "a letter about stephen crane" ( ) "a letter from rex beach & the authors' league of america to t. dreiser and an answer" ( ) [letter to editor re td's reply of sept. to writers war board, oct. ] ( ) "letter to governor young" [re tom mooney] ( ) [letter to new york world telegram in reply to td re american federation of labor] ( ) [letter to the president and congress of the united states re the communist party] ( ) "letters and opinions on the land of the soviets" ( ) "libel à la mode" ( ) "'liberty': what price?" ( ) "life after death" ( ) "life, art and america" (see box , folder ) "life at sixty-seven" ( - ) "literary immorality" ( ) "literature and journalism" ( ) "the log of an ocean pilot" ( ) [see also box , folder ; box , folder ; box , folder ] "the loneliness of the city" ( ) "the love affairs of little italy" (see box , folder ; box , folder ; box , folder ) "loyalists tell dreiser they will not surrender" ( ) "mack sennett" ( - ) "the making of small arms" ( ) "the making of stained-glass windows" ( ) "man and romance" ( ) "the man on the bench" (see box , f. - ; box , f. ; box , f.
) "the man on the sidewalk" ( - ) "the man who bakes your bread" ( - ) "the man who wanted to be a poet" ( ) "manhattan beach" ( ) "the mansions of the father" ( - ) [marden, orison swett, and success magazine] ( ) "mark the double twain" ( - ) "mark twain—three contacts" ( - ) [massie crime in hawaii] ( ) "mathewson" ( - ) "the matter of labor's share" ( ) "meaning of the ussr in the world today" ( - ) "the men in the dark" ( ) [see also box , folders - ; box , folder ; box , folder ] "the men in the snow" (see box , folder ; box , folder ; box , folder ) "the men in the storm" (see box , folder ; box , folder ) "the mighty burke" ( ) "miss fielding" ( ) "a modern advance in the novel" ( ) "mooney and america" ( ) [essay on tom mooney] ( ) "more democracy or less? an inquiry" (see box , folders - ) "the most successful ballplayer of them all" (see "an interview with ty cobb") "my city" ( - ) [see also box ] "my creator" ( - ) "my favorite fiction character" ( ) "myself and the movies" ( - ) "the myth of individuality" ( ) "the new and the old" ( ) "the new day" ( ) "the new humanism" ( ) [ new masses ] ( ) [new york] ( ) "new york" ( ) "nigger jeff" ( ) "nikolai lenin" ( ) "no advice to young writers" ( ) "no cars running" ( ) [review of] no for an answer, by marc blitzstein ( ) "the noank boy" ( ) "the noise of the strenuous" ( ) [review of] of human bondage ( ) "the old captain" ( ) "an old spanish custom" ( ) "olive brand" ( ) [see also box , folders - ] "on doctors" and "on physicians" ( ) "on—myself" ( ) "one day" ( ) [review of] one man, by robert steele ( ) "our amazing illusioned press" (see "what is the matter with the american newspaper") "our american press and our political prisoners" ( ) "our creator" ( ) "our democracy: will it endure?" (see box , folders , ) "our greatest writer tells what's wrong with our newspapers" ( ) "our red slayer" (see box , folders - ; box , folder ; box , folder ) "out of my newspaper days. i.
chicago" ( ) [see also box , folder ] "out of my newspaper days. ii. st. louis" ( ) [see also box , folders - ] "out of my newspaper days. iii. 'red' galvin" ( ) [see also box , folders - ] "out of my newspaper days. iv. the bandit" ( ) [see also box , folder ] "out of my newspaper days. v. i quit the game" ( ) [see also box , folders - ] "an overcrowded entryway" (see "hollywood: its morals and manners," box , folder ) "overland [journey]" ( - ) "paris— " ( ) "policy of national committee for defense of political prisoners" ( ) "portrait of a woman" ( ) [see also "ernestine" in box , folders - ] "portrait of an artist" ( ) "the position of labor" ( ) [present revolt of the arts in america] ( ) "the problem of distribution" ( ) "the professional intellectual and his present place" ( ) "the profit-makers are thieves" ( ) "prosperity for only one percent of the people" ( ) "public opinion and the negro" ( ) "the pushcart man" (see box , folder ) [see also box , folders - ; box , folder ; box , folder ] "pushkin" ( ) "rally round the flag" ( - ) "the real sins of hollywood" ( ) "the realistic parade" ( ) "rebellious women and marriage" ( - ) "the red cross brings poverty and misery" ( ) "regina c—" ( ) [see also box , folder ; box , folders - ] "reina" ( ) [see also box , folders - ] "rella" ( ) [see also box , folders - ] "reply to mr. paul s.
clapp" ( ) "the right to revolution" ( ) "the rivers of the nameless dead" (see box , folders - ; box , folder ; box , folder ) "robison cars running" ( ) "the romance of power" ( - ) "running the railroads" ( - ) [see also "a splash of cold water on the railroads"] "rural america in wartime" ( - ) "russia: the great experiment" (see box , folder ) "the russian advance" ( ) "russian vignettes" ( ) [see also box , folders , ] "the saddest story" [review of the good soldier, by ford madox hueffer (ford)] ( ) "samuel butler" ( ) "sarah schanab" ( ) "scenes in a cartridge factory" ( ) "the scope of fiction" ( ) "a sea marsh" ( ) "the seventh commandment" ( - ) "sex crimes and morals" ( - ) "sherwood anderson" ( - ) "should capitalistic united states treat latin america imperialistically?" ( - ) "should communism be outlawed in america" ( ) "should the government compete in business with private individuals?" ( ) "should hungary have been crunched under heel?" ( - ) "the silent worker" ( ) "six o'clock" ( ) [see also box , folder ; box , folder ; box , folder ] "the six worst pictures of the year" ( ) [sombre annals], review of undertow, by henry k. marks ( ) [soviet union] ( ) "speaking of censorship" ( ) "the spider and the fly" ( ) "a splash of cold water on the railroads" ( ) [see also "running the railroads"] "stamp out want" ( ) "a stand in life" ( ) "the standard oil works at bayonne" (see box , folder ) [see also box , folder ; box , folder ; box , folder ] [a start in life] ( - ) "a statement by theodore dreiser" (see "comment on experimental cinema") [sterling, george] ( ) "the story of harry bridges" (see "interview between theodore dreiser and harry bridges") [see also box , folder ] "the story of the states: no.
iii—illinois" (see "illinois") "the strike to-day" ( ) "strikers arrested" ( ) "a suggestion for the communist party" ( ) "the superstition of my birth" ( ) "symposium on the medical profession" (see "on doctors") "take a look at our railroads" (see "running the railroads" and "a splash of cold water on the railroads") "temperaments—artistic and otherwise" ( ) "theodore dreiser and the free press" ( ) "theodore dreiser condemns war" (see "war") "theodore dreiser's interview of anna fort" ( - ) "theodore dreiser picks the six worst pictures of the year" (see "the six worst pictures ...") "they shall not die" ( ) "this florida scene" ( ) "this madness" series: "introduction" ( - ) "this madness" series: "aglaia" ( - ) "this madness" series: "elizabeth" ( - ) [see also "a daughter of the puritans," box ; box , folders - ] "this madness" series: "sidonie" ( - ) "this madness" series: "camilla" [not used] ( - ) "this madness" series: "aglaia" [printed version] ( - ) "this madness" series: "the story of elizabeth" [printed version] ( - ) "this madness" series: "the book of sidonie" [printed version] ( - ) [thompson family] ( ) "the threat of war and the youth" ( ) [time capsule, td's message for] ( ) "the tippicanoe" ( ) "the titan in england" ( ) "to be or not to be" ( ) "to those whom it should concern" ( ) "the toil of the laborer: a trilogy" [see also box , folder - ] ( ) "toilers of the tenement" (see box , folder ) [see also box , folders - ; box , folder ; box , folder ] [toilers of the tenement: untitled article similar to the one with this title] ( ) "the training of the senses" ( ) "the treasure house of natural history" ( ) "the trial of the negro communists" ( ) [tribute to gorky] ( ) [unemployment and the wpa] ( ) "unemployment in america" ( - ) "unemployment in new york" ( - ) "u[nited].s[tates].
must not be bled for imperial britain" ( ) "upton sinclair" ( ) "war" ( - ) [war: td's denunciation of, s] ( ) "war is a racket" ( ) "war is a racket" ( ) "war or no war" ( ) "the waterfront" (see box , folder ; box , folder ) "we hold these truths...," ( ) "what are america's powerful motion picture companies doing?" ( ) "what has the great war taught me?" ( ) "what i believe: living philosophies--iii" ( ) [see also "credo"] "what is americanism?" ( ) "what is democracy?" ( ) [see also box , folder ; box , folders , ] "what is the matter with the american newspaper" ( - ) [see also "our greatest writer tells what's wrong with our newspapers"] "what my mother meant to me" ( ) "what the war should do for american literature" ( ) "what to do" ( ) "when the sails are furled: sailor's snug harbor" ( ) [see also box , folder ; box , folder ] "when will the next war start?" ( ) "whence the song" (see box , folder ; box , folder ) "where is labor's share?" ( ) "where is leadership for the workingman?" ( ) "white magic" ( - ) "whom god hath joined together" ( - ) "why help russia?" ( ) "why i believe the daily worker should live?" ( ) "why i like the russian people" ( ) "why i propose to vote for the communist ticket" ( ) "why physical morality?" ( ) "will fascism come to america?" (see "is fascism coming to america?") "winterton" ( ) "women are the realists" ( - ) [see box , folder for reprint] "woods hole and the marine biological laboratory" ( ) "a word concerning birth control" ( ) "work of mrs. kenyon cox" ( ) "work of vengeance" ( ) "writers declare: 'we have a war to win'" ( ) "writers take sides" ( ) "the yield of the rivers" ( ) "you, the phantom" ( - ) untitled essays ( - ) appendix b: location list of short stories by theodore dreiser title (folders) "ambling sam" ( ) "art for art's sake" ( ) "as the hart panteth after the roe" ( ) "the bargainers—mrs.
p.a.s romance" ( ) "beauty" ( ) "bleeding hearts" ( ) "the building of new york's first apartment hotel" ( ) ["the door of the] butcher rogaum" ( ) [see also box , folder ] "chains" [story plus proposed table of contents for book of short stories using this title] ( ) "choosing" ( ) [see also newspaper days : ms, chaps. xxv-xxx] ["the power of] convention" ( - ) "the coward" ( ) [see also "it is parallels that are deadly" in td writings: essays] "the credo (i believe)" ( ) "the crime" ( ) "the cruise of the idlewild" ( - ) "cut out" ( ) "de lusco" ( - ) "the empty nest" ( ) "enchantment" ( ) "the end of the day" ( ) "the ex governor" ( ) "the failure" ( ) "the failure—the other one" ( ) "the fairy" ( ) "father" ( ) "the father" ( ) "the favor" ( ) "fine feathers" ( ) "fine feathers" ( ) "fine furniture" ( - ) "fulfillment" ( - ) "the fur merchant" ( ) "the gentler sex" ( ) "a girl" ( ) "gold teeth" ( - ) "the gulls" ( ) "the hand" ( - ) "the happy marriage" ( ) "the hedonist" ( ) "the heir" ( ) "her boy" ( - ) "her problem" ( ) "the hermit" ( ) "his sister" ( ) "the homely woman" ( ) "how she won—the girl who woke up" ( ) "in memory" ( ) "irrepressible edward" ( ) "is life worth living" ( ) "it shall not be" ( ) "jealousy" ( ) [see also "the shadow"] "khat" ( ) "the king of shadows" ( ) "kismet" ( ) "the last sip" ( ) "let the dead bury the dead" ( ) "the lost father" ( ) "the lost phoebe" ( - ) "the man who wanted to be a poet" ( ) "marriage—for one" ( ) "the mercy of god" ( - ) "mrs. george sweeny" ( ) "mr. grillsnider" ( - ) "mobgallia" ( ) "nemesis" ( ) ["the lynching of] nigger jeff" ( - ) "no sale" ( ) "the old neighborhood" ( - ) "old rogaum and his theresa" (see ["the door of the] butcher rogaum") [olga and her "true" love] ( ) "oolah, boolah, boolah!" 
( ) "paternity" ( ) "phantom gold" ( ) "the prince who was a thief" ( - ) "pure chemistry" ( ) "the reigning success" ( ) "revenge" ( - ) "the reward" ( ) "the rivals" ( ) "the road to happiness" ( ) "the sailor who would not sail" ( - ) "sanctuary" ( ) "the second choice" ( - ) "the second motive" ( ) "a sentimental journey" ( ) "the shadow" ( ) [see also "jealousy"] "shadows" ( ) "so nice of you" ( ) "solution" ( - ) [see also box , folder , and "solution" in td writings: screenplays and radio scripts ] "a story of stories" ( - ) "the strangers" ( ) "surcease" ( ) "sympathy in grey" ( ) "tabloid tragedy" ( ) "that which i feared" ( ) "three hundred dollars" ( ) "the tithe of the lord" ( - ) [see also box , folder ] "the total stranger" ( - ) "transubstantiation" ( ) "two hundred dollars" ( ) "typhoon" ( - ) "the virtues of abner nail" ( ) "the voice from heaven" ( ) "the wages of sin" (see "typhoon") "what's right" ( ) "when the old century was new" ( - ) "willard and claire" ( ) "the writer" ( ) untitled story manuscripts ( - ) untitled story of an unfaithful wife ( ) untitled story outline ( ) untitled story outline [related to "revenge"?]
( ) untitled story typescript ( ) appendix c: location list of poems by theodore dreiser title (folders) "an address to the sun" ( ) "all" ( ) "all in all" ( ) "all thought—all sorrow" ( ) "allegory" ( ) "ambition" ( ) "amid the ruins of my dreams" ( ) "and continueth not" ( ) "arizona" ( ) "as a lone horseman, waiting" ( ) "as with a finger in water" ( ) "the ascent" ( ) "asia" ( ) "the aspirant" ( ) "avatar" ( ) "the 'bad' house" ( ) "the balance" ( ) "bayonne" ( ) "the beauty" ( ) "before the accusing faces of billions" ( ) "bells" ( ) "beyond the tracks" ( ) "the blurred of vision" ( ) "boom—boom—boom" ( ) "borealis" ( ) "brahma" ( ) "the brief moment" ( ) "the broken ship" ( ) "the brook" ( ) "brooklyn bridge" ( ) "by the waterside" ( ) "cattails—november" ( ) "the cattle train" ( ) "chief strong bow speaks" ( ) "the city" ( ) "city's accidents" ( ) "the city's night" ( ) "the coal shute" ( ) "commune" ( ) "conclusion" ( ) "confession" ["i!"] ( ) "confession" ["love has done this for me:"] ( ) "contest" ( ) "crowds" ( ) "crows" ( ) "the dancers" ( ) "the dark hazard" ( ) "darkling desires" ( ) "dawn" ( ) "the deathless princess" (see "i am repaid") "decadence" ( ) "defeat" ( ) "demogorgon" ( ) "demons" ( ) "desire—ecstasy" ( ) "die sensucht" ( ) "dives advises" ( ) "divine fire" ( ) "dreams" ["always within the heart,"] ( ) "dreams" ["transitory dreams"] ( ) "driven" ( ) "elegy" ( ) "epitaph" ( ) "epitaph" [scored for music by walter grondstay] ( ) "equation" (see "exchange") "escape" ( ) "etching" (see "pastel" ["the hills flow like waves"]) "eunuch" ( ) "the evanescent moment" (see "the brief moment") "evening—mountains" ( ) "evensong" ( ) "everything" ( ) "the evil treasure" ( ) "exchange" ( ) "the excuse" ["it has been my lacks"] ( ) "the excuse" ["those things"] ( ) "eyes" ( ) "the factory" ( ) "factory walls" ( ) "the failure" ["always a man will take color from his work"] ( ) "the failure" ["the unconscious that drove me"] ( ) "fata morgana" ( ) "the
favorite" ( ) "the fire of hell" ( ) "five moods in minor key" [includes "tribute," "the loafer," "improvisation," "machine," and "escape"] ( ) "five poems by theodore dreiser" [includes "tall towers," "the poet," "in a country graveyard," "the hidden god," and "the new day"] ( ) "flower and rain" ( ) "the fomentor" ( ) "the fool" ( ) "for a moment the wind died" ( ) "for a moment the wind died" [scored for music by lillian rosedale goodman] ( ) "for i have made me a garden" ( ) "the forest" ( ) "foreword" ( ) "four poems" [includes "wood note," "for a moment the wind died," "they shall fall as stripped garments," and "ye ages, ye tribes!"] ( ) " th street" ( ) "freedom" ( ) "frustrated desire" ( ) "fugue" ( ) "the funeral" ( ) "the furred and feathery" ( ) "the galley slave" ( ) "the garden" ( ) "geddo street" ( ) "the ghetto" ( ) "the gift" ( ) "the gifted company" ( ) "the gladiator" ( ) "gold" ( ) "good fortune" ( ) "the granted dream" ( ) "grant's tomb" ( ) "the great face" ( ) "the great lack" ( ) "the great silence" ( ) "the great voice" ( ) "the greater need" ( ) "harbor—evening" ( ) "heaven" ( ) "heights" ( ) "hell gate" ( ) "hey rube!"
( ) "the hidden poet" ( ) "his mother" ( ) "home" ( ) "the home maker" ( ) "honest katie" ( ) "the house of dreams" ( ) "the hudson" ( ) "the hudson—morning" ( ) "the hudson—west shore—evening" ( ) "the husbandman" ( ) "i am repaid" ( ) "if beauty would but dwell with me" ( ) "the image of our dreams" ( ) "improvisation" ( ) "in a negro graveyard" ( ) "in rebuttal" ( ) [in the park] ( ) "in this park" ( ) "in the seaside auditorium" ( ) "individuality" ( ) "innocence" ( ) "inquiry" ( ) "interrogation" ( ) "intruders" ( ) "it" ( ) ["it is with these living"] ( ) "ita est" ( ) "job and you" ( ) "kansas and nebraska" ( ) "karma" ( ) "the kiln" ( ) "laborer—mexico" ( ) "the lack" ( ) "the last voice" ( ) "let me know more of thee" ( ) "liberty" ( ) "life"— versions: ( ) ["ever a greater illusion"] and ( ) ["it is so beautiful"], scored for music by lillian rosedale goodman ( ) "light and shadow" ( ) "lillies and roses" ( ) "links" ( ) "little dreams, little wishes" ( ) "the little flower of love and wonder" ( ) "the little home" ( ) "little keys" ( ) "little moonlight things of song" ( ) "the little shops" ( ) "the loafers" ( ) "love" ["i am but a spoonful of honey"] ( ) "love" ["i stood in the rain"] ( ) "love" ["like a cactus in a desert"] ( ) "the love-death" ( ) "love song" ["to me"] ( ) "love song" ["to me"] [scored for music by hermann erdlen; german libretto for baritone and string quartet by lina goldschmidt] ( ) "love song" ["you have entered my dreams!"] ( ) "the lovers" ["today!"] ( ) "the lovers" ["two resplendent flames"] ( ) "machine" ( ) "machines" (see "summer") "man" ( ) "the march" ( ) "marriage" ( ) "marsh bubbles" ( ) "the martyr" ( - ) "the masque" ( ) "'material' possessions" ( ) "the meadows" ( ) "a mean street" ( ) "melody" ( ) "the merging" ( ) "messenger" ( ) "the miracle" ( ) "mirage" ( ) "miserere" ( ) "mood music" ( ) "moon moth" ( ) "morning—east river" ( ) "morning in the woods" ( ) "morning—north river ." ( ) "morning—north river ."
( ) "morning—the whistle" ( ) "mortuarium" ( ) "mothers" ( ) "the mourner" ( ) "the muffled oar" ( - ) "the multitude" ( ) "the mysterious master" ( ) "mystery" ( ) "the myth of possessions" ( - ) "nature" ( ) "the nestlings" ( ) "the new day" ( ) "new faces for old" ( ) "the new world" ( ) "newark bay" ( ) "nocturne—north river" ( ) "not forgotten" ( ) "nothing" ( ) "obliteration" ( ) "october" ( ) "oh urgent seeking soul" ( ) "the old south" ( ) "the one and only" (see "die sensucht") "orchestra" ( ) "the orient" ( ) "out of? in?" ( ) "outcast" ( ) "passion" ( ) "pastel" ["a grey day—"] ( ) "pastel" ["the hills flow like waves"] ( ) "pastel: twilight" ( ) "the perfect room" ( ) "the pervert" ( ) "phantasm" ( ) "phantasmagoria" ( ) "pierrot" ( ) "pigeons" ( ) "polarity" ( ) "the possible" ( ) "the prisoner" ( ) "the process" ( ) "proclamation" ( ) "the prophet" ( ) "proteus" ( ) [see also "the fomentor"] "the psychic wound" ( ) "question" ( ) "the question" ["more life for more people—"] ( ) "the question" ["no gratitude?"] ( ) "the questioner" ( ) "rain" ( ) "rain—november" ( ) "' reality, '" ( ) "recent poems of life and labour" [includes "the factory," "the stream," and "geddo street"] ( ) "the reformer speaks" ( ) "regret" ( ) "religion" ( ) "requiem" ( ) "requiem" [scored for music by vera dreiser] ( ) "resignation" ( ) "revenge" ( ) "revery" ( ) "revolt" ( ) "reward" ( ) "the riddle" ( ) "the river dirge" ( ) "river scene" ( ) "the sailor" ( ) "st. francis to his god" ( ) "st. george's ferry" ( ) "st. john" ( ) "st. 
lukes" ( ) "sanctuary" ( ) "the savage" ( ) "schimpfen sie" ( ) "search song" ( ) "selah" ( ) "the self-liberator" ( ) "seraphim" ( ) "shadow" ( ) "the shadow" ( ) "shimtu" ( ) "siderial" ( ) "the singer" ( ) "something is thinking" ( ) "song" ["blow winds of summer, blow"] ( ) "song" ["old woman"] ( ) "song—rain" ( ) "the sons of prometheus" ( ) "soo-ey" ( ) "the sower" ( ) "the sowing" ( ) "static" ( ) "the storm" ( ) "the stranger" ( ) "the stylist" ( ) "summer" ( ) "a summer evening" ( ) "sun and flowers and rats" ( ) ["sunday again the city will sleep late"] ( ) "sunset" ( ) "sunset and dawn" ( ) "supplication" ( ) "sutra" ( ) "take hands" [scored for music by carl e. gehring] ( ) "tenantless" ( ) "that accursed symbol" ( ) "they have conferred with me in solemn counsel" ( ) ["the things of death are bitter and complete"] ( ) "the thinker" ["majestic"] ( ) "the thinker" ["out of boost pegram's poolroom"] ( ) "thought" ( ) "thoughts" ( ) "through all adversity" ( ) "tigress and zebra" ( ) "time" ( ) [see also "the new world"] "the time-keeper" ( ) "times square (midnight)" ( ) "tis thus you torture me" ( ) "to a windflower" ( ) "to a wood dove" [scored for music by lillian rosedale goodman] ( ) "to make him know" ( ) "to oscar wilde" ( ) "to you" ( ) "the torrent" ( ) "the tower" ( ) "the toymaker" ( ) "the traveler" ( ) "trees" ( ) "tribute" ( ) "the triumph" ( ) "the troubadour" ( ) "tryst" ( ) "two by two ( ) "the ultimate" ( ) "the ultimate necessity" ( ) "the unterrified" (see "love" ["like a cactus in a desert"]) "us" ( ) "the victor" ( ) "the vigil" ( ) "the voyage" ( ) "walls" ( ) "the wanderer" ( ) "the watch" ( ) "the waterside" ( ) "what" ( ) "what to do" ( ) "who lurks in the shadow?" 
( ) "winter" ( ) "with whom is shadow of turning" ( ) "wood tryst" ( ) "words" ( ) "wounded by beauty" ( ) "the wraith" ( ) "you are the silence" ( ) "the young girl" ( ) "young love" ( ) "youth" ( ) appendix d: location list of plays by theodore dreiser title (folders) "the bargainers—a modern drama" ( - ) "the bell" ( ) "the best people" ( ) "the blue sphere" ( - ) ["the blue sphere"] "die blaue kugel" [scored for music by hermann erdlen; translation by lina goldschmidt and hans bodenstedt] ( - ) "the choice" ( - ) [see also "the choice" in td writings: screenplays and radio scripts.] "the dream" ( - ) "the end: a reading play in scenes" ( ) "fidelity" ( ) "the fool: a tragedy" ( ) "the girl in the coffin" ( - ) "gorm: a tragedy" ( ) "the hand of the potter" ( - ) "the herald" ( ) "in the dark" ( - ) "jeremiah i" ( ) "laughing gas" ( - ) "laughing gas" [scored for music by ivan boutnikoff] ( ) "the legacy" ( ) "the light in the window" ( - ) ["the light in the window"] "das licht im fenster" [german translation by lina goldschmidt] ( ) mildred—my mildred" ( - ) "the neer-do-well" ( ) "old rag picker" ( ) "phantasmagoria" ( ) "the spring recital" ( ) "the spring recital" (ballet-pantomime) [music by ivan boutnikoff] ( ) "town and country" ( ) "the voice" ( ) fragments and outlines ( - ) appendix e: location list of screenplays and radio scripts by theodore dreiser title (folders) memorandum re possible movie material in td's work ( ) list of movie scenarios by td or of td's works ( ) "arda cavanaugh" [screen adaptation by elizabeth coakley] ( ) [see also "cinderella the second" "big town: death weather" [radio adaptation by marian spitzer and milton merlin] ( - ) "box office" [screen adaptation by elizabeth coakley] ( - ) "chaduji" ( - ) "the choice" ( - ) [see also "the choice" in td writings: plays ] "cinderella the second" [screen adaptation by elizabeth coakley] ( - ) [see also "arda cavanaugh"] "the clod" ( ) "culhane, the solid man" ( - ) "the door of the 
trap" ( - ) "hadassah or ishtar or esther" ( ) "the hand" ( - ) "helen of troy" ( ) "home is the sailor" [outline for movie script by esther mccoy] ( - ) "lady bountiful, jr." ( - ) "the long long trail" ( - ) "the lorlei" ( ) "my gal sal" ( - ) "my gal sal" [outline for a movie script by helen dreiser] ( - ) "my gal sal" [by?] ( - ) "my gal sal" [a review by c. j. dyer] ( ) "our america" [proposal for radio series] ( - ) "the prince who was a thief" ( - ) "revolt or tobacco" [source material] ( - ) "revolt or tobacco" [synopses, outline, and summary] ( - ) "revolt or tobacco" [photographs from trip] ( ) "revolt or tobacco" [notes from trip] ( ) "revolt or tobacco" [material on super pictures, inc.] ( - ) "revolt or tobacco" ( - ) "sanctuary" [screen adaptation by helen dreiser] ( ) "solution" [outline, synopsis by elizabeth kearney, screen adaptation] ( - ) [see also "solution" in td writings: short stories ] "storm tossed" ( ) "stuck with the glue: a detective drama" ( ) "suggested script for anna sten" ( ) "suicide clinic" [screen adaptation by esther mccoy] ( - ) "the tables turned" ( ) "the tiger" ( ) "the tithe of the lord" [synopsis for a motion picture by elizabeth coakley] ( ) "the twenty wishes" ( ) "vaitua" ( - ) "women always knit" [by ladislas foodor, with comments and suggestions by td and elizabeth coakley] ( ) untitled ideas for screenplays ( - ) appendix f: manuscript and sheet music by paul dresser "after the battle" ( ) - copies "the army of half-starved men" ( ) - includes advertisement for "glory to god" inside front cover "ave maria" ( ) "a baby adrift at sea, song and chorus" ( ) "baby's tears, song and chorus" ( ) "the battery" ( ) "the boys are coming home to-day" ( ) "come tell me what's your answer, yes or no" ( ) - copies "coontown capers, two-step march (a negrosyncrasy)" ( ) - by theo. f. 
morse with characteristic verse by paul dresser "the curse of the dreamer, descriptive solo for baritone or mezzo-soprano" ( ) "the day that you grew colder, a retrospective ballad" ( ) - includes advertisement for "mary mine" "days gone by, song and chorus" ( ) "did you ever hear a nigger say 'wow'" ( ) - copies "don't forget your parents" ( ) - minor lyric changes and key change from version "don't forget your parents at home" ( ) "a dream of my boyhood's days" ( ) "every night there's a light, or, the light in the window pane" ( ) "gath'ring roses for her hair, sentimental song" (?) "glory to god, sacred song" ( ) "the green above the red" ( ) - copies, both include advertisement for "in good old new york town" on p. "he brought home another" ( ) - copies, one published by howley, haviland and co., the other by herbert h. taylor, inc. "he didn't seem glad to see me" ( ) "he fought for the cause he thought was right" ( ) "he loves me, he loves me not" ( ) "he was a soldier" ( ) "her tears drifted out with the tide" ( ) "i long to hear from you" ( ) "i send to them my love" ( ) "i was looking for my boy, she said; or decoration day" ( ) - copies "i wish that you were here tonight" ( ) "i wonder if she'll ever come back to me" ( ) "i wonder if there's someone who loves me" ( ) "if you see my sweetheart" ( ) "i'm going far away, love" ( ) "in dear old illinois" ( ) "in the sweet summer time" ( ) - copies "jim judson (from the town of hackensack)" ( ) "the judgement is at hand (paul dresser's last song)" ( ) "just to see mother's face once again" ( ) "the limit was fifty cents" ( ) "little fanny mcintyre, waltz song" ( ) "little jim" ( ) "the lone grave" ( ) "love's promise" ( ) "mary mine" ( ) - copies "mother will stand by me" ( ) "mr. volunteer; or, you don't belong to the regulars, you're just a volunteer" ( ) - includes advertisement for "the voice of the hudson" on p. "my flag! my flag!" 
( ) "my gal sal; or, they called her frivolous sal" ( ) - includes sample quartet chorus inside front cover "my sweetheart of long, long ago" ( ) "never speak again" ( ) "niggah loves his possum; or, deed, he do, do, do" ( ) "the old flame flickers, and i wonder why" ( ) "on the banks of the wabash, far away" - one copy is missing the music but has p. dresser's autograph inside back cover, signature dated jan. , ; another copy (copyright, ) is complete and includes a sample of "you mother wants you home, bo y (and she wants you mighty bad)" inside front cover; other copies (copyright, ) and another ( ) which touts silent screen star madge evans "on the shore of havana, far away (a paraphrase)": to the melody of the famous song "on the banks of the wabash" ( ) "once every year" ( ) - copies "our country, may she always be right, but our country right or wrong" ( ) "perhaps you'll regret someday" ( ) - copies "a sailor's grave by the sea" ( ) - copies "say yes, love!" ( ) - copies, one with front cover missing "show me the way, sacred song" ( ) "the songs we loved, dear tom" ( ) "a stitch in time saves nine" ( ) "the story of the winds" ( ) "sweet savannah" ( ) - copies "take a seat old lady" ( ) "there's a ship" ( ) - copies "we are coming cuba coming" ( ) "we'll fight tomorrow mother" ( ) "when i'm away from you, dear" ( ) "when mammy's by yo' side" ( ) "when zaza sits on the piazza" ( ) - words by jos. farrell and music by henry frantzen; includes advertisement for "jim judson (from the town of hackensack)" inside front cover; on p. a note by theodore dreiser (t.d.) 
states that paul dresser wrote both the music and the lyrics "white apple blossoms" ( ) "wrap me in the stars and stripes" ( ) "your god comes first, your country next, then mother dear" ( ) "your mother wants you home, boy (and she wants you mighty bad)" ( ) "you're going far away, lad; or, i'm still your mother dear" ( ) "you'se just a little nigger, still you'se mine all mine" ( ) additional material letter - from emily grant von tetzel to the editor of "the world"; includes dresser's verses "the wolves of finance", dated march , clippings of lyrics - "mother told me so" and "the letter that never came" clipping - paul dresser's obituary, february , lyric sheets - typed and handwritten - "drink to your sweethearts dear," "i hate to leave you behind" and "the judgement is at hand"; sheets have notes by theodore dreiser picture of paul dresser cards from paul dresser's funeral (also "mementos") copyright certificate for "you are my sunshine sue" made in the name of theodore dreiser, dated / / ms. - "baby mine" ms. - "the great old organ" ms. - "marching through georgia" - includes typed lyric sheet for same ms. - "the people are marching by" ms. - "would i were a child again" ms. - "you are my sunshine sue" appendix g: works by others in the theodore dreiser papers description (folders) adams, henry. "the rule of phase applied to history" [ ] ( ) american civil liberties union. "legal tactics for labor's rights" [ ] ( ) "american literature in the u.s.s.r. ( - )" ( ) andrews, john william. "georgia transport" [ ] ( ) "apostle of naturalism" [ ] ( ) "an appreciation of dreiser's dawn " [ ] ( ) aragon, louis. "when we met dreiser"; burgum, edwin berry. "dreiser and his america" [ ] ( ) auchincloss, louis. "introduction" [to sister carrie ] [ ] ( ) auerbach, joseph. "authorship and liberty" [ ] ( ) avary, myrta lockett. "success—and dreiser" [ ] ( ) bardeleben, renate von. "dreiser's english virgil" [ ] ( ) bardeleben, renate von. 
"personal, ethnic, and national identity: theodore dreiser's difficult heritage" [ ] ( ) bardeleben, renate von. "the thousand and second nights in th-century american writing" [ ] ( ) barnett, james. "speeding up the workers" [ ] ( ) becker, george j. "theodore dreiser: the realist as social critic" [ ] ( ) beerman, herman, and emma s. beerman. "a meeting of two famous benefactors of the library of the university of pennsylvania—louis adolphus duhring and theodore dreiser" [ ] ( ) bein, albert. "straight from the heart" [ ] ( ) benezet, carol. "to theodore dreiser" [poem] ( ) beverly, judith de. "the genius: an appreciation of theodore dreiser" [ ] [poem] ( ) bingham, robert w. "buffalo's mark twain" [ ] ( ) bird, carol. "dreiser on censorship" [ ] ( ) birinsky, leon, and kurt siodmek. "whitechapel" ( ) bloom, marion. [account of a nurse's experiences in world war i] ( ) book find news [issue in tribute to td, march ] ( ) book find news, january ( ) book find news [issues with ads for td's books, may and december , april ] ( ) "books of the month: floyd dell and theodore dreiser" [ ] ( ) bornstein, josef. "ein dichter besichtigt russland" [ ] ( ) bourne, randolph. "the art of theodore dreiser" [ ] ( ) bowman, heath. hoosier, chap. [ ] ( ) boyd, willilam riley. "a contrast between the whipping post of 'darkest delaware' and the convict camps of georgia" [ ] [speech] ( ) braley, berton. "three--minus one" [ ] ( ) brand, milton. [review of the outward room ] ( ) braziller, george. "how will dreiser be honored?" [ ] ( ) bulletin of the league of american writers. [announcement of a dinner honoring td, ] ( ) c.k. "to a realist" [poem; see harvey, dorothy, "to t.d."] ( ) campbell, louise. "an afternoon in a boardwalk auction shop" ( ) campbell, louise. "career" ( ) campbell, louise. "i'm seventeen to-day!" ( ) [n.b.: other writings by louise campbell are in her correspondence file] Čapek, j. b. 
"interview o theodoru dreiserovi" [ ] ( ) carringer, robert, and scott bennett. "dreiser to sandberg: three unpublished letters" ( ) Čelakovský, f. l. ohlasy písní Českých [ ] ( ) [t]chekhov, anton. a bear [ ] ( ) chekhov, anton. the cherry garden [ ] ( ) chevalier, haekon m. "the intellectual in the american community" [ ] ( ) [clark, clara l.]. "challenge" [ ] ( ) [clark, clara l.]. "my solitude" [ ] ( ) clark, clara l. [review of beyond women, by maurice samuel, ] ( ) coakley, elizabeth. [ideas for scenes for a movie, ] ( ) conrad, lawrence. "theodore dreiser" [ ] ( ) cosulich, gilbert. "mr. dreiser looks at probation" [ ] ( ) cosulich, gilbert. "recent data on female criminals" [ ] ( ) cowley, malcolm. "the slow triumph of sister carrie" [ ] ( ) cunard, nancy. "black man and white ladyship" [ ] ( ) cuthbert, clifton. "an american tragedy" [ ] ( ) dash, mike. "charles fort and a man named dreiser" [after ] ( ) "david, the story of a soul" ( - ) [davis, mrs.]. [outline and script for a movie?] ( ) de kruif, paul. "jacques loeb" [fragment, ] ( ) dietrich, john h. "personal beliefs of noted men" [ ] ( ) [dostoyevsky, fyodor]. "the idiot" [playscript by powys?] ( - ) douglas, george. "for theodore dreiser" ( ) dowell, richard w. "'on the banks of the wabash': a musical whodunit" [ ] ( ) dowell, richard w. "'you will not like me, i'm sure" [ ] ( ) dreiser, edward m. "theodore dreiser" [ ] ( ) "dreiser: detroit's favorite author" [ ?] ( ) "dreiser in passaic" [ ] ( ) duis, perry. chicago: creating new traditions [ ] ( ) dumont, henry. [introduction to a biography of george sterling, with additional material by henry von sabern] ( ) dunsany, lord. "a night at an inn" [ ] ( ) elias, robert. "dreiser: bibliography and the biographer" [ ] ( ) elias, robert. "the library's dreiser collection" [ ] ( ) elias, robert. "theodore dreiser: a classic of tomorrow" [ca. ] ( ) esherick, wharton. "he helps me build a building" ( ) "f." "our civilization" ( ) farrell, james t. 
"the fate of writing in america" [ ] ( ) farrell, james t. "a night in august, " ( ) farrell, james t. "some correspondence with theodore dreiser" [ ] ( ) farrell, james t. "theodore dreiser" [ ] ( ) fast, howard. [introduction to best short stories of theodore dreiser, ] ( ) fawcett, james waldo. "the genius" [poem] ( ) ficke, arthur davison. "memory of theodore dreiser" [ ] ( ) ficke, arthur davison. "to theodore dreiser on reading 'the genius'" [ ] ( ) [review of the financier ] ( ) fort, charles. "had to go somewhere" [ ] ( ) fox, george l. "the panama canal as a business venture" [ ?] ( ) freeman, john. "an american tragedy" [review of td's book, ] ( ) "the french in syria" [after ] ( ) friedman, stanley j. "theodore dreiser and the dispossessed" [ ] ( ) furmańczyk, wiesĺaw. "a naturalist's view of ethics" [ ] ( ) furmańczyk, wiesĺaw. "theodore dreiser's views on religion in the light of his philosophical papers" [ ] ( ) gerber, philip l. "dreiser meets balzac at the 'allegheny carnegie'" [ ] ( ) gerber, philip l. "dreiser's financier: a genius" [ ] ( ) gerson, thomas. "for theodore dreiser" [poem] ( ) gibson, pauline. "the ghost of benjamin sweet" [ ] ( ) gilman, lawrence. "an author's famous friends" ( ) glaenzer, richard butler. "dreiser" [ ] [poem] ( ) goldschmidt, alfonso. "holitscher und dreiser" [ ] ( ) goldschmidt, alfonso and lina goldschmidt. [comments on td, in spanish, ] ( ) goldschmidt, lina. "theodore dreiser" [in german] ( ) goodman, lillian rosedale. "you have my heart" [song] ( ) griffin, joseph. "butcher rogaum's door': dreiser's early tale of new york" [ ] ( ) griffin, joseph. "dreiser revealed and restored" [ ] ( ) griffin, joseph. "theodore dreiser visits toronto" [ ] ( ) grosch, anthony r. "social issues in early chicago novels" [ ] ( ) halstead, blanche. "and yet?" [poem] ( ) halstead, blanche. "to a rose" [poem] ( ) hamilton, james burr (ed.). "the whipping block: a study of english education" [ ?] ( ) hapgood, hutchins. 
"out of the darkness" [a dialogue] ( ) hapgood, hutchins. "the primrose path" [play] ( ) "harlan county" and "revolt or tobacco" ( ) harris, marguerite tjader. "call for a re-issuing of dreiser's bulwark " [after ] ( ) harris, marguerite tjader. "dreiser's popularity in russia" [ ] ( ) harris, marguerite tjader. "dreiser's style" ( - ) harris, marguerite tjader. "god as looser" ( ) harris, marguerite tjader. "theodore dreiser loved science" [in russian, ] ( ) hartmann, sadakichi. "passport to immortality" [ ] ( ) [harvey, alexander]. [on the suppression of the "genius," ] ( ) harvey, dorothy dudley. "to t.d." ( ) [harvey], dorothy dudley. forgotten frontiers: dreiser and the land of the free [galleys, ] ( ) hazlitt, henry. "our greatest authors: how great are they?" [ ] ( ) hidaka, masayoshi. [ articles on td in japanese] ( ) hill, lawrence. [paper written for english course at yale university, ] ( ) hoffman, helene. "this myth virginity" ( ) [holloway, mrs.?]. ancient cosmologies and symbolisms ( - ) holtz, sophie. "a devil personified" ( ) huddleston, sisley. [essay in back to montparnasse ] ( ) hurst, fannie. "back street" [outline for a movie script by?] ( ) huth, john e., jr. "dreiser and success: an additional note" [ ] ( ) huth, john e., jr. "theodore dreiser, success monger" [ ] ( ) huth, john e., jr. "theodore dreiser: `the prophet'" [ ] ( ) international labor defense. "the international labor defense: its constitution and organization resolution" [ ] and "death penalty" [ ] ( ) [introductory remarks by? on appearance together of rabindranath tagore and ruth st. denis] ( ) jarmuth, edith delong. "to theodore dreiser" [poem] ( ) jerome, helen. "dreiser: the man of sorrow" [poem] ( ) kalinka, maga. "to t.d." [poem] ( ) kapustka, bruce. "shadows of dreams and souls" [poem] ( ) kazin, alfred. "the lady and the tiger: edith wharton and theodore dreiser" [ ] ( ) keeffe, grace m. "novelistas de la nueva generación: louis bromfield" [ ] ( ) king, alexandra c. 
"theodore dreiser: an impression" [poem] ( ) knight, eric m. "pimpery—twentieth century" ( ) kraft, h. s. "dreiser's war in hollywood" [ ] ( ) kussell, sally. "the cheat" ( ) [kussell, sally.]. "the love of lizzie morris" ( ) kuttner, alfred b. "the lyrical mr. dreiser" [ ] ( ) la follette, suzanne. "the modern maecenas" [ ] [fragment] ( ) latour, marian. "to t.d." [poem] ( ) leberthon, ted. "this side of nirvana" [ s] ( ) le clercq, j. g. c., and w. h. chamberlin. "books, art and morality" [ ] ( ) lee, gerald stanley. [from "the lost art of reading," / ?] ( ) lengel, william c. "books that made me what i am today" [ ] ( ) lengel, william c. "the `genius' himself" [ ] ( ) lengel, william c. "theodore dreiser" [poem] ( ) llona, victor. "les u.s.a. jugés par théodore dreiser" [ ] ( ) logan, chass. "sister carrie" [review] ( ) lord, david. "dreiser today" [ ] ( ) lyon, harris merton. "the chorus girl" (see box , folder ) lyon, harris merton. "eve and the walled-in boy" ( ) lyon, harris merton. "from fancy's point of views" ( ) lyon, harris merton. "an unused pattlesnake" ( ) lyon, harris merton. "the weaver who clad the summer" ( ) [mccord, donald p.]. "one night" [by "michael vivadieu"] ( ) [mccord, donald p.]. "we, the people" [by "michael vivadieu"] ( ) mccord, p[eter] b. "niangua's tears" ( ) mccoy, esther. "outward journey" ( ) [n.b.: other writings by esther mccoy are in her correspondence file] mcdonald, edward. "dreiser before `sister carrie'" [ ] ( ) markham, kirah. "k.m. to th.d." and "to my love" ( ) [markham, kirah?]. "sisters" [play] ( - ) [markham, kirah?]. [untitled play] ( ) masters, edgar lee. "the return" [ ] ( ) masters, edgar lee. "taking dreiser to spoon river" [ ] ( ) masters, edgar lee. "theodore dreiser—a portrait" [ ] ( ) masters, edgar lee. "theodore the poet" ( ) masters, marcia lee. "ghostwriting for theodore dreiser" [ ] ( ) mencken, h. l. "american street names [ ] ( ) mencken, h. l. "the birth of new verbs" [after ] ( ) mencken, h. l. 
"bulletin on `hon'" [ ] ( ) mencken, h. l. "designations for colored folk" [after ] ( ) mencken, h. l. [review of a gallery of women, ] ( ) mencken, h. l. "names for americans" [ ] ( ) mencken, h. l. "some opprobrious nicknames" [ ] ( ) mencken, h. l. "war words in england" [ ] ( ) mencken, h. l. "what the people of american towns call themselves" [ ] ( ) mencken, h. l. [statement used in td's memorial service, ] ( ) michail gourakin, by lappo danileveskaya [book review by?] ( ) miller, william e., and neda m. westlake (eds.). "essays in honor of theodore dreiser's sister carrie " [special issue of  library chronicle, ] ( ) minor, robert. [address to dec. meeting of the national committee for the defense of political prisoners] ( ) mizuguchi, shigeo. "the dreiser collection at the university of pennsylvania" [in japanese] ( ) mizuguchi, shigeo. [article on td in japanese, ] ( mooney, martin. [statement on his firing by universal studios, after march ] ( ) mordell, albert. "my relations with theodore dreiser" [ ] ( - mouri, itaru. [ articles on td in japanese, with synopses for of the in english, - ] ( ) national grays harbor committee. defend civil rights in grays harbor county" [ ] ( ) "notes of mr. theodore dreiser's ideas on: the stabilizing of personal emotions " ( ) oppenheim, james. "theodore dreiser" [poem] ( ) palmer, erwin. "theodore dreiser, poet" [ ] ( ) "the passing of pan" [poem] ( ) patel, rajni. "brother india" [ ] [preface by paul robeson] ( ) paz, magdeleine. "vue sur l'amerique" [after ] ( ) perdeck, a. "realism in modern american fiction" [ ] ( ) perfilieff, vladimir. [untitled account of incidents in the far north among the eskimo] ( ) [perfilieff, vladimir]. [untitled essay] ( ) pizer, donald. "dreiser's novels: the editorial problem" [ ] ( ) poe, edgar allen. "the tell tale heart" [radio dramatization by ?, ] ( ) [poetry by ?] ( ) "policy" and "note of separate comment" ( ) "the pool" [poem] ( ) powys, john cowper. 
"nietzsche" [notebook] ( ) powys, john cowper. wolf solent [ ] [bound page proofs] ( ) "public sucker number one" [by i. n. weber or william c. lengel, after ] ( ) raja, l. jeganatha (ed.). journal of life, art and literature [special issue on theodore dreiser, ] ( ) reilly, william j. "of the screen by the screen and for the screen" [ ] ( ) reis, irving. "st. louis blues" [ ] [radio play] ( ) riggs, lynn. "the lonesome west" [ ] [play] ( - ) robinson, leroy. "john howard lawson's struggle with sister carrie " [ ] ( ) "romance" [plot for a play] ( ) roosevelt, franklin delano. "our realization of tomorrow" [ ] ( ) root, waverly lewis. [review of french translation of "nigger jeff," by victor llona, in contemporary foreign novelists, ] ( ) rosenthal, elias. "theodore dreiser's 'genius' damned" [ ] ( ) salzman, jack. "the publication of sister carrie : fact and fiction" [ ] ( ) salzman, jack. (ed.). modern fiction studies [special issue on theodore dreiser, ] ( ) [sayre, kathryn]. "a cosmos of women" ( ) [sayre, kathryn]. "the themes of dreiser" ( ) [n.b.: other writings by kathryn sayre are in her correspondence file] [scottsboro trial, press release and notes, ] ( ) scudder, raymond. "samuel f. b. morse" [ ] ( ) sebestyén, karl. "theodore dreiser at home" [ ] ( ) seymour, katherine. "famous loves: cleopatra: episode no. " [ ] ( ) seymour, katherine. "famous loves: episode : heloise and abelard" ( ) "seymour seligman on 'theodore dreiser and his gallery of women'" ( ) "shaw on dreiser" [ ] ( ) shively, henry l. "how hickey escaped the fate of lot's wife" ( ) [review of sister carrie in  style and american dressmaker, ] ( ) smith, edward h. "dreiser—after twenty years" [ ] ( ) smith, lorna. "theodore dreiser" [ essays] ( ) smith, mary elizabeth. "theodore dreiser: a great american" ( ) spector, frank. "story of the imperial valley" [ ] ( ) "stars at a glance" ( ) sterling, george. "everest" [poem] ( ) sterling, george. "intimations of infinity" ( ) sterling, george. 
"sonnets to craig" [ ] ( ) sterling, george. "strange waters" [poem] ( ) stevenson, lionel. "george sterling's place in modern poetry" [ ] ( ) "story for a musical comedy" ( ) "suggestions for radio playwrights: campana's 'first nighter' 'grand hotel' broadcasts" ( ) tatum, anna p. "christ petrified" [poem] ( ) taylor, g. r. stirling. "theodore dreiser" [ ] ( ) "theodore dreiser" ( ) "theodore dreiser" [poem] ( ) "theodore dreiser: court reporter" ( ) "theodore dreiser centenary exhibit" [catalog, ] ( ) theodore dreiser centenary issue of the library chronicle [ ] ( ) thomas, norman. "will fascism come to america?" [ ] ( ) "to theodore dreiser author of 'chains'" [poem] ( ) "tom kromer's autobiography" ( ) troy, william. "the eisenstein muddle" [ ] ( ) "under currents" ( ) wadsworth, p. beaumont. "america ueber alles" [ ] ( ) warren, whitney. "'the vicious circle'" [ ] ( ) weaver, raymond. "a complete handbook of opinion" [ ] ( ) "the weavers" [play] ( ) [williams, alexander]. [essay in response to tragic america ] ( ) [williams, estelle kubitz?]. "an aristocrat" ( ) [williams, estelle kubitz?]. "the austrian tangle" ( ) [williams, alexander]. [autobiographical account written after ] ( ) [williams, alexander]. "bee" ( ) [williams, alexander]. [diary notes from july - sept. ] ( ) [williams, alexander]. [diary notes from - march ] ( ) [williams, alexander]. "a dream" ( ) [williams, estelle kubitz?]. "the heir" ( ) [williams, alexander]. "an idyl" ( ) [williams, alexander]. "misplaced ambition" ( ) [williams, alexander]. "my stage experiences" [by "miss nonentity"] ( ) [williams, alexander]. "the one hundred hoddy-doddys" ( ) [williams, alexander]. [poems, jokes] ( ) [williams, estelle kubitz?]. "tissemao and the cuttlefish" ( ) [williams, estelle kubitz?]. [untitled story] ( ) woljeska, helen. "the end of the ideal" [ ) [play] ( ) yewdall, merton s. "theodore dreiser—man and scientific mystic" ( ) zanine, louis j. 
"from mechanism to mysticism: theodore dreiser and the religion of science" [ ] ( - ) [ untitled typescripts] ( - ) cassette tape of lecture on td by fred c. harrison, and note to myrtle butcher, nov. ( ) "murder on big moose?": videotape and note from trina carman, sept. ( ) return to top » © university of pennsylvania | dmcknigh@pobox.upenn.edu     none none commonplace.net commonplace.net data. the final frontier. infrastructure for heritage institutions – open and linked data in my june post in this series, &# ;infrastructure for heritage institutions – change of course&# ;&# ;&# ; , i said: &# ;the results of both data licences and the data quality projects (object pid’s, controlled vocabularies, metadata set) will go into the new data publication project, which will be undertaken in the second half of . this project is aimed at publishing our collection data as open and linked data in various formats via various channels. a [&# ;] infrastructure for heritage institutions – ark pid’s in the digital infrastructure program at the library of the university of amsterdam we have reached a first milestone. in my previous post in the infrastructure for heritage institutions series, &# ;change of course&# ;, i mentioned the coming implementation of ark persistent identifiers for our collection objects. since november , , ark pid&# ;s are available for our university library alma catalogue through the primo user interface. implementation of ark pid&# ;s for the other collection description systems [&# ;] infrastructure for heritage institutions – change of course in july i published the first post about our planning to realise a “coherent and future proof digital infrastructure” for the library of the university of amsterdam. in february i reported on the first results. as frequently happens, since then the conditions have changed, and naturally we had to adapt the direction we are following to achieve our goals. in other words: a change of course, of course. 
"projects" i will leave aside the […] infrastructure for heritage institutions – first results in july i published the post "infrastructure for heritage institutions", in which i described our planning to realise a "coherent and future proof digital infrastructure" for the library of the university of amsterdam. time to look back: how far have we come? and time to look forward: what's in store for the near future? ongoing activities i mentioned three "currently ongoing activities": monitoring and advising on infrastructural aspects of new projects, maintaining a structured dynamic overview […] infrastructure for heritage institutions during my vacation i saw this tweet by liber about topics to address, as suggested by the participants of the liber conference in dublin: it shows a word cloud (yes, a word cloud) containing a large number of terms. i list the ones i can read without zooming in (so the most suggested ones, i guess), more or less grouped thematically: open science, open data, open access, licensing, copyrights, linked open data, open education, citizen science, scholarly communication, digital humanities/dh, digital scholarship, research assessment, research […] ten years linked open data this post is the english translation of my original article in dutch, published in meta ( - ), the flemish journal for information professionals. ten years after the term "linked data" was introduced by tim berners-lee it appears to be time to take stock of the impact of linked data for libraries and other heritage institutions in the past and in the future. i will do this from a personal historical perspective, as a library technology professional, […] maps, dictionaries and guidebooks interoperability in heterogeneous library data landscapes libraries have to deal with a highly opaque landscape of heterogeneous data sources, data types, data formats, data flows, data transformations and data redundancies, which i have earlier characterized as a "data maze". 
the level and magnitude of this opacity and heterogeneity varies with the number of content types and the number of services that the library is responsible for. academic and national libraries are possibly dealing with more […] standard deviations in data modeling, mapping and manipulation or: anything goes. what are we thinking? an impression of elag this year’s elag conference in stockholm was one of many questions. not only the usual questions following each presentation (always elicited in the form of yet another question: “any questions?”). but also philosophical ones (why? what?). and practical ones (what time? where? how? how much?). and there were some answers too, fortunately. this is my rather personal impression of the event. for a […] analysing library data flows for efficient innovation in my work at the library of the university of amsterdam i am currently taking a step forward by actually taking a step back from a number of forefront activities in discovery, linked open data and integrated research information towards a more hidden, but also more fundamental enterprise in the area of data infrastructure and information architecture. all for a good cause, for in the end a good data infrastructure is essential for delivering high […] looking for data tricks in libraryland ifla annual world library and information congress lyon – libraries, citizens, societies: confluence for knowledge after attending the ifla library linked data satellite meeting in paris i travelled to lyon for the first three days (august - ) of the ifla annual world library and information congress.
this year’s theme “libraries, citizens, societies: confluence for knowledge” was named after the confluence or convergence of the rivers rhône and saône where the city of […] library hat http://www.bohyunkim.net/blog/ blockchain: merits, issues, and suggestions for compelling use cases ** this post was also published in acrl techconnect. *** blockchain holds a great potential for both innovation and disruption. the adoption of blockchain also poses certain risks, and those risks will need to be addressed and mitigated before blockchain becomes mainstream. a lot of people have heard of blockchain at this point. but many are […] taking diversity to the next level ** this post was also published in acrl techconnect on dec. , .*** getting minorities on board i recently moderated a panel discussion program titled “building bridges in a divisive climate: diversity in libraries, archives, and museums.” participating in organizing this program was an interesting experience. during the whole time, i experienced my perspective constantly shifting […] from need to want: how to maximize social impact for libraries, archives, and museums at the ndp at three event organized by imls yesterday, sayeed choudhury on the “open scholarly communications” panel suggested that libraries think about return on impact in addition to return on investment (roi). he further elaborated on this point by proposing a possible description of such impact. his description was that when an object or […] how to price 3d printing service fees ** this post was originally published in acrl techconnect on may , .*** many libraries today provide 3d printing service. but not all of them can afford to do so for free. while free 3d printing may be ideal, it can jeopardize the sustainability of the service over time.
nevertheless, many libraries tend to worry […] post-election statements and messages that reaffirm diversity these are statements and messages sent out publicly or internally to re-affirm diversity, equity, and inclusion by libraries or higher ed institutions. i have collected these – some myself and many others through my fellow librarians. some of them were listed on my blog post, “finding the right words in post-election libraries and higher ed.” […] finding the right words in post-election libraries and higher ed ** this post was originally published in acrl techconnect on nov. , .*** this year’s election result has presented a huge challenge to all of us who work in higher education and libraries. usually, libraries, universities, and colleges do not comment on presidential election results and we refrain from talking about politics at work. but […] say it out loud – diversity, equity, and inclusion i usually and mostly talk about technology. but technology is so far away from my thought right now. i don’t feel that i can afford to worry about internet surveillance or how to protect privacy at this moment. not that they are unimportant. such a worry is real and deserves our attention and investigation. but […] cybersecurity, usability, online privacy, and digital surveillance ** this post was originally published in acrl techconnect on may , .*** cybersecurity is an interesting and important topic, one closely connected to those of online privacy and digital surveillance. many of us know that it is difficult to keep things private on the internet. the internet was invented to share things with others […] three recent talks of mine on ux, data visualization, and it management i have been swamped at work and pretty quiet here in my blog. but i gave a few talks recently. so i wanted to share those at least.
i presented about how to turn the traditional library it department and its operation that is usually behind the scene into a more patron-facing unit at the recent american library association midwinter […] near us and libraries, robots have arrived ** this post was originally published in acrl techconnect on oct. , .*** the movie, robot and frank, describes the future in which the elderly have a robot as their companion and also as a helper. the robot monitors various activities that relate to both mental and physical health and helps frank with various house chores. […] dan cohen vice provost, dean, and professor at northeastern university when we look back on , what will we see? it is far too early to understand what happened in this historic year of , but not too soon to grasp what we will write that history from: data—really big data, gathered from our devices and ourselves. sometimes a new technology provides an important lens through which a historical event is recorded, viewed, and remembered. […] more than that “less talk, more grok.” that was one of our early mottos at thatcamp, the humanities and technology camp, which started at the roy rosenzweig center for history and new media at george mason university in . it was a riff on “less talk, more rock,” the motto of waaf, the hard rock station in worcester, massachusetts. and […] humane ingenuity: my new newsletter with the start of this academic year, i’m launching a new newsletter to explore technology that helps rather than hurts human understanding, and human understanding that helps us create better technology. it’s called humane ingenuity, and you can subscribe here. (it’s free, just drop your email address into that link.) subscribers to this blog know […] engagement is the enemy of serendipity whenever i’m grumpy about an update to a technology i use, i try to perform a self-audit examining why i’m unhappy about this change.
it’s a helpful exercise since we are all by nature resistant to even minor alterations to the technologies we use every day (which is why website redesign is now a synonym […] on the response to my atlantic essay on the decline in the use of print books in universities i was not expecting—but was gratified to see—an enormous response to my latest piece in the atlantic, “the books of college libraries are turning into wallpaper,” on the seemingly inexorable decline in the circulation of print books on campus. i’m not sure that i’ve ever written anything that has generated as much feedback, commentary, and […] what’s new season wrap-up with the end of the academic year at northeastern university, the library wraps up our what’s new podcast, an interview series with researchers who help us understand, in plainspoken ways, some of the latest discoveries and ideas about our world. this year’s slate of podcasts, like last year’s, was extraordinarily diverse, ranging from the threat […] when a presidential library is digital i’ve got a new piece over at the atlantic on barack obama’s prospective presidential library, which will be digital rather than physical. this has caused some consternation. we need to realize, however, that the obama library is already largely digital: the vast majority of the record his presidency left behind consists not of evocative handwritten […] robin sloan’s fusion of technology and humanity when roy rosenzweig and i wrote digital history years ago, we spent a lot of time thinking about the overall tone and approach of the book.
it seemed to us that there were, on the one hand, a lot of our colleagues in professional history who were adamantly opposed to the use of digital […] presidential libraries and the digitization of our lives buried in the recent debates (new york times, chicago tribune, the public historian) about the nature, objectives, and location of the obama presidential center is the inexorable move toward a world in which virtually all of the documentation about our lives is digital. to make this decades-long shift—now almost complete—clear, i made the following infographic […] kathleen fitzpatrick’s generous thinking generosity and thoughtfulness are not in abundance right now, and so kathleen fitzpatrick’s important new book, generous thinking: a radical approach to saving the university, is wholeheartedly welcome. the generosity kathleen seeks relates to lost virtues, such as listening to others and deconstructing barriers between groups. as such, generous thinking can be helpfully read alongside […] the dream coach. a celebration of women writers the dream coach by anne parrish, - and dillwyn parrish, - . new york: the macmillan company, . copyright not renewed. a newbery honor book, . the dream coach the macmillan company new york ˙ boston ˙ chicago ˙ dallas atlanta ˙ san francisco macmillan & co., limited london ˙ bombay ˙ calcutta melbourne the macmillan co. of canada, ltd. toronto the dream coach fare: forty winks coach leaves every night for no one knows where * * and here is told how a princess, a little chinese emperor, a french boy, & a norwegian boy took trips in this great coach * by anne and dillwyn parrish * * with pictures & a map by the authors new york the macmillan company ☆ ☆ mcmxxiv ☆ ☆ copyright, . by the macmillan company. set up and electrotyped. published, september, .
printed in the united states of america to everett and roland jackson contents   page the dream coach the seven white dreams of the king's little daughter goran's dream a bird-cage with tassels of purple and pearls     (three dreams of a little chinese emperor) "king" philippe's dream the dream coach the dream coach if you have been unhappy all the day, wait patiently until the night: when in the sky the gentle stars are bright the dream coach comes to carry you away. great coach, great coach, how fat and bright your sides, to please the child who rides! painted with funny men – see that one's hose, how blue! how red and long is that one's nose! and under this one's arm a flapping cock! great dandelions tell us what o'clock with silver globe much bigger than the moon — dream coach, come soon! come soon! what pretty pictures! angels at their play, and brown and lilac butterflies, and spray of stars, and animals from far away, grey elephants, a bright pink water bird; things lovely and absurd. as the wheels turn, they wake to lovely sound, musical boxes – as the wheels go round they play a little silver spray of notes: "swift runs the river" – "bluebells in the wood" — "the waterfall" – "the child who has been good" — like splash of foam at keel of little boats. under a sky of duck-egg green have you not seen the hundred misty horses that delight to draw the coach all night, and the queer little driver sitting high and singing to the sky? his hat is as tall as a cypress tree, his hair is as white as snow; his cheeks and his nose are as red as can be; he sings: "come along! come along with me!" let us go! let us go! his coat is speckledy red and black, his boots are as green as a beetle's back, his beard has a fringe of silver bells and scarlet berries and small white shells, and as through the night the dream coach gleams, the song he sings like a banner streams: "nothing is real in all the world, nothing is real but dreams." 
through sound of rain the dream coach gallops fast. all those that we have loved are riding there: i hear their laughter on the misty air. i wait for you – i have been waiting long: far off i hear the driver's tiny song – oh, dream coach! come at last! (from knee-high to a grasshopper.) when the driver of the dream coach reached the last small star in the sky, he unharnessed his hundred misty horses and put them out to pasture in the great blue meadow of heaven. it was well he reached the end of his journey when he did, for in another moment a mounting wave of sunlight and wind, rushing up from the world far below, blew out the silver-white flame of the star so that no one could follow the strange driver and his strange coach to their resting place. resting place? what a mistake! the driver of the dream coach never rests. you see, there are so many things to do even when he is carrying no passengers. there are new dreams to invent: queer dreams, funny dreams, fairy dreams, goblin dreams, happy dreams, exciting dreams, short dreams, long dreams, brightly colored dreams, and dreams made out of shadows and mist that vanish as soon as one opens one's eyes. then there is the very bothersome matter of keeping the records straight, records of those who deserve good dreams, those who need cheering with ridiculous dreams, and those, alas, who have been bad and naughty and have to be punished (how the little driver hates this!) with nightmares. it is hard to keep all those dreams from getting mixed up, there are so many of them. indeed, sometimes, they do get mixed up, and a good child, who was meant to have a dream as pretty as a pansy or as funny as a frog, gets a nightmare by mistake. but the driver of the dream coach tries as hard as he possibly can never to let this happen. he has so very much to do that he never would catch up with his work no matter how quickly his beautiful horses galloped from star to star, from world to world, if there was not some one to help him.
there are little angels who help the driver of the dream coach. in their gold and white book they keep a record of every one on earth. as soon as the driver of the dream coach had unharnessed his horses he went to these angels and planned his next trip. what a busy night it was to be! if i should use all the paper and all the pencils in the world i could not begin to tell you about all the dreams he arranged to carry to the sleeping world. and yet there was one child who was nearly forgotten, a little princess whose name had been written at the top of a new page which the driver had neglected to turn in his hurry. "surely you are not going to forget the little princess on her birthday!" pleaded the little angels, turning the page. "oh, dear!" said the driver. "that will never do; now, will it? and yet – i simply can't pack another dream into the coach. i'm sorry, but i'm afraid – " "oh, dear!" echoed the angels. "perhaps – " just then one of the youngest angels, who happened to be leaning over the parapet of paradise, saw the princess begin to cry, and took in the situation instantly. so he hurried to the others and suggested that he himself should carry a dream to the little princess. the driver of the dream coach thought this was a splendid idea and thanked him again and again for his help. that is how the seven white dreams of the king's little daughter were carried to her by an angel, and as you know (or if you don't, i will tell you) the dreams carried in the moonbeam basket of the angels are the most beautiful of all. what did the princess dream? that you shall hear. i cannot remember all the names of the king's little daughter, and indeed few can. the archbishop who christened her says that he can, but he is so great and so deaf a dignitary that no one would think of asking him to prove it. 
they are all there, twelve pages of them, in the great book where are recorded the baptisms of all the royal babies, so that you can look for yourself if none of the ones i can remember – angelica mary delphine violet candida pamelia petronella victoire veronica monica anastasia yvonne – happen to please you. it was the fifth birthday of the little princess, and there were to be great celebrations in her honor. fireworks would blossom in the night sky, and in the gardens lanterns were hung like bubbles of colored light from white rose tree to red, while the great fountains would turn from pink to mauve, from mauve to azure, to amber, and to green, as they flung up slender stems and great spreading lacy fronds of water. every one from the king down to the smallest kitchen-maid had new clothes for the occasion, and the chief cook had created a birthday cake iced with fairy grottoes and gardens of spun sugar, so huge and so heavy that the princess's ten pages in their new sky-blue and silver liveries, staggered under the weight of it. the little princess had a new gown of white satin, sewn so thickly with pearls that it was perfectly stiff, and stood as well without her as when she was inside it. it was standing by her bedside when the bells of the city awoke her on her birthday morning, together with her silver bath shaped like a great shell, and her nine lace petticoats, and her hoops to go over the petticoats, and her little white slippers on a cushion of cloth-of-silver, and her whalebone stays, and her cobweb stockings, and her ten ladies-in-waiting, grand duchesses every one. when she opened her blue eyes they all swept her the deepest curtsies, their skirts of bright brocade billowing up about them, and said together: "long life and happiness to your serene highness!" and then the first grand duchess popped her out of bed and into her bath, where she got a great deal of soap in the princess's eyes while she conversed in a most respectful and edifying manner. 
the second grand duchess, who was lady-in-waiting-in-charge-of-the-imperial-towel, was even more respectful, and nearly rubbed the princess's tiny button of a nose entirely off her face. the third grand duchess brushed and combed the little duck tails of yellow silk that covered the royal head; and oh, how she did pull! the fourth grand duchess was lady-in-waiting-in-charge-of-the-imperial-shift, and as she was rather old and slow, although extremely noble, the princess grew cold indeed before the shift covered up her little pink body. the fifth grand duchess put on the rigid stays. the sixth put on the stockings and slippers. the seventh was very important and gave herself airs, for the nine lace petticoats were her concern. the eighth grand duchess was lady-in-waiting-in-charge-of-the-imperial-hoops. the ninth put on the little princess the dress of satin and pearls, that glowed softly like moonlit drops of water. and the tenth grand duchess, the oldest and ugliest and noblest and crossest and most respectful of them all, placed on the yellow head the little frosty crown of diamonds. then the princess's father confessor, a very noble prince of the church, dressed in violet from top to toe, came in between two little boys in lace, and said a long prayer in latin. it was so long that, i am sorry to have to tell you, right in the middle the princess yawned, so of course another long prayer had to be said to ask heaven to overlook such shocking wickedness on the part of her highness. then the chief-steward-in-attendance-on-the-princess brought her breakfast – bread and milk in a silver porringer. the little princess had hoped for strawberries, as it was her birthday, but the chief gardener was saving every strawberry in the royal gardens for the great birthday banquet that was to be held that evening. then the little princess went to say good morning to her mother and father, and this is the way she went. 
first came two heralds in forest green, blowing on silver trumpets. then came the father confessor and his little lace-covered boys. then came the ladies-in-waiting in their bright brocades, with feathers in their powdered hair, and after each lady came a little black page to carry her handkerchief on a satin cushion. the ten pages of the princess were next, and after them came the royal baby's own regiment of dragoons in white and scarlet. and last came four gigantic blacks wearing white loin cloths and enormous turbans of flamingo pink, and carrying a great canopy of cloth-of-silver fringed with pearls, and under this, very tiny, and looking, in her spreading gown, like a little white hollyhock out for a walk, came the princess. after she had curtsied, and kissed the hands of her royal parents, her father gave her a rope of milk-white pearls and her mother gave her a ruby as big as a pigeon's egg, both of which were instantly locked up in the royal treasury. they then bestowed upon her, in addition to her other titles, that of grand duchess of pinchpinchowitz, which took so long to do that when she had said thank you it was time for lunch, which was just the same as breakfast, except that this time the porringer was gold. after lunch the prime minister read the princess an illuminated birthday greeting from her loyal subjects, which ran along so that the ladies-in-waiting nearly yawned their heads off behind their painted fans, and the princess had a nice little nap, and dreamed that there would be strawberries for supper. but instead there was bread and milk in a porringer covered with turquoises and moonstones.
then, as the younger ladies-in-waiting were thinking of the gentlemen-of-the-court who would be waiting for them among the rose trees and yew hedges, to watch the colored water of the fountains and listen to the harps and flutes, and as the older ladies-in-waiting were thinking of comfortable seats out of a draught in the state ball room, and having the choicest morsels of roasted peacock and larks' tongue pie and frozen nectarines, they popped the princess into bed pretty promptly – indeed, an hour earlier than usual – and went off to celebrate her birthday. the room in which the little princess lay was as big as a church, and the great bed was as big as a chapel. four carved posts as tall as palm trees in a tropic jungle, held a canopy of needlework where hunters rode and hounds gave chase and deer fled through dark forests. below this lay the broad smooth expanse of silken sheet and counterpane, and in the midst, as little and alone as a bird in an empty sky, lay the king's little daughter. one large tear rolled down her round pink cheek, and then another. the long dull day had tired her, and the great dim room frightened her, and she wanted to see the fireworks she had heard her pages whispering about. she sat up among her lace pillows, and her tears went splash, splash, on the embroidered flowers and leaves of her coverlet. one of the youngest angels happened to be leaning over the parapet of paradise when the princess began to cry, and he took in the situation instantly, and hurried off to his heavenly playmates to tell them about it. "it is her birthday," he said, "and no one has given her as much as a red apple or a white rose – only silly old rubies and pearls that she wasn't even allowed to play marbles with! and now they have left her to weep in the dark while they dance and feast! i shall go down to her and sit by her bed till her tears are dry, and take her a white dream as a gift." "oh, let me send a dream too!" cried another angel. "and let me!" 
"and let me!" so that by the time the little angel was ready to start to earth there were seven white dreams to be taken as birthday gifts from heaven, and he had to weave a basket of moonbeams to carry them in. that night the princess dreamed that she was a daisy in a field, dancing delicately in the wind among other daisies as thick as the stars in the milky way. feathery grasses danced with them, and yellow butterflies danced above, and the larks in the sky flung down cascades of lovely notes that scattered like spray on the joyous wind. some poor little girls were playing in the field. their feet were bare and their faded frocks were torn, but they danced and sang too. there came a rumbling like thunder, and through a gap in the hawthorn hedge the children and the daisies saw the king's little daughter driven past in her great scarlet coach drawn by eight dappled horses. they could see the little princess sitting up very straight with her crinoline puffing about her and her crown on her head, and after she had passed all the children played that they were princesses, making daisy crowns for their heads, and hoops of brier boughs to hold out their limp little petticoats. the next day the princess looked in vain for a daisy as she took her morning constitutional in the royal gardens. there were roses and lilies, blue irises, and striped red and yellow carnations tied to stakes, all stiff and straight. "hold up your head, serene highness!" snapped one of the ladies-in-waiting, who had had too many cherry tarts at too late an hour the night before. but daisies danced in the princess's heart.   the next night the princess dreamed that she was a little white cloud afloat in the bright blue sky. 
she floated over the blue sea and the white sand, and over black forests of whispering pines, and over a land where fields of tulips bloomed for miles, in squares of lovely colors, delicate rose and mauve and purple, coppery pink and creamy yellow, with canals running through them like strips of old, dark looking-glass. she floated over rye fields turning silver in the wind, and over nuns at work in their walled gardens, and finally over a great grim palace where a king's little daughter lived. "i would rather be free and afloat in the sky," thought the small white cloud.   when she took the air the next day, she looked up to see if any white clouds were in the sky. "her highness is growing very proud," said the ladies-in-waiting. "she holds her nose up in the air as a king's daughter should." on the third night, the princess dreamed she was a little lamb skipping and nibbling the new green grass in a meadow where hundreds of lilies of the valley were in bloom. they were still wet and sparkling with rain, but now the sun shone and a beautiful rainbow arched above the meadow and the lilies of the valley and the happy little lamb. through the rest of her life the gentleness of the lamb lay in the heart of the princess.   the next night she dreamed that she was a white butterfly drifting with other butterflies among the tree ferns and orchids of the jungle, gentle and safe from harm, although serpents lay among the branches of the trees and lions and tigers roamed through the green shadows. a white butterfly flew in at her window the next day. "a moth! a moth!" cried the ladies-in-waiting. "camphor and boughs of cedar must be procured instantly, or the dreadful creature will eat up her highness's ermine robes!" but the little princess knew better than that.   on the fifth night she dreamed that she was a tiny white egg lying in a nest that a humming bird had hung to a spray of fern by a rope of twisted spider's web.
the nest was softly and warmly lined with silky down, and above her was the soft warmth of the mother bird's breast. on the sixth night she was a snowflake. it was christmas night, and the towns and villages were gay. rosy light poured from every window, blurred by the falling snow, and the air was full of the sound of bells. high up on the mountain was a lonely wayside shrine with carved and painted wooden figures of the mother and her child whose birthday it was. there were no bells there, nor yellow candle light, but only snow and dark evergreen trees. the snowflake, whirling and dancing down from the sky, a tiny frosty star, gave its life as a birthday gift to the holy child, lying for its little moment in his outstretched hand. the angel was distressed to find, on the seventh night, that the seventh dream had slipped through a hole in the moonbeam basket and was lost. careless little angel! but it really did not matter, for instead of a dream, he showed himself to the princess. and she liked that the best of all, for she had never had any one to play with before, and there is no playmate equal to an angel. but the seventh dream is still drifting about the world – i wonder where? perhaps it will be upon my pillow to-night – perhaps upon yours. who knows? crack! went the driver's whip, but it did not hurt the galloping misty horses, for it was only a ribbon of rainbow that he liked to use because both he and his horses thought it so pretty. and away went the great coach, over the forests and over the seas, over the cities and plains, to a country where the sea thrusts long silver fingers into the land, where mountains are white with snow at the same time that the meadows are bright with wild flowers, and where in summer the sun never sets, and in winter it never rises. and here the dream coach drew up beside a cottage where a lonely little norwegian boy was falling asleep. "come, goran!" called the driver. 
"come, climb into the coach and find the dream i have brought for you!" who was goran? what dream did he find? that you shall hear. little goran and his grandmother lived in a tiny house in norway, high above the deep waters of a fjord. when goran was a baby they used to tie one end of a rope around his waist and the other to the door, so that if he toddled over the edge he could be hauled back like a fish on a line. but now he was no longer a baby, but a big boy, six years old, and he tried to take care of his grandmother as a big boy should. it was a lovely spot in summer, when the waterfalls went pouring down milk-white into the green fjord, sending up so much spray that they looked as if they were steaming hot; when rainbows hung in the sky; when the small steep meadows were bright with wild flowers, and even the sod roof of the cottage was like a little wild garden of harebells and pansies and strawberries that goran gathered for breakfast sometimes. he was happy all day then, fishing in the fjord, making a little cart for nanna, the goat, to pull, trying to teach gustava, the hen, to sing, putting on his fingers the pink and purple hats that he picked from the tall spires of wild foxglove and monkshood, and making them dance and bow, and listening to the loud music of the waterfalls after rain. and in the evening after supper goran's grandmother would tell him splendid stories while they sat together in the doorway making straw beehives, sewing the rounds of straw together with split blackberry briers. the sun would shine on the straw and make it look so yellow and glistening that goran would pretend he was making a golden beehive for the queen bee's palace. for where goran lived the sun never sets at all in the middle of summer, and it is bright daylight not only all day, but all night as well. 
you and i would never have known when to go to bed, but goran and his grandmother were used to it, and even gustava, the hen, knew enough to put her head under her wing and make her own dark night. but with winter, changes came. the flowers slept under the earth until spring's call should wake them, and yawning and stretching, s-t-r-e-t-c-h-i-n-g, they should stretch up into the air and sunlight. the waterfalls no longer flung up clouds of spray like smoke, but built roofs of ice over themselves. and, strangest of all, the winter darkness came, so that the days were like the nights, and you and i would never have known when to get up. "i must go to the village for our winter supplies before the snow falls and cuts us off," his grandmother said to goran one day. "neighbor skylstad has offered me a seat in his rowboat to-morrow, and will bring me back the next day. you won't be afraid to stay here alone, will you, goran?" "no, grandmother," said goran. he pretended to be tremendously interested in poking his finger into the earth in a geranium pot, so that his grandmother shouldn't see that his eyes were full of tears and his lower lip was trembling. for to tell you the truth he was frightened. the little house was so far from any other house, and then goran had never spent a night alone. last year when the winter's supplies were bought, he had gone to the village with his grandfather, and he had told nanna and gustava and mejau, the cat, all about what a wonderful place it was, a thousand times over; the warm shop, with its great cheeses in wooden boxes painted with bright birds and flowers, and its glowing stove, as tall and slim as a proud lady in a black dress, with a wreath of iron ferns upon her head; the other children who had let him play with them while grandfather exchanged the socks and mittens knitted by grandmother for potatoes and candles. 
And they had slept at the inn under a feather bed so heavy that you would have thought by morning they would have been pressed as flat as the flowers in Grandmother's big Bible. But they weren't! They got up just as round as ever, and had a wonderful breakfast of dark grayish-brown goats'-milk cheese, cold herring, and stewed bilberries. Grandfather had gone to Heaven since then, and Goran wondered if he could possibly be finding it as delightful as the village.

How he did want to go this time! But of course he knew that some one must stay behind to feed Nanna and Gustava and Mejau, to tend the fire and water the geraniums and wind the clock. So he said as bravely as he could:

"I'll take care of everything, Grandmother."

Soon after his grandmother left, the snow began to fall. How that frightened Goran! Suppose it snowed so hard that she could never get back to him! For when winter really began, the little house was often up to its chimney in snow, and they could get to no one, and no one could get to them. How poor little Goran's heart began to hammer at the thought!

He fell to work to make himself forget the snow. First, seizing a broom made of a bundle of twigs, he swept the hard earth floor, which in summer had so pretty a carpet of green leaves, strewn fresh every day by Goran and his grandmother. Then he poured some water on the geraniums in the window, only spilling a little on himself. Then he stroked Mejau, who was purring loudly in front of the fire; and all this made him feel much better.

"Time for dinner, Goran!" said the old clock on the wall. At least it said: "Ding! Ding! Ding! Ding! Ding! Ding! Ding! Ding! Ding! Ding! Ding! Ding!" which meant the same thing. So Goran ate the goats'-milk cheese and black bread that his grandmother had left for him; and then, and not before, he summoned up enough courage to look out to see if the snow was still falling. It was snowing harder than ever, and already everything had a deep fluffy covering.
Oh, would his grandmother ever be able to get back to him? But he must be brave, and not cry, for he was six years old. He said a little prayer, as his grandmother had taught him to do whenever he was frightened or unhappy, and his heavy heart grew lighter.

"I'll make a snowman," Goran decided. Perhaps then the time would seem shorter. Grandfather and he had made a splendid snowman after the first snowfall last winter.

It was not late enough in the year to have the day as dark as night. It was only as dark as a deep winter twilight, and the white snow seemed to give out a light of its own for Goran to work by. First he found an old broomstick and thrust it into the snow so that it stood upright. Then he pushed the heavy wet snow around it, patting on here, scooping out there, until there was a body to hold the big snowball he rolled for the head. A bent twig pressed in made a pleasant smile, and for eyes Goran ran indoors and took from the little box that held his treasures two marbles of sky-blue glass that his grandfather had given him once for his birthday. What a beautiful snowman! With his sky-blue eyes he gazed through the falling snow at little Goran.

"Ding! Ding! Ding! Ding! Ding!" called the old clock, and that was the same as saying: "Time for supper, Goran!"

The fire lit up the room with a warm glow, painted the curtains crimson, and made wavering gigantic shadows on the walls. The water bubbled in the pot, and the boiling potatoes knocked against the lid. "Prr-prrr!" said Mejau, blinking in front of the blaze, and the old clock answered: "Tock! Tick! Tock!"

Goran had given their supper to Nanna and Gustava and Mejau, and had taken one good-night look at his snowman. Now he put his bowl of boiled potatoes on the table in front of the fire, and pulled up his chair. Lying on the floor where she had fallen from his box when he was getting his snowman's blue eyes was a playing card, the Queen of Clubs.
His grandfather had found it lying in the road in the village, and had brought it home as a present for Goran. The little boy thought the Queen was very splendid, with her crown and her veil, and her red dress trimmed with bands of blue and leaves and stars and rising suns of yellow. In one hand she held on high a little yellow flower. Now he picked her up and put her on a chair beside him, pretending the Queen had come for supper to keep him from being lonely. Each mouthful of potato he first offered her, with great politeness, but the delicate lady only gazed off into space.

Goran's supper made his insides feel as if a soft blanket had been tucked cozily about them, and he was warm and sleepy.

"Was there anything else Grandmother told me to do before I went to bed?" he murmured.

"Tick! Tock! Yes, there was," the clock replied. "She told you to wind me up. Climb on a chair and do it carefully. Don't shake me. I can't stand that, for I'm not as young as I used to be."

"And I want a drink!" cried the youngest geranium, who was little, and had been hidden by the bigger pots when Goran watered them.

Knock, knock, knock! What a knocking at the door! Goran ran to open it, and the firelight fell on Nanna the goat and Gustava the hen against a background of whirling snow. Nanna was wearing Grandmother's quilted jacket – where in the world had she found that? And Gustava had wrapped Goran's muffler about herself and the little basket she carried on her wing.

"Good evening!" began Nanna, rather timidly for her. "May Gustava and I come in and sit by the fire? We thought you might be lonely, and then it is so cold in the shed. I did have a muffler like Gustava's, but I absent-mindedly ate it. I'm growing very absent-minded. We've come with an important message for you, but I can't remember what it is. Can you, Gustava?"

"Cluck! Clu-uck! No, I can't.
But I've brought my beautiful child to call on you," said Gustava; and she lifted her wing and showed Goran the brown egg in her basket.

"Shut the door! Shut the door!" several geraniums called indignantly. "We are very delicate, and we shall catch our deaths of cold!"

So in came Nanna and Gustava and Gustava's egg, and Goran shut the door.

"Present my subjects!" commanded the Queen of Clubs, and Goran saw that she was no longer a little card, but a lady as big as his grandmother. In front she still wore her blue and red and yellow dress, but in back she was all blue, every inch of her, with a pattern of gilt stars, and when she turned sideways she seemed to vanish, for she was only as thick as cardboard. But she was so proud and grand that Goran wished he had on his Sunday suit, with the long black trousers and the short black jacket with its big silver buttons, the waistcoat all covered with needlework flowers, and the raspberry pink neckerchief.

"This is Nanna, our goat, Your Majesty," he said.

"Goat, you may kiss my hand," said the Queen.

"I don't know whether I want to," replied rude Nanna, who had never been presented to a queen before, and didn't know the proper way to behave.

"Mercy on us! What manners!" cried the geraniums, blushing deep red that the Queen should be spoken to in that manner, in what they thought of as their house.

"But I wouldn't mind eating your yellow flower," continued Nanna. "I like to eat flowers." And she looked at the geraniums, who nearly fainted.

"Your turn next," said the Queen to Gustava. She had heard gentlemen say that so often when they were playing skat with her and her companions that she always repeated it when she could think of nothing else to say.

"Squawk! Cluck!" cried Gustava. "Would Your Majesty like to see my beautiful child?" And she showed the Queen her egg. "Just look, Your Majesty! Have you ever seen anything more lovely? Such a pale brown color! Such an innocent expression!
Perhaps Your Majesty is also a mother?"

"Tick! Tock! Don't forget to wind me!" said the old clock.

"Gustava Hen talks too much," the fat teapot in the corner cupboard told her daughters the teacups. "When the Queen speaks to you, just say 'Yes, Your Majesty,' and 'No, Your Majesty,' and I dare say she will take you all to court and find you handsome husbands among the royal coffeecups."

"Your Majesty should see my beautiful home," went on Gustava. "A nest of pure gold!" (She thought it was gold, but it was really yellow straw.)

"Just like my throne," replied the Queen. "Speaking of beautiful homes, you should see my palace! There are fifty-three rooms!" (She said this because it was the highest number she knew, for there are fifty-three cards in the pack, counting the Joker who keeps all the cards amused when they are shut up in their box. And she had seen a room in the palace, because she had been used in a game of skat there, once in her early youth. But that was long, long ago.) "My throne and the King's throne are pure gold, just like your nest, my good Gustava. And the walls are painted red and white, in swirls, like strawberries and cream. The stove has such a tall slender figure, and wears a golden crown. And then, just imagine, all the lamps are dripping with icicles at the same time that the floor is covered with blooming roses!" (For that is how she thought of the glass lusters on the lamps and the carpet on the floor.)

"Icicles! Ice! Freezing! That reminds me of our important message!" cried Nanna. "Your snowman, Goran. He looks so dreadfully cold out there, we were afraid he would perish."

"Oh, yes! How could we have forgotten for so long! Cluck! Cluck! Cluck! He will certainly be frozen to death unless something is done quickly!"

"Do you mean to tell me that any one is out of doors on such a night as this?" questioned the Queen. "Have him brought in at once! Your turn next!"
And she looked so severely at Goran that he felt his ears getting red. So Goran and Nanna brought the snowman in, while the Queen gave orders from the doorway, Gustava sat on her darling egg to keep it warm, Mejau walked away with his tail as big as a bottle brush, and the geraniums cried in chorus: "Shut the door! Shut the door! We shall all catch cold!"

"Poor thing! How pale he is!" exclaimed the Queen. "And how dreadfully cold! Put him in a chair by the fire!"

The snowman looked out of wondering sky-blue glass eyes, but said never a word, for he was very shy; and as he had only been born that afternoon, everything in the world was new to him.

"I want a drink!" cried the youngest geranium; and: "Tick! Tock! Tick! Don't forget to wind me!" the old clock repeated; but no one paid any attention to them.

"Your turn next!" said the Queen to Nanna. "Make a blaze, for this poor creature is nearly frozen."

So with a clatter of tiny hoofs, Nanna built up the fire, only pausing to eat a twig or two, until even Mejau was nearly roasted. But the poor snowman was worse instead of better. His twig mouth still smiled bravely, and his blue eyes remained wide open, but tears seemed to pour down his cheeks, and he was growing thinner before their very eyes.

"If you please," he said in a timid voice, "I'm – "

"Give him a drink of something hot," advised the fat teapot, and that reminded the youngest geranium, who began screaming: "I want a drink! I want a drink! I want a drink!"

"I'll be delighted to oblige with some nice warm milk," Nanna offered, so Goran milked a bowlful. But the snowman could not drink it, and the tears ran faster and faster down his face.

"If you please – " he began again, faintly.

"We must put him to bed," the Queen interrupted, with a stern look at Gustava who was sitting on her darling egg in the center of Grandmother's feather bed. "Your turn next!"

Grandmother's bed was built into the wall, like a cupboard.
It was all carved with harebells and pine-cones and kobolds and nixies. The kobolds are the elves who live in the mountain forests, and the nixies are water fairies who sit under the waterfalls playing upon their harps and making the sweetest music in the world. There was a big white feather bed on Grandmother's bed, and a big red feather bed on top of that, and two fat pillows stuffed with goose feathers. And above all this was a little shelf with two smaller feather beds and two smaller pillows, and that was Goran's bed. On dreadfully cold nights they pulled two little wooden doors shut, and there they were, quite warm and cozy – even quite stuffy, you and I might think! The doors of the bed were painted with pink tulips and red hearts, and Grandmother said it made her feel quite young and warm to look at them, and Goran said it made him feel quite young and warm too.

And Gustava the hen thought they were beautiful, so there she sat on her darling egg, and as she could never think of more than one thing at a time, she had forgotten all about the snowman, and was happily clucking this song to her egg:

"Make a wreath, I beg,
For my darling egg!

"Flowers blue as cloudless sky
When the summer sun is high,
Harebells, little cups of blue,
Holding drops of crystal dew.

"Rain-wet pinks as sweet as spice,
Lilies white as snow and ice,
Lemon-colored lilies, too,
And the flax-flower's lovely blue.

"Strawberries sweet and red and small,
And the purple monkshood tall;
Let the moon-white daisies shine,
Bring the coral columbine.

"Weave the shining buttercup,
Bind the sweet wild roses up;
Poppies, red as coals of fire,
And the speckled foxglove spire.

"And the iris blue that gleams
Knee-deep in the foamy streams.
Bring the spruce cones brown and long."

(Thus ran on Gustava's song.)

"Make a wreath, I beg,
For my darling egg!"

"Make a wreath, I beg,
For Gustava's egg,"

broke in Nanna the goat impatiently:

"Why leave the geraniums out?
Add the teapot's broken spout,
Cheese, and brown potatoes, too;
Anything at all will do.

"Feathers from the feather bed,
Goran's mittens, warm and red,
And the flower the Queen holds up,
And the cracked blue china cup.

"But the Queen has said
Kindly leave that bed!"

So Gustava had to flop off the bed with a squawk, while Goran handed her her egg, and then they put the poor snowman, what was left of him, into Grandmother's bed, and pulled the eiderdown quilts over him.

"If you please," said the snowman in a feeble whisper, "oh, if you please, I'm – "

"I know this is the right thing to do, because it is the way we always treat snowmen at the palace," broke in the Queen. To tell you the truth, she had never seen a snowman in her life before, but she would never admit that she didn't know all about everything.

The snowman looked at them with despairing sky-blue eyes, while his tears poured down, soaking Grandmother's pillow. He had tried desperately to tell them something, but they would none of them listen. Suddenly Goran knew what it was.

"I believe we're melting him," said Goran. "He needs air."

"I need air," said the snowman, his face shining with hope.

"Air?" said the Queen. "Nonsense! He's had too much air. He needs a hot brick at his feet!"

"I need air," faltered the snowman.

"Air? Nonsense!" cried the fat teapot and all her teacup daughters, hoping the Queen would hear, and take them back to the palace with her.

"I need air," sighed the snowman, and now he looked discouraged.

"Air? Brrr-rrr!" And Mejau squeezed himself under the chest of drawers, much annoyed with every one.

"I need air," breathed the snowman, looking at Goran with imploring eyes.

"Air? Mercy on us, that will mean opening the door again!" And the geraniums shivered in every leaf and petal.

But Goran had helped the poor snowman, now nearly melted away, out of bed, and was leading him to the door.

"I need – " whispered the snowman, and his voice was so faint that Goran could hardly hear it.
And there, because he was melting away so fast, his mouth fell out and lay on the floor, just a little bent twig. Poor snowman! Oh, poor snowman! He could not make a sound now – he could only look at them, so sadly, so sadly! But a little mouse peeping with bright eyes out of its hole saw what had happened, and, since Mejau was nowhere in sight, ventured to squeak:

"Oh, please, ma'am! Oh, please, sir! The poor gentleman's mouth is lying on the floor!"

So the Queen picked it up and pressed it into place again, but by mistake she put it on wrong side up, so that instead of a pleasant smile the snowman had the crossest mouth in the world, pulled far down at each corner. And what a change it made in him! Before, his voice had been a gentle whisper – now it was an angry bellow that made the teacups shiver on their shelf and the geraniums turn quite pale, and the little mouse dive back with a squeak into her hole, thinking to herself: "Well, I never!"

"Here, you!" shouted the snowman. "Get me out of here, and get me out quick. Hop along, my girl, and open the door! Your turn next!" (This was to the astonished Queen.) "Now, then, carry me out!"

"Tick! Tock! I'm feeling dreadfully run down," said the old clock.

"Tick! Tock!
Wind the clock!
Tock! Tick!
Wind it quick!

"Tick – tock" and he stopped talking.

The astonished Queen meekly threw open the door, and Goran carried the snowman into the snowy darkness. Brr-rr! It was bitter cold!

"Now bring some snow and build me up," the snowman ordered. "Leave the door open so that you can see – don't dawdle!"

The firelight from the open door shone on his blue glass eyes, and made two angry red sparks gleam in them. Goran and the Queen, Gustava and Nanna, scooped up handfuls and hoof-fuls and wing-fuls of newly fallen snow, and patted it on to the snowman until he stuck out his chest more proudly than he had done in the first place, and he was so fat that he looked as if he were wearing six white fur coats, one on top of another.
And all the time when he wasn't frightening the Queen half out of her wits by shouting: "Your turn next!" he kept muttering away to himself: "Melt me over the fire! Smother me in a feather bed! Put a hot brick at my feet!"

It was when Goran was patting a little fresh snow on the snowman's nose that he accidentally knocked his twig mouth off again. And this time it was put back right side up, so that the snowman was as smiling as he had been in the beginning. He stopped roaring. He stopped muttering. Did the fire die down? For the red sparks no longer gleamed in his gentle sky-blue eyes.

"Oh, thank you so much!" said the snowman. "You have been so kind to me! And I know that you were trying to help me in the house. Forgive me for having been so cross! Will you please forgive me?" And the snowman looked so anxiously at Goran and the Queen and Nanna and Gustava that they all answered:

"Yes, yes, of course we will! And will you please forgive us for nearly melting you?"

"And now go in, for this lovely air is cold for you, I know."

"Oh, it is bitter cold!" agreed the Queen. "Brr-rrr! It is bitter cold."

Brr-rr! It was bitter cold! Goran rubbed his eyes. Only a few red embers glowed in the fireplace. How stiff he was! He must have slept in his chair all night, but he could not tell how late it was, for the clock had stopped. He had forgotten to wind it, he remembered now. There sat the Queen in her chair, but she was just a little card again.

Then he remembered the snowman. He ran out of doors. There the snowman stood, as roly-poly as ever, with his twig mouth smiling and his sky-blue eyes wide open. He said nothing, but Goran felt they two understood each other. What a night it had been! Could it all have been a dream?

But now the night was over, and the storm was over; and, best of all, through the dim twilight he saw on the fjord far below him neighbor Skylstad's rowboat, and seated in it, wrapped in her red shawl, his own dear grandmother coming home to him.
The driver of the Dream Coach paused as he turned over the pages of the great white and gold book in which are kept the names of all those who have ridden or are to be given rides in the brightly painted coach.

"I see," he said, addressing the little angels who helped him keep these records, "I see the name of the little Chinese Emperor. And there is a cross opposite his name. Has he been naughty?" he asked. "Has he been picking the sacred lotus flowers of his honorable ancestors? Has he – ?"

"Oh, please," interrupted one of the smallest angels, "I put that cross there to remind me to tell you something about the little Emperor. You see he hasn't been naughty – not exactly – but he's made a mistake. He doesn't understand," said the smallest angel, with his eyes round and serious.

"And can I help the little Emperor understand?" asked the driver of the Dream Coach.

"Of course you can!" cried the smallest angel, beaming brightly. "It's this way. The little Chinese Emperor has a friend of mine fastened up in a cage, where he is very sad – "

"An angel in a cage?" asked the driver. "I never heard of such a thing!"

"Well, not exactly an angel, a – " But what it was, and how the driver helped the little angel's friend – that you shall hear.

The little Emperor was dreadfully bored. He yawned so that his round little face, as round and yellow as a full moon, grew quite long, and his nose wrinkled up into soft yellow creases, like cream that is being pushed back by the skimmer from the top of a bowl of milk. His slanting black eyes shut up tight, and when they opened they were so full of tears that they sparkled like blackthorn berries wet with rain.

"Oh, dear! Oh, dear!" cried his aunt, Princess Autumn Cloud. "The little Emperor is bored! What shall we do, oh, what shall we do to amuse him? For when he is bored, he very soon grows naughty, and when he is naughty – oh, dear!" And she began to cry. But then she was always crying.
When she was born her father and mother named her Bright Yellow Butterfly Floating in the Sunshine, but she cried so much that by the time she was five years old they saw that name wouldn't do at all, and changed it to Autumn Cloud Pouring Down Rain upon the Sad Gray Sea.

She cried about anything. If her lady-in-waiting brought her a bowl of tea with honeysuckle blossoms in it, she would cry because they weren't jasmine flowers. If they were jasmine, she would cry because they weren't honeysuckle. When the peach trees bloomed she would cry because that meant that spring had come, and that meant summer would soon follow, and then autumn, and then the cold winter. "And oh, how cold the wind will be then, and how fast the snow will fall!" sobbed Princess Autumn Cloud, looking through her tears at the bright pink peach blossoms. She cried because her sea-green jacket was embroidered with storks instead of bamboo trees. She cried because they brought her shark-fin soup in a bowl of green lacquer with a gold dragon twisting around it, instead of a red lacquer bowl with a silver dragon. She cried if the weather was hot. She cried if the weather was cold. And hardest of all she cried whenever the little Emperor was naughty.

Whenever she began to cry a lady-in-waiting knelt in front of her and caught her tears in a golden bowl, for it never would have done to let them run down her cheeks, like an ordinary person's tears; they would have washed such deep roads through the thick white powder on her face. Every morning Princess Autumn Cloud (and, indeed, every lady in the court of the little Emperor) covered her face with honey in which white jasmine petals had been crushed to make it smell sweet, then when she was all sticky she put on powder until her face was as white as an egg. Then she painted on very surprised-looking black eyebrows and a little mouth as red and round as a blob of sealing wax.
It looked just as if her mouth were an important letter that had to be sealed up to keep all sorts of secrets from escaping. Princess Autumn Cloud and Princess Gentle Breeze and Lady Gleaming Dragonfly and Lady Moon Seen Through the Mist and all the rest of them would have thought it as shocking to appear without paint and powder covering up their faces as they would have thought it to appear without any clothes.

So Princess Autumn Cloud leaned over as if she were making a deep bow, and let her tears fall in a golden bowl, and then, because they were royal tears, they were poured into beautiful porcelain bottles that were sealed up and placed, rows and rows and rows of them, in a room all hung with silk curtains embroidered with weeping willows.

"Oh, what shall be done to amuse the little Emperor?" sobbed Princess Autumn Cloud. "Perhaps he would like some music!" And she clapped her hands, with their long, long fingernails covered with gold fingernail protectors.

So four fat musicians, dressed in vermilion silk and wearing big horn-rimmed spectacles to show how wise they were, came and kowtowed to the little Emperor. That is, they got down on their knees, which was hard for them to do because they were so fat, and then, all together, knocked their heads on the floor nine times apiece to show their deep respect. Then one beat on a drum, boom boom, and one clashed cymbals of brass together, crash bang, and one rang little bells of green and milk-white jade, and the oldest and fattest beat with mallets up and down the back of a musical instrument carved and painted to look like a life-sized tiger with glaring eyes and sharp white teeth.

The little Emperor sprawled back in his big dragon throne under the softly waving peacock feather fans, stretched out his arms and legs, and yawned harder than ever.

"Oh! Oh! Oh! What shall be done to amuse him?" wailed Princess Autumn Cloud, bursting into tears afresh. "Can no one suggest anything?"
And although the mandarins and the court ladies thought to themselves that what they would really like to suggest for such a spoiled little boy would be to send him to bed without his supper, they none of them dared say so, but tried to look very solemn and sympathetic.

"Would the Little Old Ancestor enjoy some sweetmeats?" suggested Lady Lotus Blossom. "Old Ancestor" is what you call the Emperor if you are properly brought up, and polite, and Chinese. So gentlemen-in-waiting came and kowtowed and offered the little Emperor lacquered boxes of crystallized ginger, of sugared sunflower seeds, and of litchi nuts. But do you think he was interested? Not at all. He would not even look at them.

"The wind is blowing hard. Would it amuse the Little Old Ancestor to watch the kites fly?" asked old Lord Mighty Swishing Dragon's Tail.

The little Emperor didn't know whether it would or not. However, he couldn't be more bored than he was already, so he climbed down from his throne and went out into the windy autumn garden.

First marched the musicians, beating on drums to let every one know that the Emperor was coming. Then came the court ladies tottering along on their "golden lilies," which is what they call their tiny feet that have been bound up tightly to keep them small ever since the ladies were babies. Then the mandarins with their long pigtails and their padded silk coats whose big sleeves held fans and tobacco and bags of betel nuts and sheets of pale green and vermilion writing paper. Then Princess Autumn Cloud in a jade green gown embroidered with a hundred lilac butterflies, a lilac jacket, and pale rose-colored trousers tied with lilac ribbons. In her ears, around her arms, and on her fingers were jade and pearls, and her rose-colored shoes were trimmed with tassels of pearls and were so tiny that she could hardly hobble. In her shiny black hair she wore on one side a big peony, the petals made of mother-of-pearl and the leaves of jade.
Each petal and leaf was on a fine wire so that when she moved her head they trembled as real flowers do when the wind blows over them. On the other side were two jade butterflies that trembled too. In front of her, walking backward, went her lady-in-waiting holding the golden tear bowl, in case the Princess should suddenly begin to cry. And last of all, surrounded by his gentlemen-in-waiting, came the little Emperor, dressed from head to foot in yellow, the imperial color, so that he looked like a yellow baby duckling. And as he came every one in the palace and in the garden had to stop whatever they were doing – gossiping, teasing the royal monkeys, chewing betel nuts, or sweeping up dead leaves – and kneel down and knock their heads on the ground until he had passed.

How the wind was blowing! It sent the willow branches streaming, it wrinkled the lake water and turned the lotus leaves wrong side out, it scattered the petals of the chrysanthemums. It tossed the kites high in the air. How brightly their colors shone against the gray sky! Some were made to look like pink and yellow melons with trailing leaves, some were like warriors in vermilion, some were golden fish, others were black bats, and the biggest one of all was a great blue-green dragon. As for the little Emperor, he took one look at them and then yawned so hard that they were afraid he would dislocate his jaw.

A little brown bird the color of a dead leaf had been hopping about on the ground under the chrysanthemums looking for something for its supper, and now suddenly flew up into a willow tree and began to sing.

The little Emperor clapped his hands, and all his servants dropped on their knees and began to kowtow.

"Catch me that little brown bird with the beautiful song!" he said. He stopped yawning, and his eyes grew bright with eagerness.

"But, Little Old Ancestor, that is such a plain little bird," said his aunt timidly.
"Surely you would rather have a cockatoo as pink as a cloud at dawn, or a pair of lovebirds as green as leaves in spring – "

The rude little Emperor paid not the slightest attention to her, but stamped his foot and shouted: "Catch me that little brown bird!"

So his servants chased the poor little fluttering bird with butterfly nets. The wind whipped their bright silk skirts, and their pigtails streamed out behind, and they puffed and panted, for they were most of them very fat. And at last the bird was caught, and put in a cage trimmed with tassels of purple silk and pearls, with drinking cup and seed cup made like the halves of plums from purple amethysts on brown amber twigs with green jade leaves.

For a time the little Emperor was delighted with his new pet, and every day he carried it in its cage when he went for a walk. But it never sang, only beat against the bars of its cage, or huddled on its perch, so presently he grew tired of it, and it was hung up in its cage in a dark corner of one of the palace rooms, where he soon forgot all about it.

How could the little bird sing? It was sick for the wide blue roads of the air, for wet green rice fields where the coolies stand with bare legs, sky-blue shirts, and bamboo hats as big as umbrellas, for the yellow rivers, and the mountains bright with red lilies. How could it sing in a cage? But sometimes it tried to cry to them: "Let me out! Please, please let me out! I have never done anything to harm you! I am so unhappy I think my heart is breaking! Please let me go free!"

"What a sweet song!" everybody would say. "Run and tell the little Emperor that his bird is singing again."

After a while the little bird realized that they did not understand, and it tried no longer, but drooped, dull-eyed and silent, in its cage.

One night the little Emperor had a dream. Perhaps you won't wonder when I tell you what he had for supper.
First he had tea in a bowl of jade as round and white as the moon, heaped up with honeysuckle flowers. Then, in yellow lacquer boxes, sugared seeds, sunflower and lotus flower and watermelon seeds, boiled walnuts, and lotus buds. Then velvety golden peaches and purple plums with a bloom of silver on them. Pork cooked in eleven different ways: chopped, cold, with red beans and with white beans, with bamboo shoots, with onions, and with cherries, with eggs, with mushrooms, with cabbage, and with turnips. Ducks and chickens stuffed with pine needles and roasted. Smoked fish. Shrimps and crabs, fried together. Shark fins. Boiled birds' nests. Porridge of tiny yellow seeds like bird seed. Cakes in the shapes of seashells, fish, dragons, butterflies, and flowers. Chrysanthemum soup, steaming in a yellow bowl with a green dragon twisting around it. Not one other thing did that poor little Emperor have for his supper!

When he was so full that he couldn't hold anything more, not even one sugared watermelon seed, they took off his silk napkin embroidered with little brown monkeys eating pink and orange persimmons. He was so sleepy that he did not even stamp his feet when they washed his face and hands. Then they took off his red silk gown embroidered with gold dragons and blue clouds and lined with soft gray fur, his yellow silk shirt and his red satin shoes with their thick white soles. But he went to bed in his pale yellow pantaloons, tied around the ankles with rose-colored ribbons.

I must tell you about his bed. It was made of brick, and inside of it a small fire was built to keep the little Emperor warm. On top of this three yellow silk mattresses were placed, then silk sheets, red, yellow, green, blue, and violet, then a coverlet of yellow satin embroidered with stars.
under his head were pillows stuffed with tea leaves; and above him was a canopy of yellow silk, embroidered with a great round moon whose golden rays streamed down the yellow silk curtains drawn around him. he fell asleep, and this is what he dreamed.   the long golden rays seemed to turn into the bars of a cage. yes, he was in a huge cage! he tried frantically to get out! he beat against the bars! then he saw what looked like the roots of trees, and brown tree trunks, a grove all around the cage. but the trees moved and stepped about, and, looking up the trunks, instead of leaves he saw feathers, and still farther, sharp beaks, and then bright eyes looking at him. they were birds! what he had thought were the roots of trees were their claws, and the trunks of the trees were their legs. but what enormous birds! they were as big as men, while he was as small as a bird. "let me out!" he shouted. "don't you know i am the emperor, and every one must obey me? let me out, i say!" "ah, he is beginning to sing," said one bird to another. "not a very musical song. too shrill by far! take my advice, wring his neck and roast him. he would make a tender, juicy morsel for our supper." "please, please let me out!" "oh, let me out! please, please let me out!" cried the poor little emperor in terror. "he is singing more sweetly now," remarked one of the birds. "too loud! quite ear-splitting!" said a lady bird, fluffing out her breast feathers and lifting her wings to show how sensitive she was. "if he were mine i should pluck him. his little yellow silk trousers would line my nest so softly." "oh, please, please set me free!" "really, his song is growing quite charming! but one can't stand listening to it all day." and with a great whir and flap and rustle of wings the birds flew away and left him in his cage, alone. he called for help and threw himself against the bars until he was exhausted. 
then bruised, panting, his heart nearly breaking out of his body, he lay on the floor of the cage. finally, growing hungry and thirsty, he looked in his seed and water cups, drank a little lukewarm water, and ate a dry bread crumb. now and then birds came and looked at him. some of them tried to catch his pigtail with their beaks or claws.   next day the little emperor was thoughtful. could it be, he wondered, that a little bird's nest was as dear to it as his own bed with its rainbow coverlets and its moon and stars was to him? that a little bird liked ripe berries and cold brook water as much as he liked ripe peaches and tea with jasmine flowers? that a little bird was as frightened when he tried to catch its tail in his fingers as he was when the birds tried to catch his pigtail? and then he thought of how he had felt when the lady bird had wanted his pantaloons to line her nest, and, hot with shame, he remembered his glistening jewel-bright blue cloak made of thousands of kingfishers' feathers. it had made him miserable to think of their taking his clothes, but suppose his clothes grew on him as their feathers did on them? how would he have felt then, hearing the bird say: "i should pluck him. his little silk trousers would line my nest so softly"? he went to bed thinking about his little brown bird, and before he shut his eyes he made up his mind to set it free in the morning.   then he fell asleep, and once again he dreamed that he was in the golden cage. whir-rr! one of the great birds flew down by the cage door. with his claw he unfastened it – opened it! oh, how exciting! the little emperor tore out, so afraid he would be stopped and put back in the cage! oh, how he ran across the room and through the open door! free! he was free! tears rushed to his eyes, and his heart felt as if it would burst with happiness. but it was winter. the garden was deep in snow that was falling as if it would never stop. 
the peaches and plums were gone, and the lotus pond was frozen hard as stone. the little emperor had never been out in the snow before except when he was dressed in his warm padded clothes, with one gentleman-in-waiting carrying his porcelain stove, and another bringing tea, and a third with cakes in a box of yellow lacquer, and a fourth holding between the snowflakes and the imperial head a great, moss-green umbrella. so small and helpless in so big and cold a world, what could a little boy find to eat or drink? where could he warm himself? he ran frantically through the snow. the rose-colored ribbons that tied his pantaloons came untied and trailed behind him, and the cold snow went up his bare legs. pausing to catch his sobbing breath, he looked up to see the thick snow sliding from a pine tree branch, and jumped aside just in time to keep from being buried beneath it. then on he plunged again, growing with each step more weak and cold and hungry; stopping now and then to call for help in a quavering voice that grew feebler every time; blinking back the tears that froze on his lashes as he tried to remember that emperors must never cry; then struggling on through the blinding snow, a little boy lost and alone. then, as it began to grow dark, he saw two great lanterns shining through the snow, coming slowly nearer. perhaps his aunt and his chief gentleman-in-waiting, lord mighty swishing dragon's tail (lord dragon tail, for short) had missed him and had come with lanterns to look for him! he tried to go toward them, to call, but he was too exhausted to move or make a sound. and then, imagine his terror when he realized that the glowing green lights were not lanterns at all, but the eyes of a great crouching animal – a cat! gathering all his strength for one last desperate effort, he tried to run. 
but with a leap the cat was after him, and with a paw now rolled into a velvet ball, now unsheathing sharp curved claws, tapped him first on one side, then on the other, nearly let him go, caught him again with one bound, and with a harder blow sent him spinning into stars and darkness.   some one was shaking him. was it the cat? the little emperor opened his eyes and saw the frightened face of princess autumn cloud bending over him, as yellow as a lemon, for she had jumped up out of bed when she heard him cry out in his sleep, and there hadn't been time to put on the honey and the powder, to paint on the surprised black eyebrows or the round red mouth. "wake up, wake up, little old ancestor!" she was crying as she shook him. "you're having a bad dream!" "aren't you the cat?" asked the little emperor, who wasn't really awake yet. "certainly not, little old ancestor!" replied his aunt, rather offended. the little emperor climbed out of his bed. the room was full of the still white light that comes from snow, and looking out of the window he saw that the plum trees and the cherry trees looked as if they had blossomed in the night, the snow lay so white and light on every twig. softly the snow fell, deep, deep it lay, and the people who passed by his windows went as silently as though they were shod in white velvet. the little emperor thought of his dream, and decided that his little bird might suffer and die if he let it go free before winter was over. but he explained to the bird, and tried to make it happier. "when summer comes, you shall fly away into the sky," he told it. he brought it fruit and green leaves to peck at, talking to it gently. and the little bird seemed to understand. the dull eyes grew brighter; and though it never sang it sometimes chirped as if it were trying to say: "thank you."   
on the first night of summer when the moon lay like a great round pearl in the deep blue sea of the sky, the little emperor slept, and dreamed again that the cage door opened for him and let him go free. but oh, what happiness now, happiness almost too great for a little boy to bear. peonies were in bloom, each petal like a big seashell, and blue butterflies floated over them in the warm sunshine. half hidden in the grass the little emperor found a great purple fruit – a mulberry. how good it was! the dewy spider webs glistened like the great tinsel bridge to heaven they built for him on every birthday. how happy he was! how happy! free and safe! with the sun to warm him and the breeze to cool him; with food tumbling down from heaven or the mulberry trees, he wasn't sure which, with a crystal clear dewdrop to drink on every blade of grass. how happy he was! the lake was full of great rustling leaves and big pink lotus flowers. venturing out on one of the leaves, he paddled his feet over its edge in the gently lapping water. then, climbing into one of the pink blossoms, he lay, so happy, so happy, looking up at the blue-green dragon flies darting overhead, and rocking gently in his rosy boat. no, it was not the lotus flower that rocked him on the water. it was princess autumn cloud who was gently shaking him, and saying: "wake up, if you please, dear little old ancestor!" and hard as it is to believe, she was really smiling. the little emperor had been so good lately, and then it was such a beautiful day! he could not wait until after breakfast to let his little brown bird go free. as soon as he was dressed he ran as fast as he could to the room where the bird cage hung. pat-a-pat-pat went his little feet in their blue satin shoes, and thud, thud! puff, puff! his fat old gentlemen-in-waiting lumbered along behind him. "i've come to set you free!" he whispered, as he carried the cage with its tassels of purple and pearls out into the beautiful day. 
for one minute he wanted to cry, for he had grown to love the little bird. but he remembered again that emperors must not cry. he opened the door of the cage. "little old ancestor's bird has flown away!" cried the mandarins. "it has flown so high in the sky that we can hardly see it," the court ladies answered; and they all wished that the little emperor would stop gazing up into the sky at the little dark speck, so that they might go in and have their breakfasts. but the little emperor, the empty bird cage in his hand, still looked up. high, high in the sky! and now, really, he could no longer see it. but a thread of song dropped down to him, a silver thread of song, a golden thread of love between the hearts of a little bird and a little boy. "thank you, oh, thank you, my little emperor!" up into the sky rose the hundred horses and their great coach, until the roof of the little emperor's palace with its bright yellow tiles looked only as big as a yellow autumn leaf – as a jasmine petal – as nothing at all! and along the road of stars they galloped, while notes of music sprayed from the wheels of the coach, and, dropping to earth, gave the nightingales ideas for beautiful new songs. on through the sky and above the earth until the night was over, and at last, instead of a road, the hundred horses were galloping along a river. all along the river bank tall poplars rustled and whispered in the wind of the coach's passing, and little waves, stirred up by the horses' hoofs, slapped against the small houses that rose from the water, small pink houses and blue houses and white red-roofed houses, each with its rowboat tied to its steps. white swans and green ducks rocked on the ripples, their feathers gilded by sunshine, for it was bright day now, and the rain that had been pouring down had stopped. it was bright day, and yet no one saw the dream coach except a little french boy, whose eyes were falling shut in one little pink cottage. "philippe! philippe!" 
the driver called. "one last dream is left for you!" what was philippe's dream? that you shall hear. "hold still then, my little monkey!" "but mother," wailed philippe, "i have the soap in my eye!" "soap is it, my angel?" asked his mother, lifting his face in her two wet hands. "oh, but there is really no soap at all to speak about, just a bubble or two of suds. there!" and with the corner of her apron she wiped away the thick white lather around his eyelashes, so that philippe looked like a little boy made of snow, except for his eyes which were large and brown and filled with tears from the painful smarting. from head to ankles he was covered with a froth of soapsuds, and his feet had stirred the warm water in the bottom of the wooden tub into rainbow-tinted mounds of bubbles which grew and grew and cascaded over the sides with a tiny fizzing sound. "you are giving our young one a very thorough tubbing," remarked philippe's father. he was sitting under the narrow window of their cottage, cutting the yellow-white sprouts from a bag of potatoes which he intended to plant in the dark of the next moon. "indeed i am. i shall scrub and rub and polish until he looks like a wax image, or as pink and shining as the inside of the seashell his uncle pablôt sent him from paimpol." philippe's father held a large brown potato at arm's length, and, regarding it with his head cocked to one side, said: "very fine! yes, very fine!" "a good size," agreed his wife, looking over her shoulder, while she absently bored into the ear of her long-suffering son with a bit of soapy rag. "yes – but i was thinking rather of philippe's uncle pablôt. it is he who is very fine, a grand gentleman who carries a gold-headed cane and has traveled far – to the very borders of our beloved france, and even beyond, so i hear." "oh, very much beyond! 
he has been in every country in the world, according to the wonderful stories he tells, and the world, pierre, i understand to be of a tremendous bigness; indeed, if what i am told is the truth, it must be three or four times as big as our own country!" "is that so?" replied pierre doubtfully, starting to cut the pallid sprouts again with quick motions of his work-hardened hands. "it may all be the truth, my good wife, but i have always taken the words of pablôt with a grain of salt; i think, for that matter, that he is a little inclined to blow." "'blow'?" asked philippe from his tub. "i thought it was only the wind that could blow." but of course no one answered him, for he was only a little boy, and not expected to understand; instead, his father bent over his bag of potatoes to hide his smile, and his mother remembered that the pot-au-feu (which is a thick soup made of odds and ends and bits and scraps and almost everything you can think of mixed with water in a large pot and left on the fire to bubble sluggishly for many hours) needed stirring right away. "take care," warned her husband, "that you do not drop soap into the soup from your wet hands, for i know of nothing that gives it a more curious flavor." "just the same," said philippe's mother, turning from the hearth, her cheeks flushed rosy red by the bright, hot embers, "just the same, it is a good thing that our little one should be invited to meet such a fine gentleman. it will teach him how to say the most ordinary thing elegantly, and how to carry his head high as if he were a born dandy. philippe, repeat to your father the little speech you are to say when you meet your uncle." "good health to you, my dear and illustrated uncle! it gives — " "no, no, my pet, 'my dear and illustrious uncle,' and was there not something that you forgot?" "yes, mother. i forgot to make my bow. shall i make a new beginning?" "do so." 
whereupon philippe bent nearly double over the edge of the tub, scattering drops of water upon the floor. "good health to you, my dear and illustrious uncle. it gives me the most great pleasure to have – eugh! soap in my mouth. . . . ptu! – " "wait, then, until you are dressed in the new suit i have sewn for you," and his mother, taking an earthen jar of water from the side of the fire where it had been put to warm, poured it over his head, leaving him no longer a snow boy, but a boy made of the shiniest china you can imagine. "is that pleasant, my brave one?" "it is warm, like rain," said philippe, lifting his arms above his head. "i will not need another washing for a long, long time, will i, mother?" philippe's grandparents lived the distance of twelve fields, a small woods, three stiles, and the width of a brook from his own home. just how far that is, is hard to say. you see it makes such a difference whom you ask. ask the swallows and they will tell you airily that it is no distance at all, just a flick of the wing, and you are there. but ask the snails who live under the broad leaves of the flowering mulleins, and after pondering a long time, they will tell you that it gives them a headache to think of such a tremendous distance, that it would surely take several lifetimes to travel so far, and as for themselves, they would consider it very foolish to start out on such a dangerous adventure when there were plenty of young lettuces so close at hand! to a small boy of eight, it was quite a long journey, taken alone, particularly when he could not take the short cut by wriggling through the tangled copse for fear of tearing his new suit, or being covered with last year's burrs and barbed seeds of the undergrowth. but he reached his grandparents' house at last. it was a little house built by the side of a river, actually touching the water on one side, so that you could step out of a door, down a step, and into a rowboat. 
and there were white swans and yellow-breasted ducks with bronze-green backs swimming in the reflection of the pink walls. on the land side was a poplar tree, very tall and dressed in silvery blue leaves, standing erect like a giant soldier on guard before a toy house. once philippe's grandfather had explained to him how he could tell the time of day by the shadow this tree cast: when it struck across the chimney at the corner of the house, it was time to go into the fields; when it crossed the front door, it was time to enter therein for the midday meal, and when it pointed out toward the fields, that was a signal for grandmother to ring the great bell that would call the workers home. "and what," philippe had asked, "do you do, grandfather, when the sun is under the clouds, and there is no shadow to tell the time?" "well, then we must needs look at the clock which ticks on the mantelshelf over the fire," grandfather said with a twinkle of his old, blue eyes, eyes half hidden by the tufts of white eyebrows. although the day had commenced unusually fine, and the calm, blue sea of sky had been without an island reef or bar of cloud to wreck the golden galleon of the sun, by the time philippe had been tubbed, scrubbed, dressed in his best, had been rehearsed in his address to his uncle, kissed good-by, and given a little nosegay of pansies and lilies of the valley in a paper twist for his grandmother, and had crossed the twelve fields and picked his way carefully through the woods to avoid the sharp brambles that reached out after him with long and sinuous arms – by the time all this had come to pass, and philippe was actually in sight of his grandparents' cottage, it began to rain from a sky as heavily gray as it had been brightly blue before. it started so suddenly that philippe had to run across the last field to keep the big drops from ruining his new black velvet cap. 
the inside of the house was very dark, with only two windows, like half-closed eyes, looking out on the world. through these windows entered shafts of pale, watery light that cut blue paths in the wreaths of wood smoke creeping around the rafters. pots, pans, and kettles of burnished copper hung from hooks in the ceiling, and mirrored in tiny points the flames leaping on the hearth. it was like another world, small but complete, inside grandmother's and grandfather's house: the floor was the earth itself, trampled until it was as hard as brick, the wreaths of smoke were thin clouds flung across a dark sky where yellow and red stars winked and twinkled. at one end of the room, where grandmother and anjou, the cat, were busy preparing dinner over the bright fire, it was gay and warm: day; but at the farther end, where grandfather sat stroking his long white beard, it was dark and chilly: night. when philippe entered, he had to blink his eyes for some time before he could adjust himself to the darkness. then he handed his grandmother the bouquet he had carried so carefully, politely wishing her health and happiness. there were tears in grandmother's eyes as she bent over and kissed her grandson's pink and shining cheek, but then there were always tears in grandmother's eyes – why, philippe never could understand. did she weep because of the stinging smoke that the chimney seemed too small to carry off? or because she was sad? not sad, thought philippe, or grandmother would not be all the time smiling. "hey-o!" sang grandmother in her high little voice, dropping a tear in the yellow heart of a purple pansy. "what pretty flowers you have brought me, my philippe, and see, here is a raindrop in one of them shining as prettily as a glass bead!" philippe did not like to tell her that it was her own tear. "then it is raining out?" she asked. 
"it will make a wet home-coming for your uncle, but it is lovely, nevertheless, and if it comes down hard enough, it will make the river flow along more happily than it has for a long day. won't that be beautiful, philippe?" "yes, grandmother marianne," philippe agreed politely, and then asked: "when will my uncle pablôt be here? mother has taught me what to say when i make my bow to him, and if he is too long in coming, i am afraid that i may forget it." "he will come," said grandmother, "when he has a mind to." "and is he coming from a great distance, maybe all the way from paris?" (philippe thought that paris was the only city in the world, built on the world's very edge.) "maybe, and then maybe not," grandmother told him. "there is no telling where your uncle will come from; he is apt to blow in from any quarter." "ah, then that explains it!" remarked philippe innocently. "father said he always thought uncle pablôt was a little inclined to blow." "now did he!" grandmother was frowning and smiling at one and the same time. "have you spoken to your grandfather yet?" "i did not know that grandfather joseph was home; i did not see him," said philippe truthfully. "use your young eyes sharply and look into every corner," advised grandmother. "anjou!" she cried warningly, "you will burn your nose if you get too close to that roasting duck." philippe gazed into the farthest corner of the room where he saw two dim spots of white glowing like snow in the night; he had to advance quite near before he could be sure that what he saw was the long white hair and the long white beard of grandfather. "good day, grandfather joseph," said philippe, bowing low before the old man who sat huddled in a chair, the arms of which were worn shiny by the grip of thin fingers. "'good day'? a very bad day, grandson. though i no longer hear nor see as i used to, i can feel that it is raining. tell me, is it raining?" 
"yes, grandfather," replied philippe from the top of a churn where he had climbed to look out of the small window at the river. "it is falling so hard that the raindrops are bouncing from the surface of the water." remembering what his grandmother had told him, he added, "it will make the river flow along more happily than it has for a long time, and that will be very beautiful!" "horrible!" said grandfather with a sigh that was almost too soft to be heard. "it makes me feel weak clear through," he continued. "give me the sharp cold and the sparkling frost when the river freezes so hard that it cracks and roars like a cannon. when i was a boy, i used to spread my cape and let the wind push me across the slippery ice — this soft weather will be the end of me!" there were three people living in the house that philippe visited; besides grandmother and grandfather, there was little avril, their grandniece, and therefore philippe's cousin. avril was a child of tender beauty, younger than philippe, quite a baby in the sight of eyes that were eight long years old. avril was very shy, so shy that she had hidden under the table when philippe had entered the door, and it was not until he had paid his respects to grandmother and grandfather that he saw her there, peeking out at him like a flower from the dark shadow of a garden wall. "hello, my little cousin," said philippe with a grand and grown-up air. "would you like to play a very important game with me that i have just thought of?" avril laughed her pleasure. it was a most excellent game, so philippe thought. he was king, enthroned on the churn, and avril was his slave, and had to bring him anything he might request, with the penalty of having her head chopped off if she failed. king philippe had just commanded the brightest star in the heavens to be brought him, when there was all at once a loud rapping and rattling of the wooden latch. 
the door flew open before anyone had time to answer, and a gust of chilly wind swept through the room, breaking the weaving rings of smoke, making the fire leap up the chimney, causing grandmother in her excitement to drop the wooden spoon into the pudding, and even waving grandfather's beard like a white flag. "behold! i am here!" cried uncle pablôt from the threshold, withdrawing his right arm from the voluminous folds of his cape and making a magnificent sweeping gesture ending with his fingertips being pressed lightly against his expanded chest. "so i see," said grandfather in a thin, complaining voice from his dark corner. "close the door," he pleaded, tucking the end of his waving beard into his blue smock. "close the door – the rain makes me feel very weak – " but no one paid the least bit of attention to him. grandmother ran forward with squeaking noises of delight, throwing her arms around the newcomer, draping him with a link of sausage, which she had forgotten to put down in her hurry, in the manner of a necklace. avril shyly retreated beneath the table again, and philippe tried desperately to remember the pretty sentences with which he was to address the great man. he was in the very middle of trying to remember when his grandmother took him by the hand. "and here is your little nephew," said grandmother, "who has come all by himself a great distance to welcome you." philippe stared dumbly, wishing that he had had the presence of mind to slip under the table with avril. "come! what do you say to your uncle, philippe?" asked grandmother. "i forget what i say," answered philippe miserably, "but i am very glad to see you, my – my — ah! now it comes to me!" and he started again: "good health to you, my dear and illustrious uncle. it gives me the most — " "fiddlesticks!" interposed uncle pablôt, laughing. " – the most great pleasure to welcome you, and — " "yes, yes – " said uncle pablôt, cutting him short again. "but what do you say to this?" 
and he reached into the folds of his cape and handed philippe something small and shining. "what is it?" asked philippe. "ho! that is better. at least you did not learn that by heart, did you, my boy? here, i will show you." whereupon he put the bright present to his lips and blew a shrill blast that rattled the pots and made grandmother drop her sausages in alarm. (she dusted them very carefully before putting them in the hot pan that was waiting to cook them.) "a whistle!" shouted philippe, dancing with joy. then he ducked under the table to show his beautiful new present to avril. "and here is a present for the other little one," said uncle pablôt, handing the shyly smiling girl a toy spade with a bright green handle and a wreath of early spring flowers painted on the tiny blade. what a feast they had in honor of their distinguished guest! "i suppose," said grandmother to uncle pablôt, "that you have traveled a great distance since last you visited us?" "yes, yes," said uncle pablôt, flourishing the wing of a duck. "i have breezed about a bit, here, there, and everywhere. would you like to hear a little about my travels?" "oh, please!" begged philippe, although the question had not been addressed to him. "now there is india," commenced uncle pablôt, "a very hot country, but as gay as a circus — " and over the roast duck he told them many things in his soft and flowing voice, of elephants, their enormous bodies painted brilliantly in curlicues, circles, and zigzags, swaying through narrow streets like clumsy ships of the land, ridden by dark-skinned potentates robed in ivory satin and scarlet brocades, wearing precious jewels more sparkling than broken bits of colored glass . . . of softly stepping and treacherous tigers prowling in deep jungles, of lions and leopards, crouching panthers and laughing hyenas and all manner of beasts . . . 
of birds with emerald crests, sapphire wings, breasts of flaming orange, long, sweeping tails and screaming falsetto voices that seemed to shatter the air into sharp and hurtling splinters . . . of gorilla fathers with so terrible a power in their long arms that they could uproot a tree as easily as one would pick a dandelion, and gorilla mothers holding babies to their breasts as gently and lovingly as any human mothers . . . of chattering pink monkeys shouting in derisive laughter from their hiding places in the tree tops at passers-by. leaving the wildness of the tropic forests, he told them of queer-shaped temples and pagodas, lifting to the blue of the sky, made of stone carved as beautifully as lace, where lived the leering and laughing gods of the heathen. by the time grandmother had put the crisp green lettuces on the table, uncle pablôt had carried his little audience to far-away china and, without so much as a "by your leave," into the gardens of mandarins and emperors where jasmine filled the air with sweetness, and rose and white peonies bowed their heavy heads around the lily ponds. far away and far away they flew on uncle pablôt's winged words: over snowy mountains tinted with the pink and lavender radiance of the dawn, through the fiery furnace of desert sands where haughty camels plodded their weary course to the beat of arab drum and the mystical rhythm of arab song, up broad rivers where crocodiles basked in the sun . . . past cities with towers and turrets, through the courtyards of palace and castle, into the riot of crowded markets with their laughter and shouting, buying and selling, into a land where the streets were water, where the buildings had wings that turned and turned, where the men and boys wore tight little jackets of velvet fastened with brass buttons, and trousers as big as two sacks sewn together. 
"oh, yes," said uncle pablôt, "and they all wear wooden shoes so that they can walk safely across the streets of water without sinking." "remarkable!" said grandmother. "if true," said grandfather, but he spoke so low that every one thought that he was merely choking, and paid no attention to him. "more!" pleaded philippe. "and i was in england the other day," continued uncle pablôt, who needed little urging, "where i visited the royal family. that is nothing," he said, in answer to a look of proud astonishment from grandmother. "i have a great many acquaintances in all walks of life. once i mussed up the hair of a prince and ran off with the parasol of a duchess, just by way of a little joke, you know. did i ever tell you — " but if he ever had, he told them again, and at such length that, though the dinner had come to an end, and grandmother had cleared away the dishes and given anjou a saucer of milk and a bone, he was still telling them this and other monstrous adventures in his quick, easy voice. how thrilling it all was to philippe. it seemed to him that the gay words flew from his uncle's mouth and over his head like flocks of wild birds. some of them were quite ordinary little words, as sparrows are ordinary little birds, but others were long and strange like the queer birds his uncle had told him about. or again – this tale of other lands and peoples was like music to which the crackling of the fire and the drip, drip of the rain outside made a soothing accompaniment. he tried hard to keep his eyes and ears wide open, but, to tell the truth, he had eaten very heartily of grandmother's delicious dinner, and that, with the darkness of the room, the lullaby singsong of his uncle's voice, and the soft purring of anjou, made him heavy-headed and in danger of falling into sleep at any moment. voices came to him through the fog of smoke, sounding far, far away. 
he heard his uncle say, "but you, grandfather joseph, you should go about the world a bit and see for yourself these wonderful things." "i am content," replied a soft, old voice. "yes, you are content to stay where you are put, or at best to drift around a bit, eh?" and then the old man saying, "i drift – i drift – i drift – "   maybe it was then that philippe went to sleep, or, on the other hand, maybe it was then that philippe overcame his drowsiness and woke up to a new interest in things. certainly, strange and exciting happenings took place in rapid succession. it started with grandmother going to the window where she stood on tiptoe and looked out at the river. "oh," she cried, and her voice was younger and happier than philippe had ever heard it before. "oh! the river has grown up; never before have i seen my darling child so strong and beautiful. and how he runs and laughs! in another minute he will be at the sill of the window. i will open the door and invite him in." "no, no!" cried grandfather weakly, jumping up from the chair and staring wildly about the room. "it will be the end of me." "but think, joseph, how my child will love it! he will splash and laugh – why, even now i can see him creeping under the door in his eagerness." without a word, gathering the baby avril into his arms, grandfather dashed out of the other door; and they watched him running across the fields and meadows, his white hair and beard flying back over his shoulders in the mad speed of his flight. "now there is a strange man," grandmother said to uncle pablôt. pablôt only whistled softly and looked wise. "one would think," continued grandmother, "that he would be grateful for a nice trip on the back of my child. he will come to my way of thinking all in good time." she looked around her critically. "the fire!" she said. "how fiercely the fire is burning! it quite makes me boil with anger; i won't have it, i hate it!" 
and she ran upon it, scattering the embers with a great hissing sound. "there now!" turning again to pablôt. "do you think that the room is in readiness for my son? shall i open the floodgates and let him in?" "how about anjou?" asked uncle pablôt. "anjou can ride in his basket." "and philippe?" "the little cradle by the bed that avril sleeps in – an excellent boat! jump in, philippe, run and jump in, for we are going to make a voyage. i – let me see - this tub will suit me nicely; i have a fondness for tubs; and you, pablôt, can run along the bank. into your basket, anjou, quick! you look strangely unhappy, my pet. are we all ready? enter, my son!" grandmother unlatched the door facing on the river; it flew back against the wall with a crash. what happened next was very confused in the mind of the startled philippe. there was a great, swishing roar as the water of the river, swollen to unheard-of heights by the hard rain, leaped and tumbled into the room in masses and billows of silver foam. tightly he clutched the rail of the crib as his strange boat tossed and turned and ducked and pitched and bobbed and spun around and around in the currents and cross currents and boiling waves. at last, when the water in the room had reached the level of the water outside, and therefore had suddenly quieted, he dared to look about him. uncle pablôt had disappeared; grandmother was calmly sitting in her tub with a rapturous smile on her old face. "so impulsive!" she remarked conversationally to philippe. "my son, the river," she explained. "he is so very glad to see me. did you notice how he jumped and romped when i let him in? it made me very proud! but we must not waste our time floating idly here; there is to be a very important reunion of my whole family." 
and with that they were caught in an eddying current and swept out of the door: anjou, with tail as erect as a mast; philippe, wide-eyed and silent in his cradle boat; and grandmother in her wooden tub, pleased and proud, the happy tears streaming down her cheeks. once you get over being frightened, it is really great good fun, so philippe found, to go racing along a swift-flowing river in a little boat that nods to each passing wave. they passed tall reeds and rushes that waved gracefully to them from the shore, weeping willow trees, their wands gray-green and crystal with rain, gently caressing the surface of the water, emerald fields patterned with yellow flowers shining wet, mallows by the river's edge, white with glowing hearts of deep pink, deep pink with hearts of white. sometimes swiftly, sometimes slowly, but always and ever onward, "grandmother's son" carried them on his strong back; now through lowlands, and now between high banks of dark chocolaty mud, where, from the black portals of burrows and tunnels, the bright eyes of water animals gazed at them in astonishment. yes, it was thoroughly delightful, but it was puzzling to philippe; there were many things that he did not understand. he decided that he would ask grandmother, who was floating close to him in her wooden tub. "grandmother marianne," he called to her, "why do you call the river your son?" "look at me, philippe. have i not changed?" asked grandmother. "i am no longer grandmother marianne," she said, "i am grandmother rain! . . . without me there would be no puddles, no pools, no lakes, no ponds, no rills and runs and rivulets, no brooks and streams, no waterfalls, no rivers – their lovely and happy voices would die from the land. they are all my children. and if it were not for my children, there would be no ocean." "what is the ocean?" asked philippe, who had never been to the seashore.
"that, my philippe," said grandmother rain, "is where i was born, and where all my children return. it is a beautiful place! and how your uncle loves to play there – a decidedly worthy man, your uncle, though at times a trifle flighty." they passed a grove of trees, their bright branches reaching out over the water. "how fresh and strong they look," cried grandmother rain. "they are always glad to see me, i can assure you. oh, i have strange adventures, philippe. sometimes i am buried in the soft, brown earth, and you would think that would be the end of me, now wouldn't you? but no! i creep back into the air through trunks of trees, through blades of grass and stalks of flowers, and through the shoots of young corn. i trickle through an endless maze of underground passages into deep wells, or until i find a place where i can come bubbling up to the surface. every living thing needs me and every living thing loves me, except sometimes little boys kept in from play – eh?" philippe felt guilty, and was about to apologize when grandmother rain put him at rest. "that is not quite true. there are others," she said, "who do a good deal of complaining about me; they say that i am an old spoil-sport just because i try to make myself pleasant at their parties and picnics. but if i were to leave them forever – " she made an odd little gesture of despair. "would you like me to sing you a song?" she asked unexpectedly. "it might serve to pass the time." "please," said philippe, who was getting a bit tired of floating aimlessly and never arriving anywhere. "very well." and this is what she sang:

grandmother rain's song

"pitapat, pitapat, drip, drip, drip –
pitapat, pitapat, slip, slip, slip,
over roofs and windows, over garden walls,
over fields and meadows – the gray rain falls!
"i fall upon the countryside, upon the city square;
i tap the silk umbrellas that are opened everywhere;
i wash away the dirt and dust that cloud the flower's face;
i fall on royal palaces, and in the market place –
for no one is too regal, and no one is too low
to receive the crystal blessing that i scatter as i go.
i freshen up the thirsty world, and make it clean and green,
the grass grows tall, and flowers bloom wherever i have been.
although i lie in gutters, and slip through hole and crack,
and sometimes have my little joke by running down your back,
i make small children happy, for on me they may float
their shiny bright, their red and white, their little new toy boat.
so think not that because i fall like tears i may be sad:
the sparkle in each drop of me is proof that i am glad!

"pitapat, pitapat, drip, drip, drip –
pitapat, pita –

"ah! there he comes!" cried grandmother rain excitedly, forgetting to finish her song. "who?" asked philippe, curious, like most boys. "who indeed?" replied grandmother. "look up the shore. now we will have some sport!" philippe did as he was told, and saw a small figure hurrying toward them at a great pace. as the figure drew nearer, he saw that it was uncle pablôt, running along the edge of the water and stirring it to frenzy. "hold tight!" warned grandmother from her tub. philippe needed no warning, for as uncle pablôt drew opposite to them, waves broke the smooth surface of the river and tossed his little crib about like a cockle shell. he could see, as he was twisted about, that the rising waves were creeping over the edge of grandmother rain's tub and swamping it – it was sinking lower and lower. "be careful, grandmother!" he cried frantically. "this is what i call delightful!" replied that remarkable woman, tipping her tub until the water ran in and filled it with a deep gurgle. as she sank into the river she clapped her hands, whereupon there was a blinding flash and a peal of sharp thunder.
a bigger wave than the rest washed philippe, cradle and all, upon the shore. he was too dazed to understand for some moments just what had happened, but at length he spied grandmother, already at some distance, riding the waves and swimming strongly with the current. "now i shall be in high time for the reunion!" she called back to him, the growing space between them making her voice very faint. poor, dear grandmother! whatever would become of her? she would drown most surely. but perhaps uncle pablôt, who had raced on down the bank, could save her – but no! he was strolling back; he had given up. philippe ran to meet his uncle with tears in his eyes. "hello! so there you are, safe and sound and high and dry, eh? you see, i veered about; i thought we might take a little stroll together," explained uncle pablôt airily. "save her!" pleaded philippe tearfully. "who? grandmother rain? be calm, my boy, she is quite in her element." "but unless we do something, the river will carry her far away!" "which is exactly what she wishes. she will be back again, never worry. she makes these little trips to the ocean quite frequently. look, philippe, the sun is coming out! the sun and grandmother rain do not get along well together; he always hides as soon as she has made her appearance, and when she has gone, he goes about mopping up the whole countryside." uncle pablôt's calmness gave philippe some comfort. he was grown-up, and therefore wise; perhaps he knew the meaning of these strange things. "do they always disagree, grandmother and the sun?" asked philippe. "not always. sometimes, though rarely, you may see them together, and then they hang a rainbow flag across the sky as a sign of their truce. but come! we have much land to cover, we must hurry a little more." "where are we going, uncle pablôt?" "what a silly question! how am i to know? 
i go wherever it pleases me at the moment, sometimes for days in one direction, and at other times this way and that quicker than you can think. and please do not call me uncle pablôt; i am your uncle wind." philippe felt rebuked; he trotted silently beside the tall, lean fellow, thinking him a not very pleasant companion. he would gladly have walked home alone, but he had no idea where he was, and he was afraid to be left alone. at length his uncle wind spoke to him: "do not think unkindly of me, little philippe. if i was cross to you, it is because i am given to complaining at times, but i am a good fellow at heart. with grandmother rain's help, i keep the world a nice clean place to live in. and do you know, philippe, the best part of it is that i am such a humorous fellow; i am all the time playing the most amusing jokes! why – once i mussed up the hair of a prince and ran off with the parasol of a duchess. . . . there now! i think i told you that once before, didn't i? but where and when it is quite past my ability to remember. well, that gives you the idea. hats? there is nothing quite so much fun as hats! snatch a hat and run, drop it until its owner is just about to pick it up, and then snatch and run again. there's nothing that draws such a large and appreciative audience as the hat trick. though, of course, umbrellas are great sport – but i need grandmother rain to help me with that trick. maybe you think i am only a practical joker? not at all! do you remember that day you were sick, and your head felt as if it were on fire? do you remember how i came and cooled it for you, and played with the tassels of the curtains until sundown to keep you amused? if i get a bit angry and rough at times, i am gentleness itself at others, and particularly am i loved in places that are hot and stuffy and saddened by ill health. i am one of the housekeepers of the earth, and i must be everlastingly at it to make things comfortable and shipshape. oh! 
the dirt and the dust, the smoke and the foul smells people throw into my face in the cities, little dreaming that if it were not for me the earth would be unfit to live on. but i am strong without end and do my best. yes, philippe, i may bluster and blow and play tricks but for all that i am a very excellent fellow. and i am a traveler and adventurer over land and sea, such as one has never read of in the most thrilling books! no one has seen more of the world than i. i have seen strange parts of the world, looked behind walls of ice, where no living thing has ever been. only the other day – " on and on talked uncle wind, and on and on traveled the two together. over more meadows they went than philippe thought could possibly be crowded into the world, and past innumerable herds of cows and flocks of sheep. it had grown warm with the coming of the sun, and often would workers in the fields spread wide their arms and speak words of welcome as they passed. the grass and the yellow wheat bowed as they stepped lightly over them and even the trees nodded in friendly recognition. birds, stretching their wings, took rides on uncle wind's shoulders. at times uncle wind would go quite fast, so that philippe had to run, and again, so slowly that they were scarcely creeping, until, after a long time, they stopped quite still on the top of a high hill. "i often lie down and rest at sunset," explained uncle wind in a voice that was scarcely above a whisper. far, far away, philippe saw, through a twilight haze of gold, what he had never seen before: the deep ocean where grandmother rain was holding her family reunion. the crimson sun was rolling over the blue edge of the world into its sparkling heart. he sat down in the crevice of a rock and thought long and wonderingly of the things that had come to pass that day, and he tried to see, in the land that was spread like a map before his eyes, the red roof and clump of trees that would be his own home. 
he did so long to be with his darling mother again! and very soon it would be dark. . . . silver stars began to shine in a pale green sky. . . . golden stars were lit in a sky of deepening purple. . . . more and more stars in a sky dark blue. night had suddenly closed in around him, and he was frightened and started to cry. "uncle pablôt – i mean, uncle wind – i want to go home!" but where was uncle wind? there was no answer, no sound, and search as carefully as he would, philippe could find no trace of him. it was as if he had utterly vanished, which, indeed, he had, for the time being. what was poor philippe to do? the hilltop stones that surrounded him took menacing forms; he was sure that he saw the shining eyes, green and glowing, of prowling beasts. he summoned all his courage and bravely started to walk – where? downhill, for he remembered that grandmother rain had told him, as they floated along the river, that that was the only way any sensible person would ever care to travel. besides, when you were on the top of a hill, unless you stayed there, there was no other choice. where else he was bound for he had no idea, but anything would be better than the unbroken stillness of the haunted rocks. how far he walked, at times ran, through the dark night, falling over roots and tearing his way through scratching brambles, pursued by unseen terrors of darkness, before he came to the old man, he had no idea. at first he was timid of approaching the bent figure sitting huddled on a stump, so dim under the starlight. but loneliness and the longing for companionship overcame his fear. "please, sir," he said, drawing slowly closer, "please, sir, could you tell me – grandfather joseph! grandfather joseph!" – and he flung his arms around grandfather's neck, the hot tears streaming down his cheeks. but how cold grandfather was! the touch of grandfather's face against philippe's burned like ice. "watch out!" 
said grandfather sharply, "you are so insufferably warm you will melt me, if i do not succeed in freezing you first. and, young philippe, be careful the names you call people. look carefully at me again; do you not know me?" philippe was doubtful. surely it was grandfather joseph, and yet – grandfather had never been so cold, nor so strange in his behavior. did he know him? "yes – no," answered philippe, not being able to decide. "yes, snow, that is right! i am grandfather snow." "it's very upsetting!" remarked the puzzled boy. "is it?" replied grandfather snow coldly. "but i may stay here with you, grandfather? i was so frightened alone in the black night. i was out walking with uncle wind, and – and he seemed to disappear, and then i lost my way." "you may stay if you do not come too close. so uncle wind vanished, did he? your uncle wind is a fickle, changeable, unreliable fellow, but he has a will of his own and will turn up in time. i am very dependent on uncle wind; i can do nothing but lie around, without him." "he is very nice, isn't he, grandfather?" ventured philippe. "aye, sometimes," replied the old man. "he was all gentleness this afternoon, but wait until you see him to-night! if i'm not mistaken in the signs, he will be in a fury. then watch out for yourself, young impudence! when uncle wind is in a fury, he is a hard master and drives every one before him with a stinging lash. you'll see!" since grandfather was in such a chilling mood, philippe did not bother to talk with him, but sat at a little distance, thankful for companionship, and watched the winking of the stars, which, even as he watched them, sparkled and went out like sparks in the soot of a chimney, or as if a black curtain were being drawn across the black sky. after a long while, after the last star had vanished and the noiseless quiet of the night hemmed them in like an invisible wall, grandfather snow sprang to his feet and stood tensely listening with his hand to his ear. 
"what is it, grandfather?" philippe asked, alarmed. "hush! . . . hush! . . . ah – now i hear it plainly!" philippe put his hand to his ear as he had seen grandfather do, and listened intently, holding his breath that he should not miss the tiniest sound. nothing. yes – a far away and tiny sound. it sounded to philippe like the little gasping noises he had made when he was learning to whistle, before ever he had been able to attempt a tune, the noise of air breathed in and out through rounded lips. "he is coming!" grandfather told him in a voice trembling with excitement. "and he is perfectly furious; seldom have i heard him whistle more beautifully. listen!" philippe no longer had to strain to hear the far-away whistling; it was growing nearer every second, and as it approached it became high and shrill. "is that my uncle wind making all that noise, grandfather?" "aye!" said grandfather shortly, crouching close to the ground in the position of a runner about to start a race. "i shall run and meet him," cried philippe, delighted at the idea of seeing his old friend again, who was now evidently very close. he had not run twelve steps when something spinning through the dark ran squarely into him, bowled him off his feet and rolled him along the ground as easily as if he had been made of thistledown. it was a terrific struggle he had to gain his feet again, and even when he had, and would have liked to stop to catch his breath and dust off the new suit his mother had made for him, he found himself being shoved roughly from behind. "faster! faster! faster!" screamed a voice in his very ears. and if he tried to slow up ever so little, "rush! rush! rush!" the voice would command. "faster! faster! faster!" "please, uncle wind – oh, please, uncle wind – i can't go any faster – my legs aren't long enough!" "faster!" screamed uncle wind in anger, prodding poor philippe so hard that he was fairly lifted off his feet. 
above them, and all around them, there was the noise of tearing leaves and crashing branches, there was the groaning of tortured trees as uncle wind lashed them with his invisible cat-o'-nine-tails. dim shadows streaked past like flying beasts. "rush!" shrieked uncle wind, "r-u-shshsh-shshshshsh–" something cold and stinging struck across philippe's face, and it was then, in spite of his breathless panic at the mad flight, that he wanted to burst out laughing, for he saw that grandfather, who had all this time been running at his side, was going so fast that he was actually losing his whiskers! "your whiskers, grandfather! the wind is tearing your whiskers off!" but the old man, who was speeding along more lightly than any rabbit, paid no attention. in truth, it seemed no great calamity, for as fast as uncle wind would tear off his whiskers and his hair and scatter them on the ground, new would grow immediately – and so thick and fast they grew that the ground became covered with white. but whiskers were not cold and wet when they brushed across one's face: they scratched and tickled, as philippe had found out on occasions when he had kissed grandfather. this was snow! grandfather snow was spreading his white blanket over the earth. all night long uncle wind and grandfather snow sped across the dark country like mad men, and when little philippe grew too tired to stand it any longer, uncle wind would lift him up in his strong arms and carry him. and the snow grew deep, and eddied and twisted into great mounds and high drifts with sharp, curved edges like the thin crests of waves – so that in the cold, pale light of the coming morning, the world looked like a beautiful dream cut from marble. and with the coming of dawn, uncle wind suddenly stopped driving them. "that was a great run!" said uncle wind. "it has left me completely out of puff. philippe, my boy, i hope it hasn't tired you too much? grandfather snow, didn't i drive you beautifully?" "aye." 
"and you have not done so badly. it will be some days before we are in shape for another run like that. well, good-by! i think i shall do my famous vanishing act again. how about you, grandfather?" "not quite yet. i shall linger on a bit. there are a few touches, a few light touches i neglected in my hurry last night that i would like to attend to this morning. you see," he explained to philippe when uncle wind had vanished, "i am quite an artist. some people think i am very little use and only good for lying around. not at all! i make excellent snowballs, for one thing, and uncle wind is not the only member of our family who has knocked a hat off! but of course i would never tell you of such a thing if i did not know that you were too much of a gentleman to use me for such a purpose. no, no, my child, i work as hard for the things that grow, in my own way, as grandmother rain does in hers, but chiefly i delight to make things beautiful. see that naked gray tree? how bare and cold it looks! it needs a few high lights that i could not stop to give it last night – " whereupon grandfather snow touched each branch and twig with a powdering from his white beard, and the twig and branch of every tree around, until the whole world above the level of the ground was a tracery of gleaming, fairy lace. "not bad, philippe, not a bit bad! can you see anything else that needs touching up? speak out before it is too late, for my supply is nearly exhausted." "please, grandfather, it is beautiful, but i am cold and tired, and i would like to go where it is warm." "of course you would, my child. look! below us in the valley it is green, and even from here one can see that there are flowers. run on down — " "i don't want to run; i'm tired of running!" "well, well," laughed grandfather, "walk then, if you wish. after a while, when the warm sun comes to view my handiwork, i, too, will slip down into the valley, but i shall not stop there. 
no, i have a long way to travel before i join grandmother rain once more." philippe turned slowly away, touched by the purity and peace that surrounded him. "good-by . . . good-by . . . " said grandfather snow gently, very, very gently! as philippe reached the green valley below, the sun broke through a thin veil of silver clouds. it had risen brilliant and white from its all night dip into the distant ocean, and its cheering warmth was gratefully received by the tired adventurer. a fragrance, mingled of evergreens and flowers, herbs and damp earth, filled the motionless air, and from the end of the grass-grown lane, along which he walked lazily, there was an amazing confusion of sounds, as if thousands of birds were singing at one time. the lane led him to a gate, and on the gate was a sign which said: philippe's garden "i must have been away a long time for my garden to have grown so big," philippe told himself. standing inside the gate was little avril in a new green smock prettily embroidered with wreaths and garlands of flowers. she curtsied so low before him that the hem of her dress brushed the young shoots of grass; and she smiled at him tenderly. "and who are you?" asked philippe warily. "why, philippe! don't you know me?" "yes, i think i do; but i thought that i knew grandmother marianne and she turned out to be grandmother rain. uncle pablôt, it seems, was not uncle pablôt at all, but uncle wind. and my grandfather joseph is grandfather snow and lies just above us on the hill. it is very puzzling; can i be sure that you have not changed your name?" "i have quite a number of names," explained the little girl. "some call me spring, some call me flora, but you may call me avril. avril: april – it is all the same. would you like me to show you your garden? it is very lovely, and i have worked hard to get it all in readiness for your coming." "you?" "yes. i am your gardener, but i have had a lot of help. every one has been so kind! 
uncle wind helped me plant it, grandfather snow prepared the ground in fine shape, and grandmother rain has been here often and often, giving my little plant babies their bottles. it has been a lot of worry and care, philippe," avril told him in a curiously grown-up voice, "but when you see my beautiful children, i am sure that you will think that it was worth while. "now here," she said, smiling happily and taking him by the hand, "are some of my first babies: the snowdrops, named in honor of their godfather, grandfather snow. and here – " from bed to bed, from border to border, from flower to flower they wandered, looking at the flowers, breathing the sweet perfume, and watching the clumsy but clever bees, out marketing for honey which they would pay for with golden pollen dust carried on their velvet backs. there were soft-petaled pansies as dark as midnight, as purple as a queen's dress, as yellow as the sun, and sometimes of many colors curiously combined to form impish and laughing faces. there were lilies of the valley and violets, stonecrop and candytuft, peonies and roses, larkspur and bridal wreath – so many flowers that philippe could not remember their names, but gave himself up to the enjoyment of their soft and gorgeous colors, their delicate and magnificent shapes. farther along the maze of paths where he was led by avril, the flowers were still furled in tight buds, and at length they came to beds where the dark loam was scarcely more than broken by lifting sprouts. "these are for later," explained his fairylike guide. "and these?" asked philippe, when they had entered into a new part of the garden where straight rows of green-growing things were marked off in beds of checkerboard design. "these funny little fellows," avril told him, "are not as beautiful and proud as the flowers; they hold their heads less high, but they are all extremely worthy and one would find it difficult to get along without them."
"they look good enough to eat," said philippe, who was beginning to feel very empty. "they are," said avril. "and is all this garden mine?" asked philippe. "yes," answered the little girl, curtsying again before him, and added: "all yours — king philippe!" "oh, you mustn't call me 'king,' that is, when we're not playing games, you know," philippe warned her, rather shocked. "kings are grand people with treasures hidden away in strong chests, and they wear crowns of gold and have thousands of servants. i know, because i have read all about them in a book which my mother gave to me. i am a farmer's son, and can never be so wonderful a person as a king." his companion looked at him very thoughtfully, and at last spoke: "you are a king, philippe. sun, moon, and stars shine down upon your head a crown; the whole earth is yours, the great strong chest of hidden treasures. from the time the first small star hung like a lonely spark in space, your servants have been preparing for you a kingdom, the kingdom of earth, than which there is only one greater. and that kingdom, too, will be yours some day if you rule wisely and well in this, and are kind, and strong – and gentle." "it may be true," said philippe, rather bewildered by the wonderful things he was hearing. "but i am quite sure that i have no servants; why – little though i am, even i must help my father in the fields." "we are all your servants. is it not true, grandmother rain?" a shower suddenly passed over the garden, decking the flowers in crystal splendor, and from a small cloud overhead philippe could distinctly hear the voice of grandmother: "yes. i have worked for philippe's father and his grandfathers from the very beginning of things, and i hope to work for his children and his children's children for time evermore. do not think badly of me, philippe, if i do not come and go just to your liking, for i am very busy, with much important work to attend to." "is it not true, grandfather snow?" "aye, so it is!"
came a voice from the bright hill beyond the garden wall. "is it not true, uncle wind?" "well, well! i am just in time," remarked uncle wind, sauntering up the garden path, the flowers nodding to him as he passed. he had cast aside his great cloak, but even then looked a little warm. "just wandered up from the southlands," he continued. "yes, my little darling, it is true enough what you are telling philippe, but of course we are not to be bossed about like ordinary servants; we serve and yet we keep our independence; we have been at our various tasks so long that we know exactly what to do without being told, and if we seem a little lazy at times, or a little too enthusiastic at others, remember that we may have our own very good reasons. yes, indeed," he went on, commencing to bluster a bit, "there are often reasons hidden in the strangest things we do. did i ever tell you how once i mussed up the hair of a prince and ran off with the parasol of a duchess – " "the wind is capable of being a little monotonous at times," avril whispered into philippe's ear, but he could hardly hear her, for the garden was being filled with other voices, coming from here, there, and everywhere – from the grass, and the flowers, and the vegetables, and the trees, from the stones, and even from the brown earth itself, and they all were saying in their own way, the one thing: "we serve!" "please listen to us a moment," pleaded the fragile voices of the flowers. "we serve too, though many consider us too delicate and concerned about our own looks to be of much use. but do not forget us, philippe! do not forget us when you are grown up and your mind is crowded with worries and cares and a lot of things that will seem more important to you than they really are. keep a place for us in your mind and heart, and we will repay you in our mysterious way a hundredfold and more. 
do as we ask; treasure beauty, purity, and truth – for though you may love us now, you will not understand the full importance of our message until you have grown up. do not forget – " "the flowers are very talkative to-day," remarked one little lettuce to another. "the flattery of the bees has quite turned their heads," agreed a radish who was notably sharp, whereupon some of the more sensitive flowers who had overheard blushed deeply. but philippe heard none of this chatter of the vegetables, for it seemed that the whole world, the ox and the ass, the horse and the cow, the tame beasts of the fields and the wild beasts of the spaces beyond, the fox and the rabbit, the mouse and the beetle, the creatures that crawled and the creatures that ran, the cricket and the grasshopper and the inhabitants of air and ocean, the little hills and high hills, the valleys and forests, the voice of water through the land, sky and earth – all, all were joining in a great, droning chant: "we serve – we serve – we serve – " "what utter nonsense!" shouted a little bird saucily, flying from the low branches of a tulip tree. "i serve no one; i just have lots of fun, and i'm going to have an exciting fly – and that's something little boys can't do, for they haven't even any pin feathers!" the cocky way the little bird flapped her wings and tossed her head made philippe double up with laughter. "see!" said the little rebel's mate, flying close. "you have made the king laugh, so your empty boasting has broken like a bubble, for laughter is one of the greatest services in the world! and as for going on your wild flight, have you forgotten our pretty blue eggs in their soft brown nest?" "i am a king!" said philippe in a daze of wonderment. "my darling avril, tell me what i can do to show my gratitude to all my servants." "they love nothing better than that you use them, philippe. use them wisely and well, and not only for yourself – but for others." 
and gentle spring kissed him upon the lips, filling his heart with love and happiness. "it is high time," said philippe's mother to philippe's father, "that our little one was back. soon it will be dark." she went to the doorway and gazed across the fields. "here comes pablôt," she called back into the room, "and he is carrying the child in his arms." "sh-h-h-h-h!" breathed uncle pablôt, drawing close. "take your son gently into your arms; he has been sleeping bravely all the way from his grandparents'. and here," said uncle pablôt, "is his little silver whistle, by which i hope that he will remember me when he wakes up and finds me gone."

about this edition: illustrations may differ in size and location from those in the original book. edited by mary mark ockerbloom

futurearch, or the future of archives...

monday, september

this blog is no longer being updated but you will find posts on some of our digital archives work here: http://blogs.bodleian.ox.ac.uk/archivesandmanuscripts/category/activity/digital-archives/

posted by susan thomas at : no comments:

thursday, october

born digital: guidance for donors, dealers, and archival repositories

today clir published a report which is designed to provide guidance on the acquisition of archives in a digital world. the report provides recommendations for donors and dealers, and for repository staff, based on the experiences of archivists and curators at ten repositories in the uk and us, including the bodleian. you can read it here: http://www.clir.org/pubs/reports/pub

posted by susan thomas at : no comments: labels: acquisitions, dealers, donors, guidance, scoping, sensitivity review, transfers

thursday, january

digital preservation: what i wish i knew before i started

the digital preservation coalition (dpc) and archives and records association event ‘digital preservation: what i wish i knew before i started, ’ took place at birkbeck college, london on january .
a half-day conference, it brought together a group of leading specialists in the field to discuss the challenges of digital collections. william kilbride kicked off events with his presentation ‘what’s the problem with digital preservation’. he looked at the traditional - or in his words "bleak" - approach that is too often characterised by data loss. william suggested we need to create new approaches, such as understanding the actual potential and value of output; data loss is not the issue if there is no practical case for keeping or digitising material. some key challenges facing digital archivists were also outlined, and it was argued that impediments such as obsolescence issues and storage media failure are a problem bigger than any one institution, and that collaboration across the profession is paramount. helen hockx-yu discussed how the british library is collaborating with other institutions to archive websites of historical and cultural importance through the uk web archive. interestingly, web archiving at the british library is now a distinct business unit with a team of eight people. like william, helen also emphasised how useful it is to share experiences and work together, both internally and externally. next, dave thompson, digital curator at the wellcome library, stepped up with a lively presentation entitled ‘so you want to go digital’. for dave, it is “not all glamour, metadata and preservation events”, which he illustrated with an example of his diary for the week. he then looked at the planning side of digital preservation, arguing that if digital preservation is going to work, not only are we required to be creative, but we need to be sure what we are doing is sustainable. dave highlighted some key lessons from his career thus far:

1. we must be willing to embrace change.
2. data preservation is not solely an exercise in technology but requires engagement with data and consumers.
3. little things we do every day in the workplace are essential to efficient digital preservation, including backup, planning, it infrastructure, maintenance and virus checking.
4. it needs to be easy to do and within our control, otherwise the end product is not preservation.
5. continued training is essential so we can make the right decisions in appraisal, arrangement, context, description and preservation.
6. we must understand copyright and access.

patricia sleeman, digital archivist at the university of london computer centre, then highlighted a selection of practical skills that should underpin how we move forward with digital preservation. for instance, she stressed that information without context is meaningless and has little value without the appropriate metadata. like the other speakers, she suggested planning is paramount, and before we start a project we must look forward and learn about how we will finish it. as such, project management is an essential tool, including the ability to understand budgets. adrian brown from the parliamentary archives continued with his presentation 'a day in the life of a digital archivist'. his talk was a real eye-opener on just how busy and varied the role is. a typical day for adrian might involve talking to information owners about possible transfers, ingesting and cataloguing new records into the digital repository, web archiving, providing demos to various groups, drafting preservation policies and developing future requirements such as building software, software testing and preservation planning. no room to be bored here! like dave thompson, adrian noted that while there are more routine tasks such as answering emails and endless meetings, the rewards from being involved in a new and emerging discipline far outweigh the more mundane moments. we then heard from simon rooks from the bbc multi-media archive who described the varied roles at his work (i think some of the audience were feeling quite envious here!).
in keeping with the theme of the day, simon reflected on his career path. originally trained as a librarian, he argued that he would have benefited immensely as a digital archivist if he had learnt the key functions of an archivist’s role early on. he emphasised how the same archival principles (intake, appraisal and selection, cataloguing, access etc.) underpin our practices, whether records are paper or digital, and whether we are in archives or records management. these basic functions help to manage many of the issues concerning digital content. simon added that the oais functional model is an approach that has encouraged multi-disciplinary team-work amongst those working at the bbc. after some coffee there followed a q&a session, which proved lively and engaging. a lot of ground was covered, including how appropriate it is to distinguish 'digital archivists' from 'archivists'. we also looked at issues of cost modelling, and it was suggested that while we need to articulate budgets better, we should perhaps be less obsessed with costs and focus on the actual benefits and return on investment from projects. there was then some debate about what students should expect from undertaking the professional course. most agreed that it is simply not enough to have the professional qualification, and that continually acquiring new skill sets is essential. a highly enjoyable afternoon then, with some thought-provoking presentations, which were less about the techie side of digital preservation, and more a valuable lesson on the planning and strategies involved in managing digital assets. communications, continued learning and project planning were central themes of the day; importantly, we should be seeking to build something that will have value and worth.
posted by anonymous at : no comments:

tuesday, november

transcribe at the archive

i do worry from time to time that textual analogue records will come to suffer from their lack of searchability when compared with their born-digital peers. for those records that have been digitised, crowd-sourced transcription could be an answer. a rather neat example of just that is the archive platform from the national archives of australia. archive is a pilot from naa's labs which allows anyone to contribute to the transcription of records. to get started they have chosen a selection of records from their brisbane office which are 'known to be popular'. not too many of them just yet, but at this stage i guess they're just trying to prove the concept works. all the items have been ocr-ed, and users can choose to improve or overwrite the results from the ocr process. there are lots of nice features here, including the ability to choose documents by a difficulty rating (easy, medium or hard) or by type (a description of the series by the looks of it). the competitive may be inspired by the presence of a leader board, while the more collaborative may appreciate the ability to do as much as you can, and leave the transcription for someone else to finish up later. you can register for access to some features, but you don't have to. very nice.

posted by susan thomas at : no comments: labels: crowdsourcing, searchability, transcription

friday, october

atlas of digital damages

an atlas of digital damages has been created on flickr, which will provide a handy resource for illustrating where digital preservation has failed. perhaps 'failed' is a little strong; in some cases the imperfection may be an acceptable trade-off. a nice, and useful, idea. contribute here.

posted by susan thomas at : no comments: labels: corruption, damage

saturday, october

dayofdigitalarchives

yesterday was day of digital archives ! (and yes, i'm a little late posting...)
this 'day' was initiated last year to encourage those working with digital archives to use social media to raise awareness of digital archives: "by collectively documenting what we do, we will be answering questions like: what are digital archives? who uses them? how are they created and managed? why are they important?" . so in that spirit, here is a whizz through my week. coincidentally not only does this week include the day of digital archives but it's also the week that the digital preservation coalition (or dpc) celebrated its th birthday. on monday afternoon i went to the reception at the house of lords to celebrate that landmark anniversary. a lovely event, during which the shortlist for the three digital preservation awards was announced. it's great to see three award categories this time around, including one that takes a longer view: 'the most outstanding contribution to digital preservation in the last decade'. that's quite an accolade. on the train journey home from the awards i found some quiet time to review a guidance document on the subject of acquiring born-digital materials. there is something about being on a train that puts my brain in the right mode for this kind of work. nearing its final form, this guidance is the result of a collaboration between colleagues from a handful of archive repositories. the document will be out for further review before too long, and if we've been successful in our work it should prove helpful to creators, donors, dealers and repositories. part of tuesday i spent reviewing oral history guidance drafted by a colleague to support the efforts of oxford medical alumni in recording interviews with significant figures in the world of oxford medicine. oral histories come to us in both analogue and digital formats these days, and we try to digitise the former as and when we can. 
the development of the guidance is in the context of our saving oxford medicine initiative to capture important sources for the recent history of medicine in oxford. one of the core activities of this initiative is survey work, and it is notable that many archives surveyed include plenty of digital material. web archiving is another element of the 'capturing' work that the saving oxford medicine team has been doing, and you can see what has been archived to-date via archive-it, our web archiving service provider. much of wednesday morning was given over to a meeting of our building committee, which had very little to do with digital archives! in the afternoon, however, we were pleased to welcome visitors from mit - nancy mcgovern and kari smith. i find visits like these are one of the most important ways of sharing information, experiences and know-how, and as always i got a lot out of it. i hope nancy and kari did too! that same afternoon, colleagues returned from a trip to london to collect another tranche of a personal archive. i'm not sure if this instalment contains much in the way of digital material, but previous ones have included hundreds of floppies and optical media, some zip discs and two hard disks. also arriving on wednesday, some digital library records courtesy of our newly retired executive secretary; these supplement materials uploaded to beam (our digital archives repository) last week. on thursday, i found some time to work with developer carl wilson on our spruce-funded project. becky nielsen (our recent trainee, now studying at glasgow) kicked off this short project with carl, following on from her collaboration with peter may at a spruce mashup in glasgow. i'm picking up some of the latter stages of testing and feedback work now becky's started her studies. the development process has been an agile one with lots of chat and testing. 
i've found this very productive - it's motivating to see things evolving, and to be able to provide feedback early and often. for now you can see what's going on at github here, but this link will likely change once we settle on a name that's more useful than 'spruce-beam' (doesn't tell you much, does it?! something to do with trees...) one of the primary aims of this tool is to facilitate collection analysis, so we know better what our holdings are in terms of format and content. we expect that it will be useful to others, and there will be more info on it available soon. friday was more spruce work with carl, among other things. also a few meetings today - one around funding and service models for digital archiving, and a meeting of the bodleian's e-legal deposit group (where my special interest is web archiving). the curious can read more about e-legal deposit at the dcms website. one fun thing that came out of the day was that the saving oxford medicine team decided to participate in a women in science wikipedia editathon. this will be hosted by the radcliffe science library on october as part of a series of 'engage' events on social media organised by the bodleian and the university's computing services. it's fascinating to contemplate how the range and content of wikipedia articles change over time, something a web archive would facilitate perhaps. for more on working with digital archives, go take a look at the great posts at the day of digital archives blog!

posted by susan thomas at : no comments: labels: acquisition, collection analysis, dayofdigarc, doda , dpc, mashup, spruce, webarchiving

friday, june

sprucing up the tikafileidentifier

as it's international archives day tomorrow, i thought it would be nice to quickly share some news of a project we are working on, which should help us (and others!) to carry out digital preservation work a little bit more efficiently.
following the spruce mashup i attended in april, we are very pleased to be one of the organizations granted a spruce project funding award, which will allow us to 'spruce' up the tikafileidentifier tool. (paul has written more about these funding awards on the opf site.) tikafileidentifier is the tool which was developed at the mashup to address a problem several of us were having extracting metadata from batches of files, in our case within iso images. due to the nature of the mashup event the tool is still a bit rough around the edges, and this funding will allow us to improve on it. we aim to create a user interface and a simpler install process, and carry out performance improvements. plus, if resources allow, we hope to scope some further functionality improvements. this is really great news, as with the improvements that this funding allows us to make, the tikafileidentifier will provide us with better metadata for our digital files more efficiently than our current system of manually checking each file in a disk image. hopefully the simpler user interface and other improvements means that other repositories will want to make use of it as well; i certainly think it will be very useful! posted by rebecca nielsen at : no comments: labels: metadata, spruce, tikafileidentifier friday, april spruce mashup: th- th april earlier this week i attended a day mashup event in glasgow, organised as part of the spruce project.  spruce aims to enable higher education institutions to address preservation gaps and articulate the business case of digital preservation, and the mashup serves as a way to bring practitioners and developers together to work on these problems. practitioners took along a collection which they were having issues with, and were paired off with a developer who could work on a tool to provide a solution.  
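as a rough illustration of the kind of batch metadata extraction the tikafileidentifier is aimed at, here is a minimal python sketch: it walks a directory, asks a metadata parser for each file's details, and writes one csv row per file. this is an assumption-laden sketch, not the real tool: the parser function is injectable (the real tool calls apache tika, which needs a java runtime), and the metadata field names are invented for illustration.

```python
# Illustrative sketch only: walk a directory, get metadata per file,
# write one CSV row per file. The `parse` callable stands in for an
# Apache Tika wrapper; the field names below are assumptions.
import csv
import os


def collect_metadata(directory, parse):
    """Return one metadata dict per file under `directory`.

    `parse` maps a file path to a metadata dict (e.g. a wrapper
    around Apache Tika's parser). Injectable so the walking/CSV
    logic can be demonstrated without a Tika installation.
    """
    rows = []
    for root, _dirs, files in os.walk(directory):
        for name in sorted(files):
            path = os.path.join(root, name)
            meta = parse(path)
            rows.append({
                "path": path,
                "content_type": meta.get("Content-Type", ""),
                "created": meta.get("dcterms:created", ""),
                "modified": meta.get("dcterms:modified", ""),
            })
    return rows


def write_csv(rows, out_path):
    """Write the per-file rows to a CSV spreadsheet."""
    fields = ["path", "content_type", "created", "modified"]
    with open(out_path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)
```

the attraction of this shape is that the slow, fragile part (the tika call) is isolated from the bookkeeping, which is roughly why a mashup-built tool like this can be "sprucedup" incrementally.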
day
after some short presentations on the purpose of spruce and the aims of the mashup, the practitioners presented some lightning talks on our collections and problems. these included dealing with email attachments, preserving content off facebook, software emulation, black areas in scanned images, and identifying file formats with incorrect extensions, amongst others. i took along some disk images, as we find it very time-consuming to find out date ranges, file types and content of the files in the disk image, and we wanted a more efficient way to get this metadata. more information on the collections and issues presented can be found at the wiki. after a short break for coffee (and excellent cakes and biscuits) we were sorted into small groups of collection owners and developers to discuss our issues in more detail. in my group this led to conversations about natural language processing, and the possibilities of using predefined subjects to identify files as being about a particular topic, which we thought could be really helpful, but somewhat impossible to create in a couple of days! we were then allocated our developers. as there were a few of us with problems with file identification, we were assigned to the same developer, peter may from the bl. the day ended with a short presentation from william kilbride on the value of digital collections and neil beagrie's benefits framework.

day
the developers were packed off to another room to work on coding, while we collection owners started to look into the business case for digital preservation. we used beagrie’s framework to consider the three dimensions of benefits (direct or indirect, near- or long-term, and internal or external), as they apply to our institutions. when we reported back, it was interesting to see how different organisations benefit in different ways. we also looked at various stakeholders and how important or influential they are to digital preservation.
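as an aside, the 'predefined subjects' idea our group talked about can be caricatured as keyword lists matched against text extracted from files. this is purely a toy sketch of the concept (nothing like this was built at the event, and the subject labels and keywords below are invented); real natural language processing approaches would be far more robust.

```python
# Toy illustration of tagging text with predefined subjects via keyword
# lists. The subjects and keywords are invented examples only.
SUBJECTS = {
    "correspondence": {"dear", "yours sincerely", "regards"},
    "finance": {"invoice", "budget", "payment"},
}


def tag_subjects(text, subjects=SUBJECTS):
    """Return the set of subject labels whose keywords appear in `text`."""
    lowered = text.lower()
    return {label for label, words in subjects.items()
            if any(w in lowered for w in words)}
```

even something this crude shows why we thought the idea was appealing but not buildable well in a couple of days: the hard part is not the matching, it's curating subject vocabularies that generalise.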
write-ups of these sessions are also available at the wiki. the developers came back at several points throughout the day to share their progress with us, and by lunchtime the first solution had been found! the first steps to solving our problem were being made; peter had found a program, apache tika, which can parse a file and extract metadata (it can also identify the content type of files with incorrect extensions), and had written a script so that it could work through a directory of files and output the information into a csv spreadsheet. this was a really promising start, especially due to the amount of metadata that could potentially be extracted (provided it exists within the file), and the ability to identify file types with incorrect extensions.

day
we had another catch-up with the developers and their overnight progress. peter had written a script that took the information from the csv file and summarised it into one row, so that it fits into the spreadsheets we use at beam. unfortunately, mounting the iso image to check it with apache tika was slightly more complicated than anticipated, so our disk images couldn't be checked this way without further work. while the developers set about finalizing their solutions, we continued to work on the business case, doing a skills gap analysis to consider whether our institutions had the skills and resources to carry out digital preservation. reporting back, we had a very interesting discussion on skills gaps within the broader archives sector, and the need to provide digital preservation training to students as well as existing professionals. we then had to prepare an ‘elevator pitch’ for those occasions when we find ourselves in a lift with senior management, which neatly brought together all the things we had discussed, as we had to explain the specific benefits of digital preservation to our institution and our goals in about a minute.
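the 'summarise into one row' step described above might look something like the following sketch: collapse the per-file csv rows into a single collection-level row giving a date range and a tally of content types. again this is an assumption-laden illustration, not the actual spruce script; the field names and output columns are invented, and the real beam spreadsheet columns will differ.

```python
# Sketch of collapsing per-file metadata rows (as produced by a Tika
# batch script) into one collection-level summary row. Field names
# are illustrative assumptions, not the real tool's columns.
from collections import Counter


def summarise(rows):
    """Reduce per-file rows to one row for a collection spreadsheet."""
    dates = sorted(r["created"] for r in rows if r.get("created"))
    types = Counter(r.get("content_type", "") for r in rows)
    return {
        "file_count": len(rows),
        "earliest": dates[0] if dates else "",
        "latest": dates[-1] if dates else "",
        "content_types": "; ".join(
            f"{t or 'unknown'} ({n})" for t, n in sorted(types.items())
        ),
    }
```

sorting iso-style date strings lexicographically gives the right earliest/latest ordering, which is why a simple `sorted()` is enough here.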
to wrap up, the developers presented their solutions, which solved many of the problems we had arrived with. a last-minute breakthrough in mounting iso images using wincdemu and running scripts on them meant that we are able to use the tika script on our disk images. however, because we were so short on time, there are still some small problems that need addressing. i'm really happy with our solution, and i was very impressed by all the developers and how much they were able to get done in such a short space of time. i felt that this event was a very useful way to get thinking about the business case for what we do, and to see what other people within the sector are doing and what problems they are facing. it was also really helpful as a non-techie to talk with developers and get an idea of what it is possible to build tools to do (and get them made!). i would definitely recommend this type of event – in fact, i’d love to go along again if i get the opportunity!

posted by rebecca nielsen at : comments:

monday, march

media recognition: dv part

dvcam (encoding)
type: digital videotape cassette encoding
introduced:
active: yes, but few new camcorders are being produced.
cessation: -
capacity: minutes (large), minutes (minidv).
compatibility: dvcam is an enhancement of the widely adopted dv format, and uses the same encoding. cassettes recorded in dvcam format can be played back in dvcam vtrs (video tape recorders), newer dv vtrs (made after the introduction of dvcam), and dvcpro vtrs, as long as the correct settings are specified (this resamples the signal to : : ). dvcam can also be played back in compatible hdv players.
users: professional / industrial.
file systems: -
common manufacturers: sony, ikegami.

dvcam is sony’s enhancement of the dv format for the professional market. dvcam uses the same encoding as dv, although it records ‘locked’ rather than ‘unlocked’ audio. it also differs from dv as it has a track width of microns and a tape speed of . mm/sec to make it more robust. any dv cassette can contain dvcam format video, but some are sold with dvcam branding on them.

recognition
dvcam labelled cassettes come in large ( . x x . mm) or minidv ( x x . mm) sizes. tape width is ¼”. large cassettes are used in editing and recording decks, while the smaller cassettes are used in camcorders. they are marked with the dvcam logo, usually in the upper-right hand corner.

hdv (encoding)
type: digital videotape cassette encoding
introduced:
active: yes, although industry experts do not expect many new hdv products.
cessation: -
capacity: hour (minidv), up to . hours (large)
compatibility: video is recorded in the popular mpeg- video format. files can be transferred to computers without loss of quality using an ieee connection. there are two types of hdv, hdv p and hdv , which are not cross-compatible. hdv can be played back in hdv vtrs. these are often able to support other formats such as dv and dvcam.
users: amateur / professional
file systems: -
common manufacturers: format developed by jvc, sony, canon and sharp.

unlike the other dv enhancements, hdv uses mpeg- compression rather than dv encoding. any dv cassette can contain hdv format video, but some are sold with hdv branding on them. there are two different types of hdv: hdv p (hd , made by jvc) and hdv (hd , made by sony and canon). hdv devices are not generally compatible with hdv p devices. the type of hdv used is not always identified on the cassette itself, as it depends on the camcorder used rather than the cassette.

recognition
hdv is a tape-only format which can be recorded on normal dv cassettes. some minidv cassettes with lower dropout rates are indicated as being for hdv, either with text or the hdv logo. these are not essential for recording hdv video.
posted by rebecca nielsen at : no comments: labels: digital video, dvcam, hdv, media recognition, video

media recognition: dv part

dv (encoding)
type: digital videotape cassette encoding
introduced:
active: yes, but tapeless formats such as mpeg- , mpeg- and mpeg- are becoming more popular.
cessation: -
capacity: minidv cassettes can hold up to / minutes sp/lp. medium cassette size can hold up to . / . hrs sp/lp. file sizes can be up to gb per minutes of recording.
compatibility: dv format is widely adopted. cassettes recorded in the dv format can be played back on dvcam, dvcpro and hdv replay devices. however, lp recordings cannot be played back in these machines.
users: dv is aimed at a consumer market – may also be used by ‘prosumer’ film makers.
file systems: -
common manufacturers: a consortium of over manufacturers including sony, panasonic, jvc, canon, and sharp.

dv has a track width of microns and a tape speed of . mm/sec. it can be found on any type of dv cassette, regardless of branding, although most commonly it is the format used on minidv cassettes.

recognition
dv cassettes are usually found in the small size, known as minidv. medium size ( . × . × . mm) dv cassettes are also available, although these are not as popular as minidv. dv cassettes are labelled with the dv logo.

dvcpro (encoding)
type: digital videotape cassette encoding
introduced: (dvcpro), (dvcpro ), (dvcpro hd)
active: yes, but few new camcorders are being produced.
cessation: -
capacity: minutes (large), minutes (medium).
compatibility: dvcpro is an enhancement of the widely adopted dv format, and uses the same encoding. cassettes recorded in dvcpro format can be played back only in dvcpro video tape recorders (vtrs) and some dvcam vtrs.
users: professional / industrial; designed for electronic news gathering
file systems: -
common manufacturers: panasonic, also philips, ikegami and hitachi.

dvcpro is panasonic’s enhancement of the dv format, which is aimed at a professional market.
dvcpro uses the same encoding as dv, but it features ‘locked’ audio, and uses : : sampling instead of : : . it has an micron track width, and a tape speed of . mm/sec, which makes it more robust. dvcpro uses metal particle (mp) tape rather than metal evaporated (me) tape to improve durability. dvcpro and dvcpro hd are further developments of dvcpro, which use the equivalent of or dv codecs in parallel to increase the video data rate. any dv cassette can contain dvcpro format video, but some are sold with dvcpro branding on them.

recognition
dvcpro branded cassettes come in medium ( . × . × . mm) or large ( × × . mm) cassette sizes. the medium size is for use in camcorders, and the large size in editing and recording decks. dvcpro and dvcpro hd branded cassettes are extra-large cassettes ( x x . mm). tape width is ¼”. dvcpro labelled cassettes have different coloured tape doors depending on their type; dvcpro has a yellow tape door, dvcpro has a blue tape door, and dvcpro hd has a red tape door. images of dvcpro cassettes are available at the panasonic website.

posted by rebecca nielsen at : no comments: labels: digital video, dv, dvcpro, media recognition, video

media recognition: dv part

dv can be used to refer to both a digital tape format, and a codec for digital video. dv tape usually carries video encoded with the dv codec, although it can hold any type of data. the dv format was developed in the mid s by a consortium of video manufacturers, including sony, jvc and panasonic, and quickly became the de facto standard for home video production after introduction in . videos are recorded in .dv or .dif formats, or wrapped in an avi, quicktime or mxf container. these can be easily transferred to a computer with no loss of data over an ieee (fire wire) connection. dv tape is ¼ inch ( . mm) wide. dv cassettes come in four different sizes: small, also known as minidv ( x x . mm), medium ( . × . × . mm), large ( . x x . mm), and extra-large ( x x . mm).
minidv is the most popular cassette size. dv cassettes can be encoded with one of four formats: dv, dvcam, dvcpro, or hdv. dv is the original encoding, and is used in consumer devices. dvcpro and dvcam were developed by panasonic and sony respectively as enhancements of dv, and are aimed at a professional market. the basic encoding algorithm is the same as with dv, but a higher track width ( and microns versus dv’s micron track width) and faster tape speed mean that these formats are more robust and better suited to professional users. hdv is a high-definition variant, aimed at professionals and consumers, which uses mpeg- compression rather than the dv format. depending on the recording device, any of the four dv encodings can be recorded on any size of dv cassette. however, due to different recording speeds, the formats are not always backwards compatible. a cassette recorded in an enhanced format, such as hdv, dvcam or dvcpro, will not play back on a standard dv player. also, as they are supported by different companies, there are some issues with playing back a dvcpro cassette on dvcam equipment, and vice versa. although all dv cassette sizes can record any format of dv, some are marketed specifically as being of a certain type, e.g. dvcam. the guide below looks at some of the most common varieties of dv cassette that might be encountered, and the encodings that may be used with them. it is important to remember that any type of encoding may be found on any kind of cassette, depending on what system the video was recorded on.

minidv (cassette)
type: digital videotape cassette
introduced:
active: yes, but is being replaced in popularity by hard disk and flash memory recording. at the international consumer electronics show no camcorders were presented which record on tape.
cessation: -
capacity: up to minutes sp / minutes lp, depending on the tape used; / minutes sp/lp is standard. this can also depend on the encoding used (see further entries).
File sizes: large, roughly 13 GB per hour of DV-encoded recording.
Compatibility: the DV file format is widely adopted; requires a FireWire (IEEE 1394) port for best transfer.
Users: consumer and 'prosumer' film makers, some professionals.
Common manufacturers: a consortium of manufacturers including Sony, Panasonic, JVC, Canon, and Sharp.

MiniDV refers to the size of the cassette; as noted above, it can come with any encoding, though as a consumer format these cassettes generally use DV encoding. DVCAM and HDV cassettes also come in MiniDV size. MiniDV is the most popular DV cassette, and is used for consumer and semi-professional ('prosumer') recordings due to its high quality.

Recognition: these cassettes are the small cassette size. Tape width is ¼". They carry the MiniDV logo.

Digital Preservation: What I Wish I Knew Before I Started

Last week I attended a student conference, hosted by the Digital Preservation Coalition, on what digital preservation professionals wished they had known before they started. The event covered a great deal of the challenges faced by those involved in digital preservation, and the skills required to deal with them. The similarities between traditional archiving and digital preservation were highlighted at the beginning of the afternoon, when Sarah Higgins translated terms from the OAIS model into more traditional 'archive speak'. Dave Thompson also emphasised this connection, arguing that digital data 'is just a new kind of paper', and that trained archivists already have most of the skills needed for digital preservation. Digital preservation was shown to be a human rather than a technical challenge. Adrian Brown argued that much of the preservation process (the 'boring stuff') can be automated.
Dave Thompson stated that many of the technical issues of digital preservation, such as migration, have been solved, and that the challenge we now face is to retain the context and significance of the data. The point made throughout the afternoon was that you don't need to be a computer expert in order to carry out effective digital preservation.

The urgency of intervention was another key lesson of the afternoon. As William Kilbride put it: digital preservation won't do itself, won't go away, and we shouldn't wait for perfection before we begin to act. Access to data in the future is not guaranteed without input now, and digital data is particularly intolerant of gaps in preservation. Andrew Fetherstone added to this argument, noting that doing something is (usually) better than doing nothing, and that even if you are not in a position to carry out the whole preservation process, it is better to follow the guidelines as far as you can than to wait and create a backlog.

The scale of digital preservation was another point illustrated throughout the afternoon. William Kilbride suggested that the days of manual processing are over, due to the sheer amount of digital data being created. He argued that the ability to process this data is more important to the future of digital preservation than the risks of obsolescence. The impossibility of preserving all of this data was illustrated by Helen Hockx-Yu, who offered the statistic that the UK Web Archive and The National Archives' web archive combined have archived only a small fraction of UK websites. Adrian Brown also pointed out that as we move towards dynamic, individualised content on the web, we must decide exactly what the information is that we are trying to preserve. During the Q&A session, it was argued that the scale of digital data means that we have to accept that we can't preserve everything, that not everything needs to be preserved, and that there will be data loss.
The importance of collaboration was a theme repeated by many speakers. Collaboration between institutions at a local, national and even international level was encouraged, as by sharing solutions to problems and implementing common standards we can make the task of digital preservation easier.

This is only a selection of the points covered in a very engaging afternoon of discussion. Overall, the event showed that, despite the scale of the task, digital preservation needn't be a frightening prospect, as archivists already have many of the necessary skills. The DPC have uploaded the slides used during the event, and the event was also live-tweeted using the hashtag #dpc_wiwik, if you are interested in finding out more.

What is 'The Future of the Past of the Web'?

'The Future of the Past of the Web', Digital Preservation Coalition workshop, British Library, October. By Chrissie Webb and Liz McCarthy.

In his keynote address to this event – organised by the Digital Preservation Coalition, the Joint Information Systems Committee and the British Library – Herbert Van de Sompel described the purpose of web archiving as combating the internet's 'perpetual now'. Stressing the importance to researchers of establishing the 'temporal context' of publications and information, he explained how the framework of his Memento project uses a 'timegate', implemented via web plugins, to show what a resource was like at a particular date in the past. There is a danger, however, that not enough is being archived to provide the temporal context; for instance, although DOIs provide stable documents, the resources they link to may disappear ('link rot').
The Memento project's Firefox plugin uses a sliding timeline (just below the Google search box) to let users choose an archived date.

A session on using web archives picked up on the theme of web continuity in a presentation by The National Archives on the UK Government Web Archive, where a redirection solution using open source software helps tackle the problems that occur when content is moved or removed and broken links result. Current projects are looking at secure web archiving, capturing internal (e.g. intranet) sources, social media capture, and a semantic search tool that helps to tag 'unstructured' material. In a presentation that reinforced the reason for the day's 'use and impact' theme, Eric Meyer of the Oxford Internet Institute wondered whether web archives were in danger of becoming the 'dusty archives' of the future, contrasting their lack of use with the mass digitisation of older records to make them accessible. Is this due to a lack of engagement with researchers, their lack of confidence with the material, or the lingering feeling that a URL is not a 'real' source? Archivists need to interrupt the momentum of 'learned' academic behaviour, engaging researchers with new online material and developing archival resources in ways that are relevant to real research – for instance, by helping set up mechanisms for researchers to trigger archiving activity around events or interests, or making more use of server logs to help them understand use of content and web traffic.

One of the themes of the second session, on emerging trends, was the shift from a 'page by page' approach to the concept of 'data mining' and large-scale data analysis. Some of the work being done in this area is key to addressing the concerns of Eric Meyer's presentation; it has meant working with researchers to determine what kinds and sources of data they could really use in their work.
Representatives of the UK Web Archive and the Internet Archive described their innovations in this field, including visualisation and interactive tools. Archiving social networks was also a major theme, and Wim Peters outlined the challenges of the ARCOMEM project, a collaboration between Sheffield and Hanover universities that is tackling the problems of archiving 'community memory' through the social web, confronting extremely diverse and volatile content of varying quality for which future demand is uncertain. Richard Davis of the University of London Computer Centre spoke about the BlogForever project, a multi-partner initiative to preserve blogs, while Mark Williamson of Hanzo Archives spoke about web archiving from a commercial perspective, noting that companies are very interested in preserving the research opportunities online information offers.

The final panel session raised the issue of the changing face of the internet, as blogs replace personal websites and social media rather than discrete pages are used to create records of events. The notion of 'web pages' may eventually disappear, and web archivists must be prepared to manage the dispersed data that will take (and is taking) their place. Other points discussed included the need for advocacy and better articulation of the demand for web archiving (proposed campaign: 'Preserve!: are you saving your digital stuff?'), duplication and deduplication of content, the use of automated selection for archiving, and the question of standards.

Posted by Liz McCarthy. Labels: future of the past of the web, webarchives, workshop

What's the futurearch blog? A place for sharing items of interest to those curating hybrid archives & manuscripts. Legacy computer bits wanted!
At the Bodleian Electronic Archives and Manuscripts project (BEAM) we are always on the lookout for older computers, disk drives, technical manuals and software that can help us recover digital archives. If you have any such stuff that you would be willing to donate, please contact susan.thomas@bodleian.ox.ac.uk. Examples of items on our wish list include an Apple Macintosh Classic II computer and a Wang PC series machine, as well as myriad legacy operating-system and word-processing software.

Handy links: Bodleian Electronic Archives & Manuscripts (BEAM), Bodleian Library, Digital Preservation Coalition, Oxford University.
About me: Susan Thomas.

The Digital Librarian – information. organization. access.

Libraries and the state of the internet, by jaf. Posted in digital libraries. Mary Meeker presented her internet trends report earlier this month. If you want a better understanding of how tech and the tech industry are evolving, you should watch her talk and read her slides. This year's talk was fairly … (read more)

Meaningful web metrics, by jaf. Posted in web metrics. This article from Wired magazine is a must-read if you are interested in more impactful metrics for your library's web site.
At MPOE, we are scaling up our need for in-house web product expertise, but regardless of how much we … (read more)

Site migrated, by jaf. Just a quick note – digitallibrarian.org has been migrated to a new server. You may see a few quirks here and there, but things should be mostly in good shape. If you notice anything major, send me a challah. Really. … (read more)

The new iPad, by jaf. Posted in apple, hardware, ipad. I decided that it was time to upgrade my original iPad, so I pre-ordered a new iPad, which arrived this past Friday. After a few days, here are my initial thoughts / observations: compared to the original iPad, the new … (read more)

SITS meeting – Geneva, by jaf. Posted in conferences, digital libraries, workshops. Back in June I attended the SITS (Scholarly Infrastructure Technical Summit) meeting, held in conjunction with the OAI workshop and sponsored by JISC and the Digital Library Federation. This meeting, held in lovely Geneva, Switzerland, brought together library technologists … (read more) Tagged with: digital libraries, dlf, sits

David Lewis' presentation on collections futures, by jaf. Posted in ebooks, librarianship. Peter Murray (aka the Disruptive Library Technology Jester) has provided an audio overlay of David Lewis' slideshare of his plenary at last June's RLG annual partners meeting. If you are at all interested in understanding the future of academic libraries … (read more) Tagged with: collections, future, provisioning

Librarians are *the* search experts…, by jaf. Posted in librarianship. …so I wonder how many librarians know all of the tips and tricks for using Google that are mentioned here?

What do we want from discovery?
Maybe it's to save the time of the user…, by jaf. Just a quick thought on discovery tools – the major newish discovery services being vended to libraries (WorldCat Local, Summon, EBSCO Discovery Service, etc.) all have their strengths, their complexity, their middle-of-the-road-politician trying-to-be-everything-to-everybody features. … (read more)

Putting a library in Starbucks, by jaf. Posted in digital libraries, librarianship. It is not uncommon to find a coffee shop in a library these days. Turn that concept around, though – would you expect a library inside a Starbucks? Or maybe that's the wrong question – how would you react to … (read more) Tagged with: coffee, digital library, library, monopsony, starbucks, upsell

Week of iPad, by jaf. Posted in apple, ebooks, hardware, ipad. It has been a little over a week since my iPad was delivered, and in that time I have had the opportunity to try it out at home, at work, and on the road. In fact, I'm currently typing this … (read more) Tagged with: apple, digital lifestyle, ipad, mobile, tablet

Conal Tuohy's blog – the blog of a digital humanities software developer

Analysis & Policy Online. Notes for my Open Repositories conference presentation. I will edit this post later to flesh it out into a proper blog post. Follow along at: conaltuohy.com/blog/analysis-policy-online/. Background: early discussion with Amanda Lawrence of APO (which at that time stood for 'Australian Policy Online') about text mining, at the LODLAM summit in Sydney.
They … (continue reading: Analysis & Policy Online)

A tool for web API harvesting. As the year stumbles to an end, I've put in a few days' work on my new project, Oceania, which is to be a linked data service for cultural heritage in this part of the world. Part of this project involves harvesting data from cultural institutions which make their collections available via so-called 'web APIs'. There are … (continue reading: A tool for web API harvesting)

Oceania. I am really excited to have begun my latest project: a linked open data service for online cultural heritage from New Zealand and Australia, and eventually, I hope, from our other neighbours. I have called the service 'oceania.digital'. The big idea of oceania.digital is to pull together threads from a number of different 'cultural' data … (continue reading: Oceania)

Australian Society of Archivists conference #asalinks. Last week I participated in the conference of the Australian Society of Archivists, in Parramatta. I was very impressed by the programme and the discussion. I thought I'd jot down a few notes here about just a few of the presentations that were most closely related to my own work. The presentations were all … (continue reading: Australian Society of Archivists conference #asalinks)

Linked open data visualisation at #GLAMVR. On Thursday last week I flew to Perth, in Western Australia, to speak at an event at Curtin University on visualisation of cultural heritage. Erik Champion, Professor of Cultural Visualisation, who organised the event, had asked me to talk about digital heritage collections and linked open data ('LOD').
The one-day event was entitled 'GLAM VR: …' (continue reading: Linked open data visualisation at #GLAMVR)

Visualizing government archives through linked data. Tonight I'm knocking back a gin and tonic to celebrate finishing a piece of software development for my client the Public Record Office Victoria, the archives of the government of the Australian state of Victoria. The work, which will go live in a couple of weeks, was an update to a browser-based visualization tool which … (continue reading: Visualizing government archives through linked data)

Taking control of an uncontrolled vocabulary. A couple of days ago, Dan McCreary (@dmccreary) tweeted: 'Working on new ideas for NoSQL metadata management for a talk next week. Focus on #nosql, documents, graphs and #skos. Any suggestions?' It reminded me of some work I had done a couple of years ago for a project which … (continue reading: Taking control of an uncontrolled vocabulary)

Bridging the conceptual gap: Museum Victoria's collections API and the CIDOC Conceptual Reference Model. This is the third in a series of posts about an experimental linked open data (LOD) publication based on the web API of Museum Victoria. The first post gave an introduction and overview of the architecture of the publication software, and the second dealt quite specifically with how names and identifiers work in the LOD … (continue reading)

Names in the museum. My last blog post described an experimental linked open data service I created, underpinned by Museum Victoria's collection API. Mainly, I described the LOD service's general framework, and explained how it worked in terms of data flow.
To recap briefly, the LOD service receives a request from a browser and in turn translates that request … (continue reading: Names in the museum)

Linked open data built from a custom web API. I've spent a bit of time just recently poking at the new web API of Museum Victoria Collections, and making a linked open data service based on their API. I'm writing this up as an example of one way — a relatively easy way — to publish linked data off the back of some existing … (continue reading)

Ted Lawless – work notebook

Automatically extracting keyphrases from text. I've posted an explainer/guide to how we are automatically extracting keyphrases for Constellate, a new text analytics service from JSTOR and Portico. We are defining keyphrases as up-to-three-word phrases that are key, or important, to the overall subject matter of the document. Keyphrase is often used interchangeably with keywords, but we are opting for the former since it's more descriptive. We did a fair amount of reading to grasp prior art in this area (extracting keyphrases is a long-standing research topic in information retrieval and natural language processing), and ended up developing a custom solution based on term frequency in the Constellate corpus. If you are interested in this work generally, and not just the Constellate implementation, Burton DeWilde has published an excellent primer on automated keyphrase extraction. More information about Constellate can be found here.
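The post links out to the full Constellate explainer. As a rough, hypothetical illustration of the general idea (frequency-based scoring of up-to-three-word candidate phrases), not the Constellate implementation, a sketch might look like this; the tokenizer, stopword list, and length weighting are my own simplifications:

```python
import re
from collections import Counter

def candidate_phrases(text, max_words=3):
    """Yield every run of 1 to max_words consecutive words as a candidate keyphrase."""
    words = re.findall(r"[a-z][a-z'-]*", text.lower())
    for n in range(1, max_words + 1):
        for i in range(len(words) - n + 1):
            yield " ".join(words[i:i + n])

def top_keyphrases(text, k=5, stopwords=frozenset({"the", "a", "of", "to", "is", "in"})):
    """Score candidates by corpus frequency, skipping phrases that
    start or end with a stopword, and dropping one-off phrases."""
    counts = Counter(
        p for p in candidate_phrases(text)
        if p.split()[0] not in stopwords and p.split()[-1] not in stopwords
    )
    # weight longer phrases so "information retrieval" can outrank "information"
    scored = {p: c * len(p.split()) for p, c in counts.items() if c > 1}
    return sorted(scored, key=scored.get, reverse=True)[:k]

text = ("keyphrase extraction is a research topic in information retrieval. "
        "keyphrase extraction methods score candidate phrases, and frequency "
        "is one signal for keyphrase extraction in information retrieval.")
print(top_keyphrases(text))  # 'keyphrase extraction' ranks first
```

A production version would score term frequency against a large corpus rather than a single document, which is what the Constellate explainer describes.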
Disclaimer: this is a work-related post; I don't intend to speak for my employer, ITHAKA.

Datasette hosting costs. I've been hosting a datasette of historical baseball data (https://baseballdb.lawlesst.net, aka baseballdb) for a few years, and for the last year or so it has been hosted on Google Cloud Run. I thought I would share my hosting costs as a point of reference for others who might be interested in running a datasette but aren't sure how much it may cost. The monthly Google Cloud Run bill varied a fair amount over the year, from a high in May to a low in March. Since I did no deployments or updates to the site during this time, I assume the variation in costs is related to the number of queries the datasette was serving; I don't have a good sense of how many total queries per month this instance serves, since I'm not using Google Analytics or similar. Google does report that it is subtracting some promotional credits for the year, but I don't expect those credits to expire anytime soon. This cost information is somewhat incomplete without knowing the number of queries served per month, but it is a benchmark.

Connecting Python's rdflib to AWS Neptune. I've written previously about using Python's rdflib to connect to various triple stores. For a current project, I'm using Amazon Neptune as a triple store, and the rdflib SPARQLStore implementation did not work out of the box, so I thought I would share my solution. The problem: Neptune returns N-Triples by default, while rdflib's SPARQLStore, by default in the version I was using, expects CONSTRUCT queries to return RDF/XML. The solution is to override rdflib's SPARQLStore to explicitly request RDF/XML from Neptune via HTTP content negotiation. Once this is in place, you can query and update Neptune via SPARQL with rdflib the same way that you would other triple stores.
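The post links to a full 'NeptuneStore' implementation. As a minimal standard-library sketch of the content-negotiation idea only, the fix amounts to sending an explicit Accept header with the SPARQL request; the endpoint URL below is hypothetical, and handing the response to rdflib is left as a comment:

```python
import urllib.parse
import urllib.request

RDF_XML = "application/rdf+xml"  # the format rdflib expects for CONSTRUCT results

def build_construct_request(endpoint, query):
    """Build a SPARQL-over-HTTP POST request that explicitly asks the
    endpoint (e.g. Neptune) for RDF/XML instead of its N-Triples default."""
    body = urllib.parse.urlencode({"query": query}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Accept": RDF_XML,  # the content-negotiation override
            "Content-Type": "application/x-www-form-urlencoded",
        },
    )

req = build_construct_request(
    "https://example-neptune-endpoint:8182/sparql",  # hypothetical endpoint
    "CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o } LIMIT 10",
)
print(req.get_header("Accept"))
# the response body can then be parsed with rdflib:
#   rdflib.Graph().parse(data=urllib.request.urlopen(req).read(), format="xml")
```

In the real store, this header override lives inside a SPARQLStore subclass, so rdflib's normal `Graph` query interface works unchanged.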
Code: if you are interested in working with Neptune using rdflib, here's a "NeptuneStore" and "NeptuneUpdateStore" implementation that you can use.

Usable sample researcher profile data. I've published a small set of web harvesting scripts to fetch information about researchers and their activities from the NIH Intramural Research Program website. On various projects I've been involved with, it has been difficult to acquire usable sample, or test, data about researchers and their activities: you either need access to an HR system and a research information system (for the activities), or you have to create mock data. Mock, or fake, data doesn't work well when you want to start integrating information across systems or developing tools to find new publications; it's hard to build a publication harvesting tool without real author names and research interests. To that end, the scripts I've published crawl the NIH Intramural Research Program website and pull out profile information for the thousand or so researchers that are members of the program, including a name, email, photo, short biography, research interests, and the PubMed IDs for selected publications. A second script harvests the organizational structure of the program. Both types of data are output to a simple JSON structure that can then be mapped to your destination system.

Exploring years of the New Yorker Fiction Podcast with Wikidata. Note: the online datasette that supported the sample queries below is no longer available; the raw data is at https://github.com/lawlesst/new-yorker-fiction-podcast-data. The New Yorker Fiction Podcast recently celebrated its ten-year anniversary. For those of you not familiar, this is a monthly podcast hosted by New Yorker fiction editor Deborah Treisman, where a writer who has published a short story in the New Yorker selects a favorite story from the magazine's archive and reads and discusses it on the podcast with Treisman.
I've been a regular listener to the podcast since it started, and thought it would be fun to look a little deeper at who has been invited to read and what authors they selected to read and discuss. The New Yorker posts all episodes of the fiction podcast on their website in nice, clean, browseable HTML pages. I wrote a Python script to step through the pages and pull out the basic details about each episode: title, URL, summary, date published, writer, and reader. The reader and the writer for each story are embedded in the title, so a bit of text processing was required to cleanly identify each; I also had to manually reconcile a few episodes that didn't follow the same pattern as the others. All code used here and the harvested data are available on GitHub.

Matching to Wikidata: I then took each of the writers and readers and matched them to Wikidata using the searchentities API. With the Wikidata ID, I'm able to retrieve many attributes of each reader and writer by querying the Wikidata SPARQL endpoint, such as gender, date of birth, awards received, Library of Congress identifier, etc.

Publishing with Datasette: I saved this harvested data to two CSV files – episodes.csv and people.csv – and then built a SQLite database to publish with Datasette, using the built-in integration with Zeit Now.

Now publishing complete Lahman baseball database with Datasette. Summary: the datasette API available at https://baseballdb.lawlesst.net now contains the full Lahman baseball database. In a previous post, I described how I'm using Datasette to publish a subset of the Lahman baseball database. At that time I only published three of the tables available in the database; I've since expanded that datasette API to include the complete baseball database.
The process for this was quite straightforward. I ran the MySQL dump Lahman helpfully provides through the mysql2sqlite tool to produce an import file for SQLite. Importing into SQLite for publishing with Datasette was as simple as:

$ ./mysql2sqlite lahman.sql | sqlite3 baseball.db

With the full database now loaded, there are many more interesting queries that can be run.

Publishing the Lahman baseball database with Datasette. Summary: publishing the Lahman baseball database with a Datasette API, available at https://baseballdb.lawlesst.net. For those of us interested in open data, an exciting new tool was released this month. It's by Simon Willison and called Datasette. Datasette allows you to very quickly convert CSV files to a SQLite database and publish it on the web with an API. Head over to Simon's site for more details.

SPARQL to pandas dataframes. Update: see this Python module for converting SPARQL query results into pandas dataframes. Pandas is a Python-based power tool for munging and analyzing data. While working with data from SPARQL endpoints, you may prefer to explore and analyze it with pandas, given its full feature set, strong documentation and large community of users. The code below is an example of issuing a query to the Wikidata SPARQL endpoint, loading the data into a pandas dataframe, and running basic operations on the returned data. This is a modified version of code from SU Labs; here we remove the types returned by the SPARQL endpoint, since they add noise, and we prefer to handle datatypes with pandas. With a few lines of code, we can connect data stored in SPARQL endpoints with pandas, the powerful Python data munging and analysis library. See the SU Labs tutorial for more examples. You can also download the examples from this post as a Jupyter notebook.
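The notebook with the pandas version is linked from the post. A standard-library sketch of the type-stripping step it describes (the helper name and the tiny sample result are mine, not from the post) might look like:

```python
def sparql_json_to_rows(result):
    """Flatten SPARQL 1.1 JSON results into plain dicts, dropping the
    type/datatype wrappers so a dataframe library sees bare values."""
    cols = result["head"]["vars"]
    return [
        {var: binding.get(var, {}).get("value") for var in cols}
        for binding in result["results"]["bindings"]
    ]

# a tiny result of the shape a SPARQL endpoint such as Wikidata returns
sample = {
    "head": {"vars": ["player", "dob"]},
    "results": {"bindings": [
        {"player": {"type": "literal", "value": "Babe Ruth"},
         "dob": {"type": "literal",
                 "datatype": "http://www.w3.org/2001/XMLSchema#dateTime",
                 "value": "1895-02-06T00:00:00Z"}},
    ]},
}
rows = sparql_json_to_rows(sample)
print(rows)  # [{'player': 'Babe Ruth', 'dob': '1895-02-06T00:00:00Z'}]
# pandas.DataFrame(rows) would then finish the job, with dtype handling done in pandas
```

Stripping the wrappers first is what lets pandas infer or convert datatypes itself, which is the design choice the post argues for.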
Querying Wikidata to identify globally famous baseball players. Earlier this year I had the pleasure of attending a lecture by César Hidalgo of MIT's Media Lab. One of the projects Hidalgo discussed was Pantheon. Pantheon is a website and dataset that ranks 'globally famous individuals' based on a metric the team created called the Historical Popularity Index (HPI). A key component of HPI is the number of Wikipedia pages an individual has in various languages. For a complete description of the project, see the paper by Yu et al.

Python ETL and JSON-LD. I've written an extension to petl, a Python ETL library, that applies JSON-LD contexts to data tables for transformation into RDF. The problem: converting existing data to RDF, such as for VIVO, often involves taking tabular data exported from a system of record, transforming or augmenting it in some way, and then mapping it to RDF for ingest into the platform. The W3C maintains an extensive list of tools designed to map tabular data to RDF. General-purpose CSV-to-RDF tools, however, almost always require some advanced preparation or cleaning of the data. This means that developers and data wranglers often have to write custom code, and this code can quickly become verbose and difficult to maintain. Using an ETL toolkit can help with this. One such ETL tool that I'm having good results with is petl.

Metadata Matters – it's all about the services

It's not just me that's getting old. Having just celebrated (?) another birthday, the topics of age and change have been even more on my mind than usual. And then two events converged. First I had a chat with Ted Fons in a hallway at Midwinter, and he asked about using an older article I'd published […]

Denying the non-English speaking world. Not long ago I encountered the analysis of BIBFRAME published by Rob Sanderson with contributions by a group of well-known librarians.
It’s a pretty impressive document – well organized and clearly referenced. But in fact there’s also a significant amount of personal opinion in it, the nature of which is somewhat masked by the references to others […]

Review of: Draft Principles for Evaluating Metadata Standards. Metadata standards is a huge topic and evaluation a difficult task, one I’ve been involved in for quite a while. So I was pretty excited when I saw the link for “Draft Principles for Evaluating Metadata Standards”, but after reading it? Not so much. If we’re talking about “principles” in the sense of ‘stating-the-obvious-as-a-first-step’, well, […]

The Jane-athons continue! The Jane-athon series is alive, well, and expanding its original vision. I wrote about the first ‘official’ Jane-athon earlier this year, after the first event at Midwinter. Since then the excitement generated at the first one has spawned others: the Ag-athon in the UK, sponsored by CILIP; the Maurice Dance in […]

Separating ideology, politics and utility. Those of you who pay attention to politics (no matter where you are) are very likely to be shaking your head over candidates, results or policy. It’s a never-ending source of frustration and/or entertainment here in the U.S., and I’ve noticed that the commentators seem to be focusing in on issues of ideology and […]

Semantic versioning and vocabularies. A decade ago, when the Open Metadata Registry (OMR) was just being developed as the NSDL Registry, the vocabulary world was a very different place than it is today. At that point we were tightly focussed on SKOS (not fully cooked at that point, but Jon was on the WG that was developing it, so […]

Five star vocabulary use. Most of us in the library and cultural heritage communities interested in metadata are well aware of Tim Berners-Lee’s five star ratings for linked open data (in fact, some of us actually have the mug). The five star rating for LOD, intended to encourage us to follow five basic rules for linked data, is useful, […]

What do we mean when we talk about ‘meaning’? Over the past weekend I participated in a Twitter conversation on the topic of meaning, data, transformation and packaging. The conversation is too long to repost here, but searching July for @metadata_maven should pick most of it up. Aside from my usual frustration at the message limitations in Twitter, there seemed to be […]

Fresh from ALA, what’s new? In the old days, when I was on MARBI as liaison for AALL, I used to write a fairly detailed report, and after that wrote it up for my Cornell colleagues. The gist of those reports was to describe what happened, and if there might be implications to consider from the decisions. I don’t propose […]

What’s up with this Jane-athon stuff? The RDA development team started talking about developing training for the ‘new’ RDA, with a focus on the vocabularies, in the fall. We had some notion of what we didn’t want to do: we didn’t want yet another ‘sage on the stage’ event; we wanted to re-purpose the ‘hackathon’ model from a software […]

ArchivesBlogs | a syndicated collection of blogs by and for archivists

Meet Ike. Posted on September from AOTUS. “I come from the very heart of America.” – Dwight Eisenhower, June. At a time when the world fought to overcome tyranny, he helped lead the course to victory as the Supreme Allied Commander in Europe. When our nation needed a leader, he upheld the torch of liberty as our th president. As a new memorial is unveiled, now is the time for us to meet Dwight David Eisenhower. Eisenhower Memorial statue and sculptures, photo by the Dwight D.
Eisenhower Memorial Commission.

An opportunity to get to know this man can be found at the newly unveiled Eisenhower Memorial in Washington, DC, and the all-new exhibits in the Eisenhower Presidential Library and Museum in Abilene, Kansas. Each site in its own way tells the story of a humble man who grew up in small-town America and became the leader of the free world. The Eisenhower Presidential Library and Museum is a -acre campus which includes several buildings where visitors can interact with the life of this president. Starting with the boyhood home, guests discover the early years of Eisenhower as he avidly read history books, played sports, and learned lessons of faith and leadership. The library building houses the documents of his administration. With more than million pages and , images, researchers can explore the career of a +-year public servant. The , square feet of all-new exhibits located in the museum building is where visitors get to meet Ike and Mamie again…for the first time. Using NARA’s holdings, guests gain insight into the life and times of President Eisenhower. Finally, visitors can be reflective in the Place of Meditation, where Eisenhower rests beside his first-born son, Doud, and his beloved wife Mamie. A true encapsulation of his life. Eisenhower Presidential Library and Museum, Abilene, Kansas. The updated gallery spaces were opened recently. The exhibition includes many historic objects from our holdings which highlight Eisenhower’s career through the military years and into the White House. Showcased items include Ike’s West Point letterman’s sweater, the D-Day planning table, a Soviet lunasphere, and letters related to the crisis at Little Rock. Several new films and interactives have been added throughout the exhibit, including a D-Day film using newly digitized footage from the archives.

Eisenhower Presidential Library and Museum, Abilene, Kansas. In addition to facts and quotes, visitors will leave with an understanding of how his experiences made Ike the perfect candidate for Supreme Allied Commander of the Allied Expeditionary Force in Europe and the th president of the United States. The Eisenhower Memorial, which opened to the public in September, is located at an important historical corridor in Washington, DC. The -acre urban memorial park, surrounded by four buildings housing institutions that were formed during the Eisenhower administration, was designed by award-winning architect Frank Gehry. The National Archives hosted Frank Gehry and his collaborator, theater artist Robert Wilson, in a discussion about the creation of the Eisenhower National Memorial. As part of the creative process, Gehry’s team visited the Eisenhower Presidential Library and drew inspiration from the campus. They also used the holdings of the Eisenhower Presidential Library to form the plans for the memorial itself. This also led to the development of online educational programs which will have a continued life through the Eisenhower Foundation. Visitors to both sites will learn lasting lessons from President Eisenhower’s life of public service. Eisenhower Memorial, photo by the Dwight D. Eisenhower Memorial Commission. Link to post | Language: English

The first post-9/11 phone-in: Richard Hake sitting in for Brian Lehrer. Posted on September from NYPR Archives & Preservation. On September, the late Richard Hake sat in for Brian Lehrer at Columbia University’s new studios at WKCR. Just one week after the attack on the World Trade Center, WNYC was broadcasting on FM at reduced power from the Empire State Building and over WNYE ( . FM). Richard spoke with New York Times columnist Paul Krugman on airport security, author James Fallows on the airline industry, Robert Roach Jr.
of the International Association of Machinists, and security expert and former New York City police commissioner William Bratton, as well as WNYC listeners.

Capturing virtual FSU. Posted on September from Illuminations. When the world of FSU changed in March, the FSU website was used as one of the primary communication tools to let students, faculty, and staff know what was going on. New webpages created specifically to share information and news popped up all over fsu.edu, and we had no idea how long those pages would exist (ah, the hopeful days of March), so Heritage & University Archives wanted to be sure to capture those pages quickly and often as they changed and morphed into new online resources for the FSU community. Screenshot of a capture of the main FSU news feed regarding coronavirus, captured in March. While FSU has had an Archive-It account for a while, we hadn’t fully implemented its use yet. Archive-It is a web archiving service that captures and preserves content on websites, as well as allowing us to provide metadata and a public interface for viewing the collected webpages. COVID-19 fast-tracked me on figuring out Archive-It and how we could best use it to capture these unique webpages documenting FSU’s response to the pandemic. I worked to configure crawls of websites to capture the data we needed, set up a schedule that would be sufficient to capture changes but also not overwhelm our data allowance, and describe the sites being captured. It took me a few tries, but we’ve successfully been capturing a set of COVID-related FSU URLs since March. One of the challenges of this work was that some of the webpages had functionality that the web crawling just wouldn’t capture. This was due to some interactive widgets on pages, or potentially some CSS choices the crawler didn’t like. I decided the content was the most important thing to capture in this case, more so than making sure the webpage looked exactly like the original.

A good example of this is the International Programs alerts page. We’re capturing this to track information about our study abroad programs, but what Archive-It displays is quite different from the current site in terms of design. The content is all there, though. On the left is how Archive-It displays a capture of the International Programs alerts page; on the right is how the site actually looks. While the content is the same, the formatting and design are not. As the pandemic dragged on and it became clear that fall would be a unique semester, I added the online orientation site and the fall site to my collection line-up. The fall page, once used to track the re-opening plan, recently morphed into the Stay Healthy FSU site, where the community can look for current information and resources but also see the original re-opening document. We’ll continue crawling and archiving these pages in our FSU Coronavirus Archive for future researchers until they are retired and the university community returns to “normal” operations – whatever that might look like when we get there!

Welcome to the new clintonlibrary.gov! Posted on September from AOTUS. The National Archives’ presidential libraries and museums preserve and provide access to the records of presidential administrations. In support of this mission, we developed an ongoing program to modernize the technologies and designs that support the user experience of our presidential library websites. Through this program, we have updated the websites of the Hoover, Truman, Eisenhower and Nixon presidential libraries. Recently we launched an updated website for the William J. Clinton Presidential Library & Museum. The website, which received more than , visitors over the past year, now improves access to the Clinton Presidential Library holdings by providing better performance, improving accessibility, and delivering a mobile-friendly experience.
The updated website’s platform and design, based on the Drupal web content management framework, enable the Clinton Presidential Library staff to make increasing amounts of resources available online, especially while working remotely during the COVID-19 crisis. To achieve this website redesign, staff from the National Archives’ Office of Innovation, with both web development and user experience expertise, collaborated with staff from the Clinton Presidential Library to define goals for the new website. Our user experience team first launched the project by interviewing staff of the Clinton Presidential Library to determine the improvements the updated website would need to facilitate their work. Next, the user experience team researched the library’s customers – researchers, students, educators, and the general public – by analyzing user analytics, heatmaps, recordings of real users navigating the site, and top search referrals. Based on the data collected, the user experience team produced wireframes and moodboards that informed the final site design. The team also refined the website’s information architecture to improve the user experience and meet the Clinton Library staff’s needs. Throughout the project, the team used agile project management processes to deliver iterative changes focused on constant improvement. To be agile, specific goals were outlined, defined, and distributed among team members for mutual agreement. Work on website designs and features was broken into development “sprints” – two-week periods to complete defined amounts of work. At the end of each development sprint, the resulting designs and features were demonstrated to the Clinton Presidential Library staff stakeholders for feedback, which helped further refine the website.

The project to update the Clinton Presidential Library and Museum website was guided by the National Archives’ strategic goals: to make access happen, connect with customers, maximize NARA’s value to the nation, and build our future through our people. By understanding the needs of the Clinton Library’s online users and staff, and leveraging the in-house expertise of our web development and user experience staff, the National Archives is providing an improved website experience for all visitors. Please visit the site, and let us know what you think!

The Road to Edinburgh (part ). Posted on September from Culture on Campus. “Inevitably, official thoughts early turned to the time when Scotland would be granted the honour of acting as hosts. Thought was soon turned into action and resulted in Scotland pursuing the opportunity to be host to the games more relentlessly than any other country has.” From the foreword to the official history of the IXth Commonwealth Games. In our last blog post we left the campaigners working to bring the Commonwealth Games to Edinburgh reflecting on the loss of the games to Kingston, Jamaica. The original plan of action sketched out by Willie Carmichael had factored in a renewed campaign if the initial approach to host the games proved unsuccessful. The choice of host cities for the games was made at the biannual general assemblies of the Commonwealth Games Federation. The campaign to choose the next host began at a meeting held in Tokyo (to coincide with the Olympics), with the final vote taking place at the Kingston games. The Edinburgh campaign presented a document to the federation restating its desire to be host city for the games.
Entitled ‘Scotland Invites’, it laid out Scotland’s case: “We are founder members of the federation; we have taken part in each games since the inception; and we are the only one of six countries who have taken part in every games who have not yet had the honour of celebrating the games.” From Scotland Invites, British Empire and Commonwealth Games Council for Scotland. Documents supporting Edinburgh’s bid to host the Commonwealth Games were presented to meetings of the General Assembly of the Commonwealth Games Federation at Tokyo and Kingston (ref. WC/ / / ). Edinburgh faced a rival bid from Christchurch, New Zealand, the competition between the two cities recorded in a series of press cutting files collected by Willie Carmichael. Reports in the Scottish press presented Edinburgh as the favourites, with Christchurch using their bid as a rehearsal for a more serious campaign to host the competition. However, the New Zealanders rejected this assessment, arguing that it was the turn of a country in the southern hemisphere to host the games. The games brought the final frantic round of lobbying and promotion for the rival bids as members of the Commonwealth Games Federation gathered in Kingston. The British Empire and Commonwealth Games Council for Scotland presented a bid document entitled ‘Scotland’ which included detailed information on the venues and facilities to be provided for the competition, along with a broader description of the city of Edinburgh. Artist’s impression of the new Meadowbank athletics stadium, Edinburgh (ref. WC/ / / / ). At the General Assembly of the Commonwealth Games Federation held in Kingston, Jamaica, in August, the vote took place to decide the host of the games. Edinburgh was chosen as host city by votes to . The Edinburgh campaign team kept a souvenir of this important event.

At the end of the meeting they collected together the evidence of their success and put it in an envelope marked ‘Ballot cards – which recorded votes for Scotland at Kingston.’ The voting cards and envelope now sit in an administrative file which forms part of the Commonwealth Games Scotland archive. Voting card recording a vote for Scotland to host the Commonwealth Games (ref. CG/ / / / / ).

New Ancient Texts research guide. Posted on September from Illuminations. “What are the oldest books you have?” is a common question posed to Special Collections & Archives staff at Strozier Library. In fact, the oldest materials in the collection are not books at all but cuneiform tablets ranging in date from to BCE. These cuneiform tablets, along with papyrus fragments and ostraka, comprise the Ancient Texts collection in Special Collections & Archives. In an effort to enhance remote research opportunities for students to engage with the oldest materials housed in Strozier Library, a research guide to ancient texts at FSU Libraries has been created by Special Collections & Archives staff. The Ancient Texts at FSU Libraries research guide provides links to finding aids with collections information, high-resolution photos of the objects in the digital library, and links to articles or books about the collections. Research guides can be accessed through the “Research Guides” tile on the library’s main page. Special Collections & Archives currently has research guides published that share information and resources on specific collections or subjects that can be accessed remotely. While direct access to physical collections is unavailable at this time due to COVID-19, we hope to resume in-person research when it is safe to do so, and Special Collections & Archives is still available to assist you remotely with research and instruction.
Please get in touch with us via email at lib-specialcollections@fsu.edu. For a full list of our remote services, please visit our services page.

SSCI members embrace need for declassification reform, discuss PIDB recommendations at Senate hearing. Posted on September from Transforming Classification. The board would like to thank Acting Chairman Marco Rubio (R-FL), Vice Chairman Mark Warner (D-VA), and members of the Senate Select Committee on Intelligence (SSCI) for their invitation to testify yesterday at the open hearing on “Declassification Policy and Prospects for Reform.” At the hearing, PIDB member John Tierney responded to questions from committee members about recommendations in the PIDB’s May report to the President. He stressed the need for modernizing information security systems and the critical importance of sustained leadership through a senior-level executive agent (EA) to oversee and implement meaningful reform. In addition to Congressman Tierney, Greg Koch, the acting director of information management in the Office of the Director of National Intelligence (ODNI), testified in response to the SSCI’s concerns about the urgent need to improve how the executive branch classifies and declassifies national security information. Much of the discussion focused on the PIDB recommendation that the President designate the ODNI as the EA to coordinate the application of information technology, including artificial intelligence and machine learning, to modernize classification and declassification across the executive branch. Senator Jerry Moran (R-KS) and Senator Ron Wyden (D-OR), who is a member of the SSCI, joined the hearing to discuss the bill they are cosponsoring to modernize declassification. Their proposed “Declassification Reform Act” aligns with the PIDB report recommendations, including the recommendation to designate the ODNI as the EA for coordinating the required reforms.

The board would like to thank Senators Moran and Wyden for their continued support and attention to this crucial issue. Modernizing the classification and declassification system is important for our 21st-century national security, and it is important for transparency and our democracy. Video of the entire hearing is available to view at the SSCI’s website and from C-SPAN. The transcript of prepared testimony submitted to the SSCI by Mr. Tierney is posted on the PIDB website.

Be connected, keep a Stir diary. Posted on September from Culture on Campus. The new semester approaches, and it’s going to be a bit different from what we’re used to here at the University of Stirling. To help you with your mental health and wellbeing this semester, we’ve teamed up with the Chaplaincy to provide students new and returning with a diary where you can keep your thoughts and feelings, process your new environment, record your joys and capture what the university was like for you in this unprecedented time. Diaries will be stationed at the welcome lounges from September, and we encourage students to take one for their personal use. Please be considerate of others and only take one diary each. Inside each diary is a QR code which will take you to our project page, where you can learn more about the project and where we will be creating an online resource for you to explore the amazing diaries that we keep in Archives and Special Collections. We will be updating this page throughout the semester with information from the archives and events for you to join. Keep an eye out for #stirdiary on social media for all the updates! At the end of semester, you are able to donate your diary to the archive, where it will sit with the university’s institutional records and form a truthful and creative account of what student life was like this year.
You absolutely don’t have to donate your diary if you don’t want to; the diary belongs to you, and you can keep it, throw it away, donate it or anything else (wreck it?) as you like. If you would like to take part in the project but you have missed the welcome lounges, don’t worry! Contact Rosie at archives@stir.ac.uk or Janet at janet.foggie @stir.ac.uk. Welcome to the University of Stirling – pick a colour!

PIDB member John Tierney to support modernizing classification and declassification before the Senate Select Committee on Intelligence, tomorrow, live on C-SPAN. Posted on September from Transforming Classification. PIDB member John Tierney will testify at an open hearing on declassification policy and the prospects for reform, to be held by the Senate Select Committee on Intelligence (SSCI) tomorrow, Wednesday, from : to : p.m. EST. The hearing will be shown on the SSCI’s website and televised live on C-SPAN. SSCI members Senators Ron Wyden (D-OR) and Jerry Moran (R-KS) have cosponsored the proposed “Declassification Reform Act,” which aligns with recommendations of the PIDB’s latest report to the President, A Vision for the Digital Age: Modernization of the U.S. National Security Classification and Declassification System (May). In an opinion-editorial appearing today on the website Just Security, Senators Wyden and Moran present their case for legislative reform to address the challenges of outmoded systems for classification and declassification. At the hearing tomorrow, Mr. Tierney will discuss how the PIDB recommendations present a vision for a uniform, integrated, and modernized security classification system that appropriately defends national security interests, instills confidence in the American people, and maintains sustainability in the digital environment. Mr.

Greg Koch, acting director of the Information Management Office for the Office of the Director of National Intelligence, will also testify at the hearing. The PIDB welcomes the opportunity to speak before the SSCI and looks forward to discussing the need for reform with the senators. After the hearing, the PIDB will post a copy of Mr. Tierney’s prepared testimony on its website and on this blog.

Wiki Loves Monuments – digital skills and exploring Stirling. Posted on September from Culture on Campus. Every year the Wikimedia Foundation runs Wiki Loves Monuments – the world’s largest photo competition. Throughout September there is a push to take good quality images of listed buildings and monuments and add them to Wikimedia Commons, where they will be openly licensed and available for use across the world – they may end up featuring on Wikipedia pages, on Google, in research and presentations worldwide, and will be entered into the UK competition where there are prizes to be had! Below you’ll see a map covered in red and blue pins. These represent all of the listed buildings and monuments that are covered by the Wiki Loves Monuments competition; blue pins are places that already have a photograph, and red pins have no photograph at all. The aim of the campaign is to turn as many red pins blue as possible, greatly enhancing the amazing bank of open knowledge across the Wikimedia platforms. The University of Stirling sits within the black circle. The two big clusters of red pins on the map are Stirling and Bridge of Allan – right on your doorstep! We encourage you to explore your local area. Knowing your surroundings, finding hidden gems and learning about the history of the area will all help Stirling feel like home to you, whether you’re a first year or returning student. Look at all those red dots!

Of course, this year we must be cautious and safe while taking part in this campaign, and you should follow social distancing rules and all government coronavirus guidelines, such as wearing facemasks where appropriate, while you are out taking photographs. We encourage you to walk to locations you wish to photograph, or use the nextbikes which are situated on campus and in Stirling, rather than take excessive public transport purely for the purposes of this project. Walking and cycling will help you to get a better sense of where everything is in relation to where you live, and keeping active is beneficial to your mental health and wellbeing. Here are your nextbike points on campus where you can pick up a bike to use. We hope you’ll join us for this campaign – we have a session planned for a Thursday in September on Teams, where we’ll tell you more about Wiki Loves Monuments and show you how to upload your images. Sign up to the session on Eventbrite. If you cannot make our own University of Stirling session, Wikimedia UK have their own training session in September which you can join. Please note that if you want your photographs to be considered for the competition prizes, they must be submitted before the September deadline. Photographs in general can be added at any time, so you can carry on exploring for as long as you like! Finally, just to add a little incentive, this year we’re having a friendly competition between the University of Stirling and the University of St Andrews students to see who can make the most edits, so come along to a training session, pick up some brilliant digital skills and let’s paint the town green!

What’s the tea? Posted on September from Illuminations. Katie McCormick, Associate Dean (she/her/hers). For this post, I interviewed Katie McCormick in order to get a better understanding of the dynamics of Special Collections & Archives.
Katie is one of the associate deans and has been with SCA for about nine years now (here’s a video of Katie discussing some of our collections on C-SPAN!). As a vital part of the library, and our leader in Special Collections & Archives, I wanted to get her opinion on how the division has progressed thus far and how it plans to continue to do so in regards to diversity and inclusion.

How would you describe FSU SCA when you first started? “…people didn’t feel comfortable communicating [with each other]… There was one person who really wrote for the blog, and maybe it would happen once every couple of months. When I came on board, my general sense was that we were a department and a group of people with a lot of really great ideas and some fantastic materials, who had come a long way from where things had been, but who hadn’t gotten to a place to be able to organize to change more or to really work more as a team… We were definitely valued as (mostly) the fancy crown jewel group. Really all that mattered was the stuff… it didn’t matter what we were doing with it.”

How do you feel the lapse in communication affected diversity and inclusion? “While I don’t have any direct evidence that it excluded people or helped create an environment that was exclusive, I do know that even with our staff at the time, there were times where it contributed to hostilities, frustrations, an environment where people didn’t feel able to speak or be comfortable in… Everybody just wanted to be comfortable with the people who were just like them, so it definitely created some potentially hostile environments. Looking back, I recognize what a poor job we did, as a workplace and a community, truly being inclusive, and not just in ways that are immediately visible.”

How diverse was SCA when you started? “In Special Collections there was minimal diversity, certainly less than we have now… [For the libraries as a whole], as you go up in classification and pay, the diversity decreases. That was certainly true when I got here, and that remains true.”

How would you rank SCA’s diversity and inclusion when you first started? “…squarely a , possibly in some arenas a . Not nothing, but I feel like no one was really thinking of it.”

And how would you describe it now? “Maybe we’re approaching a . I feel like there’s been progress, but there’s still a long way to go in my opinion.”

What are some ways we can start addressing these issues? What are some tangible ways you are planning to enact? “For me, some of the first places [is] forming the Inclusive Research Services task force in Special Collections, pulling together a group to look at descriptive practices and applications, and what we’re doing with creating coordinated processing workflows. Putting these issues on the table from the beginning is really important… Right now, because we’re primarily in an online environment, I think we have some time to negotiate and change our practices, so that when we are re-open to the public and people are physically coming in to the spaces, we have new forms, new trainings; people have gone through training that gives them a better sense of identity, communication, diversity.”

After my conversation with Katie, I feel optimistic about the direction we are heading in. Knowing how open Special Collections & Archives is about taking critique and trying to put it into action brought me comfort. I’m excited to see how these concerns are addressed and how the department will be putting dynamic inclusivity, one of Florida State University’s core values, at the forefront of its practice. I would like to give a big thank you to Katie McCormick for taking the time to do this post with me and for having these conversations!

Friday Art Blog: Terry Frost. Posted on September from Culture on Campus. Black and Red on Blue (screenprint, A/P). Born in Leamington Spa, Warwickshire, Terry Frost KBE RA did not become an artist until he was in his s.
during world war ii, he served in france, the middle east and greece, before joining the commandos. while in crete in june he was captured and sent to various prisoner of war camps. as a prisoner at stalag in bavaria, he met adrian heath, who encouraged him to paint. after the war he attended camberwell school of art and the st. ives school of art and painted his first abstract work in . in he moved to newlyn and worked as an assistant to the sculptor barbara hepworth. he was joined there by roger hilton, and together they began a collaboration in collage and construction techniques. in he put on his first exhibition in the usa, in new york, and there he met many of the american abstract expressionists, including mark rothko, who became a great friend. terry frost’s career included teaching at the bath academy of art, serving as gregory fellow at the university of leeds, and also teaching at the cyprus college of art. he later became the artist in residence and professor of painting at the department of fine art of the university of reading. orange dusk (lithograph, / , ) frost was renowned for his use of the cornish light, colour and shape. he became a leading exponent of abstract art and a recognised figure of the british art establishment. these two prints were purchased in the early days of the art collection at the beginning of the s. terry frost married kathleen clarke in and they had six children, two of whom became artists (and another, stephen frost, a comedian). his grandson luke frost, also an artist, is shown here, speaking about his grandfather. link to post | language: english pidb sets next virtual public meeting for october , posted on september , from transforming classification the public interest declassification board (pidb) has scheduled its next virtual public meeting for wednesday, october , , from : to : p.m.  at the meeting, pidb members will discuss their priorities for improving classification and declassification in the next months.
they will also introduce former congressman trey gowdy, who was appointed on august , , to a three-year term on the pidb. a full agenda, as well as information on how to pre-register, and how to submit questions and comments to the pidb prior to the virtual meeting, will be posted soon to transforming classification. the pidb looks forward to your participation in continuing our public discussion of priorities for modernizing the classification system going forward. link to post | language: english digital collections updates posted on september , from unc greensboro digital collections so as we start a new academic year, we thought this would be a good time for an update on what we’ve been working on recently. digital collections migration: after more than a year’s delay, the migration of our collections into a new and more user-friendly (and mobile-friendly) platform driven by the islandora open-source content management system is in the home stretch. this has been a major undertaking and has given us the opportunity to reassess how our collections work. we hope to be live with the new platform in november. , items (over , digital images) have already been migrated. - projects: we’ve made significant progress on most of this year’s projects (see link for project descriptions), though many of these are currently not yet online pending our migration to the islandora platform: grant-funded projects: temple emanuel project: we are working with the public history department and a graduate student in that program. several hundred items have already been digitized and more work is being done. we are also exploring grant options with the temple to digitize more material. people not property: nc slave deeds project: we are in the final year of this project funded by the national archives and hope to have it online as part of the digital library on american slavery late next year. we are also exploring additional funding options to continue this work. 
women who answered the call: this project was funded by a clir recordings at risk grant. the fragile cassettes have been digitized and we are midway through the process of getting them online in the new platform. library-funded projects: poetas sin fronteras: poets without borders, the scrapbooks of dr. ramiro lagos: these items have been digitized and will go online when the new platform launches. north carolina runaway slaves ads project, phase : work continues on this ongoing project and over ads are now online. this second phase has involved both locating and digitizing/transcribing the ads, and we will soon triple the number of ads done in phase one. we are also working on tighter integration of this project into the digital library on american slavery. pride! of the community: this ongoing project stemmed from an neh grant two years ago and is growing to include numerous new oral history interviews and (just added) a project to digitize and display ads from lgbtq+ bars and other businesses in the triad during the s and s. we are also working with two public history students on contextual and interpretive projects based on the digital collection. faculty-involved projects: black lives matter collections: this is a community-based initiative to document the black lives matter movement and recent demonstrations and artwork in the area. faculty: dr. tara green (african american and diaspora studies); stacey krim, erin lawrimore, dr. rhonda jones, david gwynn (university libraries). civil rights oral histories: this has become multiple projects. we are working with several faculty members in the media studies department to make these transcribed interviews available online. november is the target. faculty: matt barr, jenida chase, hassan pitts, and michael frierson (media studies); richard cox, erin lawrimore, david gwynn (university libraries).
oral contraceptive ads: working with a faculty member and a student on this project, which may be online by the end of the year. faculty: dr. heather adams (english); david gwynn and richard cox (university libraries). well-crafted nc: work is ongoing and we are in the second year of a uncg p grant, working with a faculty member in the bryan school and a brewer based in asheboro. faculty: erin lawrimore, richard cox, david gwynn (university libraries), dr. erick byrd (marketing, entrepreneurship, hospitality, and tourism) new projects taken on during the pandemic: city of greensboro scrapbooks: huge collection of scrapbooks from the greensboro urban development department dating back to the s. these items have been digitized and will go online when the new platform launches. negro health week pamphlets: s- s pamphlets published by the state of north carolina. these items are currently being digitized and will go online when the new platform launches. clara booth byrd collection: manuscript collection. these items are currently being digitized and will go online when the new platform launches. north carolina speaker ban collection: manuscript collection. these items are currently being digitized and will go online when the new platform launches. mary dail dixon papers: manuscript collection. these items are currently being digitized and will go online when the new platform launches. ruth wade hunter collection: manuscript collection. these items are currently being digitized and will go online when the new platform launches. projects on hold pending the pandemic: junior league of greensboro: much of this has already been digitized and will go online when the new platform launches. uncg graduate school bulletins: much of this has already been digitized and will go online when the new platform launches.
david gwynn (digitization coordinator, me) offers kudos to erica rau and kathy howard (digitization and metadata technicians); callie coward (special collections cataloging & digital projects library technician); charley birkner (technology support technician); and dr. brian robinson (fellow for digital curation and scholarship) for their great work in very surreal circumstances over the past six months. link to post | language: english correction: creative fellowship call for proposals posted on september , from notes for bibliophiles we have an update to our last post! we’re still accepting proposals for our creative fellowship… but we’ve decided to postpone both the fellowship and our annual exhibition & program series by six months due to the coronavirus. the annual exhibition will now open on october , (which is months away, but we’re still hard at work planning!). the new due date for fellowship proposals is april , . we’ve adjusted the timeline and due dates in the call for proposals accordingly. link to post | language: english on this day in the florida flambeau, friday, september , posted on september , from illuminations today in , a disgruntled reader sent in this letter to the editor of the flambeau. in it, the reader describes the outcome of a trial and the potential effects that outcome will have on the city of tallahassee. florida flambeau, september , it is such a beautifully written letter that i still can’t tell whether or not it’s satire. do you think the author is being serious or sarcastic? leave a comment below telling us what you think! link to post | language: english hartgrove, meriwether, and mattingly posted on september , from the consecrated eminence the past few months have been a challenging time for archivists everywhere as we adjust to doing our work remotely. fortunately, the materials available in amherst college digital collections enable us to continue doing much of our work. 
back in february, i posted about five black students from the s and s — black men of amherst, -  — and now we’re moving into the early th century. a small clue in the olio has revealed another black student that was not included in harold wade’s black men of amherst. robert sinclair hartgrove (ac ) was known to wade, as was robert mattingly (ac ), but we did not know about robert henry meriwether. these three appear to be the first black students to attend amherst in the twentieth century. robert sinclair hartgrove, class of the text next to hartgrove’s picture in the yearbook gives us a tiny glimpse into his time at amherst. the same yearbook shows hartgrove not just jollying the players, but playing second base for the freshman baseball team during the season. freshman baseball team, the reference to meriwether sent me to the amherst college biographical record, where i found robert henry meriwether listed as a member of the class of . a little digging into the college catalogs revealed that he belongs with the class of . college catalog, - hartgrove and meriwether are both listed as members of the freshman class in the - catalog. the catalog also notes that they were both from washington, dc and the biographical record indicates that they both prepped at howard university before coming to amherst. we find meriwether’s name in the catalog for - , but he did not “pull through” as the olio hopes hartgrove will; meriwether returned to howard university where he earned his llb in . hartgrove also became a lawyer, earning his jb from boston university in and spending most of his career in jersey city, nj. robert nicholas mattingly, class of mattingly was born in louisville, ky in and prepped for amherst at the m street school in washington, dc, which changed its name in to the dunbar school. 
matt randolph (ac ) wrote “remembering dunbar: amherst college and african-american education in washington, dc” for the book amherst in the world, which includes more details of mattingly’s life. the amherst college archives and special collections reading room is closed to on-site researchers. however, many of our regular services are available remotely, with some modifications. please read our services during covid- page for more information. contact us at archives@amherst.edu. link to post | language: english democratizing access to our records posted on september , from aotus the national archives has a big, hairy audacious strategic goal to provide public access to million digital copies of our records through our online catalog by fy . when we first announced this goal in , we had less than a million digital copies in the catalog and getting to million sounded to some like a fairy tale. the goal received a variety of reactions from people across the archival profession, our colleagues and our staff. some were excited to work on the effort and wanted particular sets of records to be first in line to scan. some laughed out loud at the sheer impossibility of it. some were angry and said it was a waste of time and money. others were fearful that digitizing the records could take their jobs away. we moved ahead. staff researched emerging technologies and tested them through pilots in order to increase our efficiency. we set up a room at our facilities in college park to transfer our digital copies from individual hard drives to new technology from amazon, known as snowballs. we worked on developing new partnership projects in order to get more records digitized. we streamlined the work in our internal digitization labs and we piloted digitization projects with staff in order to find new ways to get digital copies into the catalog. by , we had million in the catalog. we persisted. 
in , we added more digital objects, with their metadata, to the catalog in a single year than we had for the preceding decade of the project. late in , we surpassed a major milestone by having more than million digital copies of our records in the catalog. and yes, it has strained our technology. the catalog has developed growing pains, which we continue to monitor and mitigate. we also created new finding aids that focus on digital copies of our records that are now available online: see our record group explorer and our presidential library explorer. so now, anyone with a smart phone or access to a computer with wifi can view at least some of the permanent records of the u.s. federal government without having to book a trip to washington, d.c. or one of our other facilities around the country. the descriptions of over % of our records are also available through the catalog, so even if you can’t see it immediately, you can know what records exist. and that is convenient for the millions of visitors we get each year to our website, even more so during the pandemic. national archives identifier we are well on our way to million digital copies in the catalog by fy . and yet, with over billion pages of records in our holdings, we know we have only just begun. link to post | language: english lola hayes and “tone pictures of the negro in music” posted on august , from nypr archives & preservation lola wilson hayes ( - ) was a highly regarded african-american mezzo-soprano, wnyc producer, and later, much sought after vocal teacher and coach. a boston native, hayes was a music graduate of radcliffe college and studied voice with frank bibb at baltimore’s peabody conservatory. she taught briefly at a black vocational boarding school in new jersey known as the ‘tuskeegee of the north'[ ] before embarking on a recital and show career which took her to europe and around the united states.
during world war ii, she also made frequent appearances at the american theatre wing of the stage door canteen of new york and entertained troops at uso clubs and hospitals. headline from the new york age, august , , pg. . (wnyc archive collections) hayes also made time to produce a short but notable run of wnyc programs, which she hosted and performed on the home front. her november and december broadcasts were part of a rotating half-hour time slot designated for known recitalists. she shared the late weekday afternoon slot with sopranos marjorie hamill, pina la corte, jean carlton, elaine malbin, and the hungarian pianist arpád sándor. hayes’ series, tone pictures of the negro in music, sought to highlight african-american composers and was frequently referred to as the negro in music. the following outline of and broadcasts was pieced together from the wnyc masterwork bulletin program guide and period newspaper radio listings. details on the programs are sparse. we know that hayes’ last broadcast in featured the pianist william duncan allen ( - ) performing they led my lord away by roland hayes and good lord done been here by hall johnson, and a porgy and bess medley by george gershwin. excerpt from “behind the mike,” november/december , wnyc masterwork bulletin. (wnyc archive collections) the show was scheduled again in august as a -minute late tuesday afternoon program and in november that year as a half-hour wednesday evening broadcast. the august programs began with an interview of soprano abbie mitchell ( - ), the widow of composer and choral director will marion cook ( - ). the composer and arranger hall johnson ( - ) was her studio guest the following week. the third tuesday of the month featured pianist jonathan brice performing “songs of young contemporary negro composers,” and the august shows concluded with selections from porgy and bess and cameron jones. 
the november broadcasts focused on the work of william grant still, “the art songs, spirituals and street cries” of william lawrence, as well as the songs and spirituals of william rhodes, lyric soprano lillian evanti, and baritone harry t. burleigh. hayes also spent airtime on the work of neo-romantic composer and violinist clarence cameron white. the november th program considered “the musical setting of poems by langston hughes” and reportedly included the bard himself. “langston hughes was guest of honor and punctuated his interview with a reading from his opera troubled island.”[ ] this was not the first time the poet’s work was the subject of hayes’ broadcast. below is a rare copy of her script from a program airing eight months earlier when she sat in for the regularly scheduled host, soprano marjorie hamill. the script for tone pictures of the negro in music hosted by lola hayes on march , . (image used with permission of the van vechten trust and courtesy of the carl van vechten papers relating to african american arts and letters, james weldon johnson collection in the yale collection of american literature, beinecke rare book and manuscript library)[ ] it is unfortunate, but it appears there are no recordings of lola hayes’ wnyc program. we can’t say if that’s because they weren’t recorded or, if they were, the lacquer discs have not survived. we do know that world war ii-era transcription discs, in general, are less likely to have survived since most of them were cut on coated glass, rather than aluminum, to save vital metals for the war effort. after the war, hayes focused on voice teaching and coaching. her students included well-known performers like dorothy rudd moore, hilda harris, raoul abdul-rahim, carol brice, nadine brewer, elinor harper, lucia hawkins, and margaret tynes. she was the first african-american president of the new york singing teachers association (nysta), serving in that post from - .
in her later years, she devoted much of her time to the lola wilson hayes vocal artists award, which gave substantial financial aid to young professional singers worldwide.[ ]  ___________________________________________________________ [ ] the manual training and industrial school for colored youth in bordentown, new jersey [ ] “the listening room,” the people’s voice, december , , pg. . the newspaper noted that the broadcast included hall johnson’s mother to son, cecil cohen’s death of an old seaman and florence price’s song to a dark virgin, all presumably sung by host, lola hayes. troubled island is an opera set in haiti in . it was composed by william grant still with a libretto by langston hughes and verna arvey. [ ] page two of the script notes langston hughes’ grandmother was married to a veteran of the harper’s ferry raid led by abolitionist john brown. indeed, hughes’ grandmother’s first husband was lewis sheridan leary, who was one of brown’s raiders at harper’s ferry. for more on the story please see: a shawl from harper’s ferry. [ ] abdul, raoul, “winners of the lola hayes vocal scholarship and awards,” the new york amsterdam news, february , , pg. . special thanks to valeria martinez for research assistance. link to post | language: english the road to edinburgh posted on august , from culture on campus on the th anniversary of the edinburgh commonwealth games, newly catalogued collections trace the long road to the first games held in scotland. a handwritten note dated th april sits on the top of a file marked ‘scotland for host’. the document forms part of a series of files recording the planning, organisation and operation of the edinburgh commonwealth games, the first to be held in scotland. written by willie carmichael, a key figure in scotland’s games history, the note sets out his plans to secure the commonwealth games for scotland.
he begins by noting that scotland’s intention to host the games was declared at a meeting of commonwealth games federations at the melbourne olympic games. carmichael then proceeds to lay out the steps required to make scotland’s case to be the host of the games in or . willie carmichael the steps which carmichael traced out in his note can be followed through the official records and personal papers relating to the games held in the university archives. the recently catalogued administrative papers of commonwealth games scotland for the period provide a detailed account of the long process of planning for this major event, recording in particular the close collaboration with edinburgh corporation which was an essential element in securing the games for scotland (with major new venues being required for the city to host the event). further details and perspectives on the road to the games can be found in the personal papers of figures associated with commonwealth games scotland also held in the university archives, including sir peter heatly and willie carmichael himself. the choice of host city for the games was to be made at a meeting held at the games in perth, australia. meeting the first target in carmichael’s plan, the edinburgh campaign put forward its application as host city at a federation meeting held in rome in . a series of press cutting files collected by carmichael trace the campaign’s progress from this initial declaration of intent through to the final decision made in perth. documents supporting edinburgh’s bid to host the commonwealth games presented to meetings of the commonwealth games federation in rome ( ) and perth ( ), part of the willie carmichael archive. edinburgh faced competition both within scotland, with the press reporting a rival bid from glasgow, and across the commonwealth, with other nations including jamaica, india and southern rhodesia expressing an interest in hosting the competition.
when it came to the final decision in three cities remained in contention: edinburgh, kingston in jamaica, and salisbury in southern rhodesia. the first round of voting saw salisbury eliminated. in the subsequent head-to-head vote kingston was selected as host city for the games by the narrowest of margins ( votes to ). as carmichael had sketched out in his plan, if edinburgh failed in its attempt to host the games, it would have another opportunity to make its case to hold the event. carmichael and his colleagues travelled to kingston in confident of securing the support required to bring the games to scotland in . in our next blog we’ll look at how they succeeded in making the case for edinburgh. ‘scotland invites’, title page to document supporting edinburgh’s bid to host the commonwealth games (willie carmichael archive). link to post | language: english friday art blog: kate downie posted on august , from culture on campus nanbei by kate downie (oil on canvas, ) during a series of visits to china a few years ago, kate downie was brought into contact with traditional ink painting techniques, and also with the china of today. there she encountered the contrasts and meeting points between the epic industrial and epic romantic landscapes: the motorways, rivers, cityscapes and geology – all of which she absorbed and reflected on in a series of oil and ink paintings. as kate creates studies for her paintings in situ, she is very much immersed in the landscapes that she is responding to and reflecting on. the artwork shown above, ‘nanbei’, which was purchased by the art collection in , tackles similar themes to downie’s scottish based work, reflecting both her interest in the urban landscape and also the edges where land meets water. here we encounter both aspects within a new setting – an industrial chinese landscape set by the edge of a vast river. downie is also obsessed with bridges.
as well as the bridge that appears in this image, seemingly supported by trees that follow its line, the space depicted forms an unseen bridge between two worlds and two extremes, between epic natural and epic industrial forms. in this imagined landscape, north meets south (nanbei literally means north south) and mountains meet skyscrapers; here both natural and industrial structures dominate the landscape. this juxtaposition is one of the aspects of china that impressed the artist and inspired the resulting work. after purchasing this work by kate downie, the art collection invited her to be one of three exhibiting artists in its exhibition ‘reflections of the east’ in (the other two artists were fanny lam christie and emma scott smith). all artists had links to china, and ‘nanbei’ was central to the display of works in the crush hall that kate had entitled ‘shared vision’. temple bridge (monoprint, ) kate downie studied fine art at gray’s school of art, aberdeen and has held artists’ residencies in the usa and europe. she has exhibited widely and has also taught and directed major art projects. in kate downie travelled to beijing and shanghai to work with ink painting masters and she has since returned there several times, slowly building a lasting relationship with chinese culture. on a recent visit she learned how to carve seals from soapstone, and these red stamps can now be seen on all of her work, including on her print ‘temple bridge’ above, which was purchased by the collection at the end of the exhibition. kate downie recently gave an interesting online talk about her work and life in lockdown. it was organised by the scottish gallery in edinburgh which is currently holding an exhibition entitled ‘modern masters women‘ featuring many women artists. 
watch kate downie’s talk below: link to post | language: english telling untold stories through the emmett till archives posted on august , from illuminations detail of a newspaper clipping from the joseph tobias papers, mss - friday august th marks the th anniversary of the abduction and murder of emmett till. till’s murder is regarded as a significant catalyst for the mid-century african-american civil rights movement. calls for justice for till still drive national conversations about racism and oppression in the united states. in , florida state university (fsu) libraries special collections & archives established the emmett till archives in collaboration with emmett till scholar davis houck, filmmaker keith beauchamp, and author devery anderson. since then, we have continued to build robust research collections of primary and secondary sources related to the life, murder, and commemoration of emmett till. we invite researchers from around the world, from any age group, to explore these collections and ask questions. it is through research and exploration of original, primary resources that till’s story can be best understood and that truth can be shared. “mamie had a little boy…”, from the wright family interview, keith beauchamp audiovisual recordings, mss - fsu special collections & archives. as noted in our emmett till birthday post this year, an interview with emmett till’s family, conducted by civil rights filmmaker keith beauchamp in , is now available through the fsu digital library in two parts. willie wright, thelma wright edwards, and wilma wright edwards were kind enough to share their perspectives with beauchamp and in a panel presentation at the fsu libraries heritage museum that spring. soon after this writing, original audio and video files from the interview will also be available to any visitor, researcher, or aspiring documentary filmmaker through the fsu digital library. emmett till, december .
image from the davis houck papers a presentation by a till scholar in led to renewed contact with and a valuable donation from fsu alum steve whitaker, who in a way was the earliest contributor to emmett till research at fsu. his seminal master’s thesis, completed right here at florida state university, is still the earliest known scholarly work on the kidnapping and murder of till, and was influential on many subsequent retellings of the story. the till archives recently received a few personal items from whitaker documenting life in mid-century mississippi, as well as a small library of books on till, mississippi law, and other topics that can give researchers valuable context for his thesis and the larger till story. in the future, the newly-founded emmett till lecture and archives fund will ensure further opportunities to commemorate till through events and collection development. fsu libraries will continue to partner with till’s family, the emmett till memory project, emmett till interpretive center, the emmett till project, the fsu civil rights institute, and other institutions and private donors to collect, preserve and provide access to the ongoing story of emmett till. sources and further reading fsu libraries. emmett till archives research guide. https://guides.lib.fsu.edu/till wright family interview, keith beauchamp audiovisual recordings, mss - , special collections & archives, florida state university, tallahassee, florida. interview part i: http://purl.flvc.org/fsu/fd/fsu_mss - _bd_ interview part ii: http://purl.flvc.org/fsu/fd/fsu_mss - _bd_ link to post | language: english former congressman trey gowdy appointed to the pidb posted on august , from transforming classification on august , , house minority leader kevin mccarthy (r-ca) appointed former congressman harold w. “trey” gowdy, iii as a member of the public interest declassification board. mr. 
gowdy served four terms in congress, representing his hometown of spartanburg in south carolina’s th congressional district. the board members and staff welcome mr. gowdy and look forward to working with him in continuing efforts to modernize and improve how the federal government classifies and declassifies sensitive information. mr. gowdy was appointed by minority leader mccarthy on august , . he is serving his first three-year term on the board. his appointment was announced on august , in the congressional record https://www.congress.gov/ /crec/ / / /crec- - - -house.pdf link to post | language: english tracey sterne posted on august , from nypr archives & preservation in november of , an item appeared in the new york times -and it seemed all of us in new york (and elsewhere) who were interested in music, radio, and culture in general, saw it:  “teresa sterne,” it read, “who in years helped build the nonesuch record label into one of the most distinguished and innovative in the recording industry, will be named director of music programming at wnyc radio next month.” the piece went on to promise that ms. sterne, under wnyc’s management, would be creating “new kinds of programming -including some innovative approaches to new music and a series of live music programs.”  this was incredible news. sterne, by this time, was a true cultural legend. she was known not only for those years she’d spent building nonesuch, a remarkably smart, serious, and daring record label —but also for how it had all ended, with her sudden dismissal from that label by elektra, its parent company (whose own parent company was warner communications), two years earlier.
the widely publicized outrage over her termination from nonesuch included passionate letters of protest from the likes of leonard bernstein, elliott carter, aaron copland —only the alphabetical beginning of a long list of notable musicians, critics and journalists who saw her firing as a sharp blow to excellence and diversity in music. but the dismissal stood.  by coincidence, only three weeks before the news of her hiring broke, i had applied for a job as a part-time music-host at wnyc. steve post, a colleague whom i’d met while doing some producing and on-air work at new york’s decidedly non-profit pacifica station, wbai, had come over from there to wnyc, a year before, to do the weekday morning music and news program. “fishko,” he said to me, “they need someone on the weekends -and i think they want a woman.” my day job of longstanding was as a freelance film editor, but i wanted to keep my hand in the radio world. weekends would be perfect. in two interviews with executives at wnyc, i had failed to impress. but now i could feel hopeful about making a connection to ms. sterne, who was a music person, as was i.  soon after her tenure began, i threw together a sample tape and got it to her through a contact on the inside. and she said, simply: yeah, let’s give her a chance. and so it began.  tracey—the name she was called by all friends and colleagues — seemed, immediately, to be a fascinating, controversial character: she was uniquely qualified to do the work at hand, but at the same time she was a fish out of water. she was un-corporate, not inclined to be polite to the young executives upstairs, and not at all enamored of current trends or audience research. for this we dearly loved her, those of us on the air. she cared how the station sounded, how the music connected, how the information about the music surrounded it. her preoccupations seemed, even then, to be of the old school. 
but she was also fiercely modern in her attitude toward the music, unafraid to mix styles and periods, admiring of new music, up on every instrumentalist and conductor and composer, young, old, avant-garde, traditional. and she had her own emphatic and impeccable taste. always the best, that was her motto —whatever it is, if it’s great, or even just extremely good, it will distinguish itself and find its audience, she felt.  tracey sterne, age , rehearsing for a tchaikovsky concerto performance at wnyc in march . (finkelstein/wnyc archive collections) she had developed her ear and her convictions, as it turned out, as a musician, having been a piano prodigy who performed at madison square garden at age . she went on to a debut with the new york philharmonic, gave concerts at lewisohn stadium and the brooklyn museum, and so on. i could relate. though my gifts were not nearly at her level, i, too, had been a dedicated, early pianist and i, too, had looked later for other ways to use what i’d learned at the piano keyboard. and our birthdays were on the same date in march. so, despite being at least a couple of decades apart in age, we bonded.  tracey’s tenure at wnyc was fruitful, though not long. as she had at nonesuch, she embraced ambitious and adventurous music programming. she encouraged some of the on-air personalities to express themselves about the music, to “personalize” the air, to some degree. that was also happening in special programs launched shortly before she arrived as part of a new music initiative, with john schaefer and tim page presenting a range of music way beyond the standard classical fare. and because of tracey’s deep history and contacts in the new york music business, she forged partnerships with music institutions and found ways to work live performances by individual musicians and chamber groups into the programming. 
she helped me carve out a segment on air for something we called great collaborations, a simple and very flexible idea of hers that spread out to every area of music and made a nice framework for some observations about musical style and history. she loved to talk (sometimes to a fault) and brainstorm about ways to enliven the idea of classical music on the radio, not something all that many people were thinking about, then.  but management found her difficult, slow and entirely too perfectionistic. she found management difficult, slow and entirely too superficial. and after a short time, maybe a year, she packed up her sneakers —essential for navigating the unforgiving marble floors in that old place— and left the long, dusty hallways of the municipal building.  after that, i occasionally visited tracey’s house in brooklyn for events which i can only refer to as “musicales.” her residence was on the upper west side, but this family house was treated as a country place, she’d go on the weekends. she’d have people over, they’d play piano, and sing, and it might be william bolcom and joan morris, or some other notables, spending a musical and social afternoon. later, she and i produced a big, new york concert together for the th birthday of domenico scarlatti –which exact date fell on a saturday in . “scarlatti saturday,” we called it, with endless phone-calling, musician-wrangling and fundraising needed for months to get it off the ground.  the concert itself, much of which was also broadcast on wnyc, went on for many hours, with appearances by some of the finest pianists and harpsichordists in town and out, lines all up and down broadway to get into symphony space.  throughout, tracey was her incorruptible self — and a brilliant organizer, writer, thinker, planner, and impossibly driven producing-partner.  
i should make clear, however, that for all her knowledge and perfectionistic, obsessive behavior, she was never the cliche of the driven, lonely careerist -or whatever other cliche you might want to choose. she was a warm, haimish person with friends all over the world, friends made mostly through music. a case in point: the “scarlatti saturday” event was produced by the two of us on a shoestring. and tracey, being tracey, she insisted that we provide full musical and performance information in printed programs, offered free to all audience members, and of course accurate to the last comma. how to assure this? she quite naturally charmed and befriended the printer — who wound up practically donating the costly programs to the event. by the time we were finished she was making him batches of her famous rum balls and he was giving us additional, corrected pages —at no extra charge. it was not a calculated maneuver -it was just how she did things.  you just had to love and respect her for the life force, the intelligence, the excellence and even the temperament she displayed at every turn. sometimes even now, after her death many years ago at from als, i still feel tracey sterne’s high standards hanging over me —in the friendliest possible way. ___________________________________________ sara fishko hosts wnyc’s culture series, fishko files. link to post | language: english heroes work here posted on august , from aotus the national archives is home to an abundance of remarkable records that chronicle and celebrate the rich history of our nation. it is a privilege to be archivist of the united states—to be the custodian of our most treasured documents and the head of an agency with such a unique and rewarding mission. but it is my greatest privilege to work with such an accomplished and dedicated staff—the real treasures of the national archives go home at night. 
today i want to recognize and thank the mission-essential staff of nara’s national personnel records center (nprc). like all nara offices, the nprc closed in late march to protect its workforce and patrons from the spread of the pandemic and comply with local government movement orders. while modern military records are available electronically and can be referenced remotely, the majority of nprc’s holdings and reference activity involve paper records that can be accessed only by on-site staff. furthermore, these records are often needed to support veterans and their families with urgent matters such as medical emergencies, homeless veterans seeking shelter, and funeral services for deceased veterans. concerned about the impact a disruption in service would have on veterans and their families, over staff voluntarily set aside concerns for their personal welfare and regularly reported to the office throughout the period of closure to respond to these types of urgent requests. these exceptional staff were pioneers in the development of alternative work processes to incorporate social distancing and other protective measures to ensure a safe work environment while providing this critical service. national personnel records center (nprc) building in st. louis the center is now in phase one of a gradual re-opening, allowing for additional on-site staff.  the same group that stepped up during the period of closure continues to report to the office and are now joined by additional staff volunteers, enabling them to also respond to requests supporting employment opportunities and home loan guaranty benefits. there are now over staff supporting on-site reference services on a rotational basis. together they have responded to over , requests since the facility closed in late march. more than half of these requests supported funeral honors for deceased veterans. with each passing day we are a day closer to the pandemic being behind us. 
though it may seem far off, there will come a time when covid- is no longer the threat that it is today, and the pandemic of will be discussed in the context of history. when that time comes, the mission-essential staff of nprc will be able to look back with pride and know that during this unprecedented crisis, when their country most needed them, they looked beyond their personal well-being to serve others in the best way they were able. as archivist of the united states, i applaud you for your commitment to the important work of the national archives, and as a navy veteran whose service records are held at nprc, i thank you for your unwavering support to america’s veterans. link to post | language: english contribute to the fsu community covid project posted on august , from illuminations masks sign, contributed by lorraine mon, view this item in the digital library here students, faculty, and alumni! heritage & university archives is collecting stories and experiences from the fsu community during covid- . university life during a pandemic will be studied by future scholars. during this pandemic, we have received requests surrounding the flu pandemic. unfortunately, not many documents describing these experiences survive in the archive.  to create a rich record of life in these unique times we are asking the fsu community to contribute their thoughts, experiences, plans, and photographs to the archive. working from home, contributed by shaundra lee, view this item in the digital library here how did covid- affect your summer? tell us about your plans for fall. how did covid- change your plans for classes? upload photographs of your dorm rooms or your work-from-home setups. if you’d like to see examples of what people have already contributed, please see the collection on diginole. you can add your story to the project here.
link to post | language: english creative fellowship – call for proposals posted on august , from notes for bibliophiles ppl is now accepting proposals for our creative fellowship! we’re looking for an artist working in illustration or two-dimensional artwork to create new work related to the theme of our exhibition, tomboys. view the full call for proposals, including application instructions, here. the application deadline is october , april , *. *this deadline has shifted since we originally posted this call for proposals! the fellowship, and the exhibition & program series, have both been shifted forward by six months due to the coronavirus. updated deadlines and timeline in the call for proposals! link to post | language: english friday art blog: still life in the collection posted on august , from culture on campus welcome to our new regular blog slot, the ‘friday art blog’. we look forward to your continued company over the next weeks and months. you can return to the art collection website here, and search our entire permanent collection here. pears by jack knox (oil on board, ) this week we are taking a look at some of the still life works of art in the permanent collection. ‘still life’ (or ‘nature morte’ as it is also widely known) refers to the depiction of mostly inanimate subject matter. it has been a part of art from the very earliest days, from thousands of years ago in ancient egypt, found also on the walls in st century pompeii, and featured in illuminated medieval manuscripts. during the renaissance, when it began to gain recognition as a genre in its own right, it was adapted for religious purposes. dutch golden age artists in particular, in the early th century, depicted objects which had a symbolic significance. the still life became a moralising meditation on the brevity of life and the vanity of the acquisition of possessions.
but, with urbanization and the rise of a middle class with money to spend, it also became fashionable simply as a celebration of those possessions – in paintings of rare flowers or sumptuous food-laden table tops with expensive silverware and the best china. the still life has remained a popular feature through many modern art movements. artists might use it as an exercise in technique (much cheaper than a live model), as a study in colour, form, or light and shade, or as a meditation in order to express a deeper mood. or indeed all of these. the works collected by the university of stirling art collection over the past fifty years reflect its continuing popularity amongst artists and art connoisseurs alike. bouteille et fruits by henri hayden (lithograph, / , ) in the modern era the still life featured in the post-impressionist art of van gogh, cezanne and picasso. henri hayden trained in warsaw, but moved to paris in where cezanne and cubism were influences. from he rejected this aesthetic and developed a more figurative manner, but later in life there were signs of a return to a sub-cubist mannerism in his work, and as a result the landscapes and still lifes of his last years became both more simplified and more definitely composed than the previous period, with an elegant calligraphy. they combine a new richness of colour with lyrical melancholy. meditation and purity of vision mark the painter’s last years. black lace by anne redpath (gouache, ) anne redpath is best known for her still lifes and interiors, often with added textural interest, and also with the slightly forward-tilted table top, of which this painting is a good example. although this work is largely monochrome it retains the fascination the artist had with fabric and textiles – the depiction of the lace is enhanced by the restrained palette. untitled still life by euan heng (linocut, / , ) while euan heng’s work is contemporary in practice, his imagery is not always contemporary in origin.
he has long been influenced by italian iconography, medieval paintings and frescoes. origin of a rose by ceri richards (lithograph, / , ) in ceri richards’ work there is a constant recurrence of visual symbols and motifs always associated with the mythic cycles of nature and life. these symbols include rock formations, plant forms, sun, moon and seed-pods, leaf and flower. these themes refer to the cycle of human life and its transience within the landscape of earth. still life, summer by elizabeth blackadder (oil on canvas, ) this is a typical example of one of elizabeth blackadder’s ‘flattened’ still life paintings, with no perspective. works such as this retain the form of the table, with the top raised to give the fullest view. broken cast by david donaldson (oil on canvas, ) david donaldson was well known for his still lifes and landscape paintings as well as literary, biblical and allegorical subjects. flowers for fanny by william mactaggart (oil on board, ) william mactaggart typically painted landscapes, seascapes and still lifes featuring vases of flowers. these flowers, for his wife, fanny aavatsmark, are unusual for not being poppies, his most commonly painted flower. cake by fiona watson (digital print, / , ) we end this blog post with one of the most popular still lifes in the collection. this depiction of scottish classic the tunnock’s teacake is a modern take on the still life. it is a firm favourite whenever it is on display. image by julie howden link to post | language: english solar energy: a brief look back posted on august , from illuminations in the early ’s the united states was in the midst of an energy crisis. massive oil shortages and high prices made it clear that alternative ideas for energy production were needed and solar power was a clear front runner. the origins of the solar cell in the united states date back to inventor charles fritts in the ’s, and the first attempts at harvesting solar energy for homes, to the late ’s.
in , the state of florida put its name in the ring to become the host of the national solar energy research institute. site proposal for the national solar energy research institute. claude pepper papers s. b. f. with potential build sites in miami and cape canaveral, the latter possessing the added benefit of proximity to nasa, the florida solar energy task force, led by robert nabors and endorsed by representative pepper, felt confident. the state made it to the final rounds of the search before the final location of golden, colorado was settled upon, which would open in . around this same time, however ( ), the florida solar energy center was established at the university of central florida. the claude pepper papers contain a wealth of information on florida’s efforts in the solar energy arena from the onset of the energy crisis, to the late ’s. carbon copy of correspondence between claude pepper and robert l. nabors regarding the cape canaveral proposed site for the national solar research institute. claude pepper papers s. b. f. earlier this year, “tallahassee solar ii”, a new solar energy farm, began operating in florida’s capital city.  located near the tallahassee international airport, it provides electricity for more than , homes in the leon county area. with the steady gains that the state of florida continues to make in the area of solar energy expansion, it gets closer to fully realizing its nickname, “the sunshine state.” link to post | language: english (c)istory lesson posted on august , from illuminations our next submission is from rachel duke, our rare books librarian, who has been with special collections for two years. this project was primarily geared towards full-time faculty and staff, so i chose to highlight her contribution to see what a full-time faculty’s experience would be like looking through the catalog. frontispiece and title page, salome, .
image from https://collection.cooperhewitt.org/objects/ / the item she chose was salome, originally written in french by oscar wilde, then translated into english, as her object. while this book does not explicitly identify as a “queer text,” wilde has become canonized in queer historical literature. in the first edition of the book, there is even a dedication to his lover, lord alfred bruce douglas, who helped with the translation. while there are documented historical examples of what we would refer to today as “queerness,” (queer meaning non-straight) there is still no demarcation of his queerness anywhere in the catalog record. although the author is not necessarily unpacking his own queer experiences in the text, “both [salome’s] author and its legacy participate strongly in queer history” as duke states in her submission.  oscar wilde and lord alfred bruce douglas even though wilde was in a queer relationship with lord alfred bruce douglas, and has been accepted into the queer canon, why doesn’t his catalog record reflect that history? well, a few factors come into play. one of the main ones is an aversion to retroactively labeling historical figures. since we cannot confirm which modern label would fit wilde, we can’t necessarily outright label him as gay. how would a queer researcher like me go about finding authors and artists from the past who are connected with queer history? it is important to acknowledge lgbtq+ erasure when discussing this topic. since the lgbtq+ community has historically been marginalized, documentation of queerness is hard to come by because: people did not collect, and even actively erased, queer and trans histories. lgbtq+ history has been passed down primarily as an oral tradition.  historically, we cannot confirm which labels people would have identified with. language and social conventions change over time. 
so while we view and know someone to be queer, since it is not in official documentation we have no “proof.” on the other hand, in some cultures, gay relations were socially acceptable. for example, in the middle ages, there was a legislatively approved form of same-sex marriage, known as affrèrement. this example is clearly labeled as *gay* in related library-based description because it was codified that way in the historical record. by contrast, shakespeare’s sonnets, which (arguably) use queer motifs and themes, are not labeled as “queer” or “gay.” does queer content mean we retroactively label the author queer? does the implication of queerness mean we should make the text discoverable under queer search terms? cartoon depicting oscar wilde’s visit to san francisco. by george frederick keller – the wasp, march , . personally, i see both sides. as someone who is queer, i would not want a random person trying to retroactively label me as something i don’t identify with. on the other hand, as a queer researcher, i find it vital to have access to that information. although they might not have been seen as queer in their time period, their experiences speak to queer history. identities and people will change, which is completely normal, but as a group that has experienced erasure of their history, it is important to acknowledge all examples of historical queerness as a proof that lgbtq+ individuals have existed throughout time. how do we responsibly and ethically go about making historical queerness discoverable in our finding aids and catalogs? click here to see some more historical figures you might not have known were lgbtq+. link to post | language: english about archivesblogs: archivesblogs syndicates content from weblogs about archives and archival issues and then makes the content available in a central location in a variety of formats. more info.
conal tuohy's blog – the blog of a digital humanities software developer analysis & policy online notes for my open repositories conference presentation. i will edit this post later to flesh it out into a proper blog post. follow along at: conaltuohy.com/blog/analysis-policy-online/ continue reading analysis & policy online posted on june , june , tags data mining, linked data, lodlam, oai-pmh, sparql, xml, xproc, xproc-z leave a comment on analysis & policy online a tool for web api harvesting a medieval man harvesting metadata from a medieval web api as stumbles to an end, i’ve put in a few days’ work on my new project oceania, which is to be a linked data service for cultural heritage in this part of the world. part of this project involves harvesting data from cultural institutions which make their collections available via so-called “web apis”.
there are some very standard ways to publish data, such as oai-pmh, opensearch, sru, rss, etc, but many cultural heritage institutions instead offer custom-built apis that work in their own peculiar way, which means that you need to put in a certain amount of effort in learning each api and dealing with its specific requirements. so i’ve turned to the problem of how to deal with these apis in the most generic way possible, and written a program that can handle a lot of what is common in most web apis, and can be easily configured to understand the specifics of particular apis. continue reading a tool for web api harvesting posted on december , december , tags oceania.digital, rest, trove, web api, xml, xpath leave a comment on a tool for web api harvesting oceania i am really excited to have begun my latest project: a linked open data service for online cultural heritage from new zealand and australia, and eventually, i hope, from our other neighbours. i have called the service “oceania.digital”. the big idea of oceania.digital is to pull together threads from a number of different “cultural” data sources and weave them together into a single web of data which people can use to tell a huge number of stories. there are a number of different aspects to the project, and a corresponding number of stages to go through… continue reading oceania posted on december , december , tags linked data, lodlam, oceania.digital comment on oceania australian society of archivists conference #asalinks last week i participated in the conference of the australian society of archivists, in parramatta. #asalinks poster i was very impressed by the programme and the discussion. i thought i’d jot down a few notes here about just a few of the presentations that were most closely related to my own work. the presentations were all recorded, and as the asa’s youtube channel is updated with newly edited videos, i’ll be editing this post to include those videos.
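Returning to the web-API harvesting tool described above: the core idea, a harvester whose generic paging logic is shared and whose per-API peculiarities live in a small configuration, can be sketched in a few lines of Python. The config keys and the page-based paging scheme here are illustrative assumptions for the sketch, not the actual tool's interface (which was built with XProc).

```python
import json
import urllib.parse


def build_request_url(config, page):
    """Build the URL for one page of results from a per-API config dict."""
    params = dict(config.get("fixed_params", {}))
    params[config["page_param"]] = str(page)
    return config["base_url"] + "?" + urllib.parse.urlencode(params)


def harvest(config, fetch):
    """Page through an API, yielding records until a page comes back empty.

    `fetch` is any callable that takes a URL and returns the response body
    as a string; injecting it keeps the harvester independent of any
    particular HTTP library and makes the paging logic easy to test.
    """
    page = config.get("start_page", 1)
    while True:
        body = fetch(build_request_url(config, page))
        records = json.loads(body).get(config["records_key"], [])
        if not records:
            return
        yield from records
        page += 1
```

Under this scheme, each cultural-heritage API would be described by a small config dict (`base_url`, `page_param`, `records_key`, any fixed query parameters), and only the genuinely peculiar parts of an API would need custom code.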
continue reading australian society of archivists conference #asalinks posted on october , december , tags archives, data mining, linked data, lodlam, museum, transcription, visualization leave a comment on australian society of archivists conference #asalinks linked open data visualisation at #glamvr on thursday last week i flew to perth, in western australia, to speak at an event at curtin university on visualisation of cultural heritage. erik champion, professor of cultural visualisation, who organised the event, had asked me to talk about digital heritage collections and linked open data (“lod”). the one-day event was entitled “glam vr: talks on digital heritage, scholarly making & experiential media”, and combined presentations and workshops on cultural heritage data (glam = galleries, libraries, archives, and museums) with advanced visualisation technology (vr = virtual reality). the venue was the curtin hive (hub for immersive visualisation and eresearch); a really impressive visualisation facility at curtin university, with huge screens and panoramic and d displays. there were about people in attendance, and there would have been over a dozen different presenters, covering a lot of different topics, though with common threads linking them together. i really enjoyed the experience, and learned a lot. i won’t go into the detail of the other presentations, here, but quite a few people were live-tweeting, and i’ve collected most of the twitter stream from the day into a storify story, which is well worth a read and following up.
continue reading linked open data visualisation at #glamvr posted on august , september , tags linked data, lodlam, lodlive, sparql, visualization comment on linked open data visualisation at #glamvr visualizing government archives through linked data tonight i’m knocking back a gin and tonic to celebrate finishing a piece of software development for my client the public record office victoria; the archives of the government of the australian state of victoria. the work, which will go live in a couple of weeks, was an update to a browser-based visualization tool which we first set up last year. in response to user testing, we made some changes to improve the visualization’s usability. it certainly looks a lot clearer than it did, and the addition of some online help makes it a bit more accessible for first-time users. the visualization now looks like this (here showing the entire dataset, unfiltered, which is not actually that useful, though it is quite pretty): continue reading visualizing government archives through linked data posted on april , september , tags archives, cidoc-crm, linked data, lodlam, oai-pmh, sparql, visualization comments on visualizing government archives through linked data taking control of an uncontrolled vocabulary a couple of days ago, dan mccreary tweeted: working on new ideas for nosql metadata management for a talk next week. focus on #nosql, documents, graphs and #skos. any suggestions? — dan mccreary (@dmccreary) november , it reminded me of some work i had done a couple of years ago for a project which was at the time based on linked data, but which later switched away from that platform, leaving various bits of rdf-based work orphaned. one particular piece which sprung to mind was a tool for dealing with vocabularies. 
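The basic move behind a vocabulary tool of this kind can be suggested in miniature: take a flat, uncontrolled list of terms and mint a `skos:Concept` for each distinct one. Representing triples as plain tuples and deriving URIs from slugs of the labels are simplifications for this sketch, not how the original tool (which used SKOS with SPARQL and XForms) worked.

```python
import urllib.parse

SKOS = "http://www.w3.org/2004/02/skos/core#"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"


def mint_concepts(terms, base_uri):
    """Mint one skos:Concept per distinct term, returned as (s, p, o) triples."""
    triples = []
    for term in sorted(set(terms)):
        # derive a URI slug from the label; real minting schemes are cleverer
        slug = urllib.parse.quote(term.lower().replace(" ", "-"))
        uri = base_uri + slug
        triples.append((uri, RDF_TYPE, SKOS + "Concept"))
        triples.append((uri, SKOS + "prefLabel", term))
    return triples
```

Once terms are concepts with URIs, the vocabulary can be curated independently of the records that use it: labels can be corrected, duplicates merged, and broader/narrower links added without touching the source data.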
whether it’s useful for dan’s talk i don’t know, but i thought i would dig it out and blog a little about it in case it’s of interest more generally to people working in linked open data in libraries, archives and museums (lodlam). continue reading taking control of an uncontrolled vocabulary posted on november , november , tags lodlam, skos, sparql, vocabularies, xforms comment on taking control of an uncontrolled vocabulary bridging the conceptual gap: museum victoria’s collections api and the cidoc conceptual reference model a museum victoria lod graph about a teacup, shown using the lodlive visualizer.this is the third in a series of posts about an experimental linked open data (lod) publication based on the web api of museum victoria. the first post gave an introduction and overview of the architecture of the publication software, and the second dealt quite specifically with how names and identifiers work in the lod publication software. in this post i’ll cover how the publication software takes the data published by museum victoria’s api and reshapes it to fit a common conceptual model for museum data, the “conceptual reference model” published by the documentation committee of the internal council of museums. i’m not going to exhaustively describe the translation process (you can read the source code if you want the full story), but i’ll include examples to illustrate the typical issues that arise in such a translation. continue reading bridging the conceptual gap: museum victoria’s collections api and the cidoc conceptual reference model posted on october , june , tags cidoc-crm, linked data, lodlam, lodlive, museum, web api comments on bridging the conceptual gap: museum victoria’s collections api and the cidoc conceptual reference model names in the museum my last blog post described an experimental linked open data service i created, underpinned by museum victoria’s collection api. 
Mainly, I described the LOD service's general framework, and explained how it worked in terms of data flow. To recap briefly: the LOD service receives a request from a browser and in turn translates that request into one or more requests to the Museum Victoria API, interprets the result in terms of the CIDOC CRM, and returns the result to the browser. The LOD service does not have any data storage of its own; it's purely an intermediary or proxy, like one of those real-time interpreters at the United Nations. I call this technique a "linked data proxy". I have a couple more blog posts to write about the experience. In this post, I'm going to write about how the linked data proxy deals with the issue of naming the various things which the museum's database contains.

Continue reading "Names in the museum". Posted October, April. Tags: CIDOC-CRM, linked data, LODLAM, museum.

Linked Open Data built from a custom web API

I've spent a bit of time just recently poking at the new web API of Museum Victoria Collections, and making a Linked Open Data service based on their API. I'm writing this up as an example of one way (a relatively easy way) to publish Linked Data off the back of some existing API. I hope that some other libraries, archives, and museums with their own APIs will adopt this approach and start publishing their data in a standard Linked Data style, so it can be linked up with the wider web of data.

Continue reading "Linked Open Data built from a custom web API". Posted September, April. Tags: CIDOC-CRM, JSON, linked data, LODLAM, museum, proxy, REST, web API, XProc-Z.

Proudly powered by WordPress.

Scriptio Continua

Thoughts on software development, digital humanities, the ancient world, and whatever else crosses my radar. All original content herein is licensed under a Creative Commons Attribution license.
Friday, June

Reminder

In the midst of the ongoing disaster that has befallen the country, I had a reminder recently that healthcare in the USA is still a wreck. When I had my episode of food poisoning (or whatever it was) in Michigan recently, my concerned wife took me to an urgent care. We of course had to pay out of pocket for the service, as we were way outside our network (the group of providers who have agreements with our insurance company). I submitted the paperwork to our insurance company when we got home (Duke uses Aetna), to see if they would reimburse some of that amount. Nope. Rejected, because we didn't call them first to get approval, which is not something you think of at a time like that. Thank God I waved off the responders when my daughter called them after I first got sick and almost passed out. We might have been out thousands of dollars. And this is with really first-class insurance, mind you. I have great insurance through Duke. You can't get much better in this country. People from countries with real healthcare systems find this kind of thing shocking, but it's par for the course here. And our government is actively trying to make it worse. It's just one more bit of dreadful in a sea's worth, but it's worth remembering that the disastrous state of healthcare in the US affects all of us, even the lucky ones with insurance through our jobs. And again, our government is trying its best to make it worse. You can be quite sure it will be worse for everyone.

Posted by Unknown. No comments.

Monday, May

Experiencing technical difficulties

I've been struggling with a case of burnout for a while now. It's a common problem in programming, where we have to maintain a fairly high level of creative energy all the time, and unlike my colleagues in academia or the library, I'm not eligible for research leave or sabbaticals.
Vacation is the only opportunity for recharging my creative batteries, but that's hard too when there are a lot of tasks that can't wait. I have taken days off to work before, but that just seems stupid. So I grind away, hoping the fog will lift. A few weeks ago, the kids and I joined my wife on a work trip to Michigan. It was supposed to be a mini-vacation for us, but I got violently ill after lunch one day, during a UMich campus tour. It sucked about as much as it possibly could. My marvelous elder daughter dealt with the situation handily, but of course we ended up missing most of the tour, and I ended up in bed the rest of the day, barring the occasional run to the bathroom. My world narrowed down to a point. I was quite happy to lie there, not thinking. I could have read or watched television, but I didn't want to. Trying the occasional sip of Gatorade was as much as I felt like. For someone who normally craves input all the time, it was very peaceful. It revealed to me again what a knife-edge my consciousness really sits on. It would take very little to knock it off the shelf to shatter on the ground. My father has Alzheimer's disease, and this has already happened to him. Where once there was an acutely perceptive and inquiring mind, there remains only his personality, which seems in his case to be the last thing to go. I try to spend time with him at least once or twice a week, both to take a little pressure off my mother and to check on their general well-being. We take walks. Physically, he's in great shape for a man of his age. And there are still flashes of the person he was. He can't really hold a conversation, and will ask the same questions over and over again, my answers slipping away as soon as they're heard, but as we walked the other day, accompanied by loud birdsong, he piped up "We hear you!" to the birds, his sense of humor suddenly back on the surface.
We are lucky that my parents have fantastic insurance and a good retirement plan, courtesy of an employer, the Episcopal Church, that cares about its people beyond the period of their usefulness.

Burnout is a species of depression, really. It is the same sort of thing as writer's block. Your motivation simply falls out from under you. You know what needs to be done, but it's hard to summon the energy to do it. The current political climate doesn't help, as we careen towards the cliff's edge like the last ride of Thelma and Louise, having (I hope metaphorically, but probably not for many of us) chosen death over a constrained future, for the sake of poking authority in the eye. My children will suffer because the Baby Boomers have decided to try to take it all with them, because as a society we've fallen in love with death. All we can do, really, is try to arm the kids against the hard times to come, their country having chosen war, terror, and oppression in preference to the idea that someone undeserving might receive any benefit from society. We Gen-Xers at least had some opportunity to get a foot on the ladder. Their generation will face a much more tightly constrained set of choices, with a much bigger downside if they make the wrong ones. I don't write much about my children online, because we want to keep them as much as possible out of the view of the social media panopticon until they're mature enough to make their own decisions about confronting it. At least they may have a chance to start their lives without the neoliberal machine knowing everything about them. They won't have anything like the support I had, and when we've dismantled our brief gesture towards health care as a human right, and insurance decisions are made by AIs that know everything about you going back to your childhood, things are going to be quite difficult. A symptom, I think, of my burnout is my addiction to science fiction and urban fantasy novels.
They give me a chance to check out from the real world for a while, but I think it's become a real addiction rather than an escape valve. Our society rolls ever forward toward what promises to be an actual dystopia with all the trappings: oppressed, perhaps enslaved underclasses, policed by unaccountable quasi-military forces; hyper-wealthy elites living in walled gardens with the latest technology; violent and unpredictable weather; massive unemployment and social unrest; food and water shortages; and ubiquitous surveillance. Escapism increasingly seems unwise. Some of that future can be averted if we choose not to be selfish and paranoid, to stop oppressing our fellow citizens and to stop demonizing immigrants, to put technology at the service of bettering society and surviving the now-inevitable changes to our climate. But we are not making good choices. Massive unemployment is a few technological innovations away. It doesn't have to be a disaster (indeed, it could lead to a renaissance), but I think we're too set in our thinking to avoid the disaster scenario. The unemployed are lazy, after all, our culture tells us; they must deserve the bad things that have happened to them. Our institutions are set up to push them back towards work by curtailing their benefits. But it could never happen to me, could it? And that comes back around to why I try to grind my way through burnout rather than taking time to recover from it. I live in an "at will" state. I could, in theory, be fired because my boss saw an ugly dog on the way in to work. That wouldn't happen, I hasten to say; I work with wonderful, supportive people. But there are no guarantees to be had. People can be relied on, but institutions that have not been explicitly set up to support us cannot, and institutional structures and rules tend to win in the end. Best to keep at it and hope the spark comes back. It usually does.
Posted by Unknown. No comments.

Monday, February

Thank you

Back in the day, Joel Spolsky had a very influential tech blog, and one of the pieces he wrote described the kind of software developer he liked to hire: one who was "smart, and gets things done." He later turned it into a book (http://www.amazon.com/smart-gets-things-done-technical/dp/ ). Steve Yegge, who was also a very influential blogger in the oughties, wrote a follow-up in which he tackled the problem of how you find and hire developers who are smarter than you. Given the handicaps of human psychology, how do you even recognize what you're looking at? His rubric for identifying these people (flipping Spolsky's) was "done, and gets things smart". That is, this legendary "10x" developer was the sort who wouldn't just get done the stuff that needed to be done, but would actually anticipate what needed to be done. When you asked them to add a new feature, they'd respond that it was already done, or that they'd just need a few minutes, because they'd built things in such a way that adding the feature you'd just thought of would be trivial. They wouldn't just finish projects, they'd make everything better: they'd create code that other developers could easily build upon. Essentially, they'd make everyone around them more effective as well. I've been thinking a lot about this over the last few months, as I've worked on finishing a project started by Sebastian Rahtz: integrating support for the new "Pure ODD" syntax into the TEI Stylesheets. The idea is to have a TEI syntax for describing the content an element can have, rather than falling back on embedded RELAX NG. Lou Burnard has written about it here: https://jtei.revues.org/ .
Sebastian wrote the XSLT stylesheets and the supporting infrastructure which are both the reference implementation for publishing TEI and the primary mechanism by which the TEI Guidelines themselves are published. And they are the basis of TEI schema generation as well. So if you use TEI at all, you have Sebastian to thank. Picking up after Sebastian's retirement last year has been a tough job. It was immediately obvious to me just how much he had done, and had been doing for the TEI all along. When Gabriel Bodard described to me how the TEI Council worked, after I was elected for the first time, he said something like: "There'll be a bunch of people arguing about how to implement a feature, or even whether it can be done, and then Sebastian will pipe up from the corner and say 'Oh, I just did it while you were talking.'" You only have to look at the contributors pages for both the TEI and the Stylesheets to see that Sebastian was indeed operating at a 10x level. Quietly, without making any fuss about it, he has been making the TEI work for many years. The contributions of software developers are often easily overlooked. We only notice when things don't work, not when everything goes smoothly, because that's what's supposed to happen, isn't it? Even in digital humanities, which you'd expect to be self-aware about this sort of thing, the intellectual contributions of software developers can often be swept under the rug. So I want to go on record, shouting a loud thank you to Sebastian, for doing so much and for making the TEI infrastructure smart.

Update: I heard the sad news last night that Sebastian passed away yesterday, on the Ides of March. We are much diminished by his loss.

Posted by Unknown.

Friday, October

DH data talk

Last night I was on a panel organized by Duke Libraries' Digital Scholarship group.
The panelists each gave some brief remarks, and then we had what I thought was a really productive and interesting discussion. The following are my own remarks, with links to my slides (opens a new tab). In my notes, //slide// means click forward (not always to a new slide, maybe just a fragment).

This is me, and I work //slide// for this outfit. I'm going to talk just a little about an old project and a new one, and not really give any details about either, but surface a couple of problems that I hope will be fodder for discussion. //slide// The old project, papyri.info, publishes all kinds of data about ancient documents mostly written in ink on papyrus. The new one, Integrating Digital Epigraphies (IDEs), is about doing much the same thing for ancient documents mostly incised on stone. If I had to characterize (most of) the work I'm doing right now, I'd say I'm working on detecting, and making machine-actionable, the scholarly links and networks embedded in a variety of related projects, with data sources including plain text, XML, relational databases, web services, and images. These encompass critical editions of texts (often in large corpora), bibliography, citations in books and articles, images posted on Flickr, and databases of texts. You could think of what I'm doing as recognizing patterns and then converting those into actual links; building a scaffold for the digital representation of networks of scholarship. This is hard work. //slide// It's hard because while superficial patterns are easy to detect, //slide// without access to the system of thought underlying those patterns (and computers can't do that yet, maybe never), those patterns are really just proxies kicked up by the underlying system. They don't themselves have meaning, but they're all you have to hold on to.
//slide// Our brains (with some prior training) are very good at navigating this kind of mess, but digital systems require explicit instructions //slide// (though, granted, you can sometimes use machine learning techniques to generate those). When I say I'm working on making scholarly networks machine-actionable, I'm talking about encoding, as digital relations, the graph of references embedded in these books, articles, and corpora, and in the metadata of digital images. There are various ways one might do this, and the one we're most deeply into right now is called //slide// RDF. RDF models knowledge as a set of simple statements in the form subject, predicate, object. //slide// So "A cites B", for example. RDF is a web technology, so all three of these elements may be URIs that you could open in a web browser, //slide// and if you use URIs in RDF, then the object of one statement can be the subject of another, and so on. //slide// So you can use it to model logical chains of knowledge. Now notice that these statements are axioms. You can't qualify them, at least not in a fine-grained way. So this works great in a closed system (papyri.info), where we get to decide what the facts are; it's going to be much more problematic in IDEs, where we'll be coordinating data from at least half a dozen partners. Partners who may not agree on everything. //slide// What I've got is the same problem from a different angle: I need to model a big pile of opinion, but all I have to do it with are facts. Part of the solution to these problems has to be about learning how to make the insertion of machine-actionable links and facts (or at least assertions) part of (that is, a side effect of) the normal processes of resource creation and curation. But it also has to be about building systems that can cope with ambiguity and opinion.
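The subject-predicate-object idea above is simple enough to sketch without any RDF tooling at all. Here is a toy Python model (the document URIs are invented for the example) showing how statements chain when the object of one triple becomes the subject of another:

```python
# Knowledge as a set of (subject, predicate, object) statements.
# Because docA cites docB, and docB cites docC, following "cites"
# transitively from docA reaches both B and C.

triples = {
    ("http://example.org/docA", "cites", "http://example.org/docB"),
    ("http://example.org/docB", "cites", "http://example.org/docC"),
}

def objects_of(subject, predicate, graph):
    """All objects o such that (subject, predicate, o) is asserted."""
    return {o for (s, p, o) in graph if s == subject and p == predicate}

def citation_chain(start, graph):
    """Follow 'cites' links transitively from a starting document."""
    seen, frontier = set(), {start}
    while frontier:
        nxt = set()
        for s in frontier:
            nxt |= objects_of(s, "cites", graph) - seen
        seen |= nxt
        frontier = nxt
    return seen
```

Note that each triple is a bare assertion: there is nowhere in this structure to record who claims that docA cites docB, or how confident they are, which is exactly the facts-versus-opinion problem described above.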
Posted by Unknown. No comments.

Wednesday, September

Outside the tent

Yesterday was a bad day. I'm chasing a messed-up software problem whose main symptom is the application consuming all available memory and then falling over without leaving a useful stacktrace. Steve Ramsay quit Twitter. A colleague I have huge respect for is leaving a project that's foundational, and the project is going to be parked because of it (that and the lack of funding). This all sucks. As I said on Twitter, it feels like we've hit a tipping point. I think DH has moved on and left a bunch of us behind. I have to start this off by saying that I really have nothing to complain about, even if some of this sounds like whining. I love my job and my colleagues, and I'm doing my best to get over being a member of a Carolina family working at Duke :-). I'm also thinking about these things a lot in the run-up to Speaking in Code. For some time now I've been feeling uneasy about how I should present myself and my work. A few years ago, I'd have confidently said I work on digital humanities projects. Before that, I was into humanities computing. But now? I'm not sure what I do is really DH any more. I suspect the DH community is no longer interested in the same things as people like me, who write software to enable humanistic inquiry and also like to think (and when possible write and teach) about how that software instantiates ideas about the data involved in humanistic inquiry. On one level, this is fine. Time, and academic fashion, marches on. It is a little embarrassing, though, given that I'm a "senior digital humanities programmer". Moreover, the field of "programming" daily spews forth fresh examples of unbelievable, poisonous misogyny, and seems largely incapable of recognizing what a shitty situation it's in because of it. The tech industry is in moral crisis.
"We live in a dystopian, panoptic geek revenge fantasy infested by absurd beliefs in meritocracy, full of entrenched inequalities, focused on white upper-class problems, inherently hostile to minorities, rife with blatant sexism and generally incapable of reaching anyone beyond early adopter audiences of people just like us." (from https://medium.com/about-work/f ccd a c )

I think communities who fight against this kind of oppression, like #dhpoco, for example, are where DH is going. But while I completely support them and think they're doing good, important work, I feel a great lack of confidence that I can participate in any meaningful way in those conversations, both because of the professional baggage I bring with me and because they're doing a different kind of DH. I don't really see a category for the kinds of things I write about on DHThis or DHNow, for example.

"If you want to be part of a community that helps define #digitalhumanities please join and promote #dhthis today! http://t.co/vtwjtgqbgr" — Adeline Koh (@adelinekoh), September

This is great stuff, but it's also not going to be a venue for me wittering on about digital classics or text encoding. It could be my impostor syndrome kicking in, but I really doubt they're interested. It does seem like a side effect of the shift toward a more theoretical DH is an environment less welcoming to participation by "staff". It's paradoxical that the opening up of DH also comes with a reversion to the old academic hierarchies. I'm constantly amazed at how resilient human institutions are. If digital humanities isn't really what I do, and if "programmer" comes with a load of toxic slime attached to it, perhaps "senior" is all I have left. Of course, in programmer terms, "senior" doesn't really mean "has many years of experience"; it's code for "actually knows how to program". You see ads for senior programmers with only a few years of experience all the time. By that standard, I'm not senior, I'm ancient.
Job titles are something that come attached to staff, and they are terrible, constricting things. I don't think that what I and many of my colleagues do has become useless, even if we no longer fit the DH label. It still seems important to do that work. Maybe we're back to doing humanities computing. I do think we're mostly better off because digital humanities happened, but maybe we have to say goodbye to it as it heads off to new horizons, and get back to doing the hard work that needs to be done in a humanities that's at least more open to digital approaches than it used to be. What I'm left wondering is where the place of the developer (and, for that matter, other DH collaborators) is in DH, if DH is now the establishment and looks structurally pretty much like the old establishment did. Is digital humanities development a commodity? Are DH developers interchangeable? Should we be? Programming in industry is typically regarded as a commodity. Programmers are in a weird position, both providers of indispensable value and held at arm's length. The problem businesses have is how to harness a resource that is essentially creative, and therefore very subject to human inconsistency. It's hard to find good programmers, and hard to filter for programming talent. Programmers get burned out, bored, pissed off, distracted. Best to keep a big pool of them and rotate them out when they become unreliable or too expensive, or replace them when they leave. Comparisons to graduate students and adjunct faculty may not escape the reader, though at least programmers are usually better compensated. Academia has a slightly different programmer problem: it's really hard to find good DH programmers, and staffing up just for a project may be completely impossible. The only solution I see is to treat it as analogous to hiring faculty: you have to identify good people, recruit them, and train the people you'd want to hire.
You also have to give them a fair amount of autonomy, to deal with them as people rather than commodities. What you can't count on doing is retaining them as contingent labor on soft money. But here we're back around to the faculty/staff problem: the institutions mostly only deal with tenure-track faculty in this way. Libraries seem to be the only academic institutions capable of addressing the problem at all. But they're also the institutions most likely to come under financial pressure, and they have other things to worry about. It's not fair to expect them to come riding over the hill. The ideal situation would be if there existed positions to which experts could be recruited who had sufficient autonomy to deal with faculty on their own level (this essentially means being able to say 'no'), who might or might not have advanced degrees, who might teach and/or publish, but wouldn't have either as their primary focus. They might be librarians, or research faculty, or something else we haven't named yet. All of this would cost money, though. What's the alternative? Outsourcing? Be prepared to spend all your grant money paying industry rates. Grad students? Many are very talented and have the right skills, but will they be willing to risk sacrificing the chance of a faculty career by dedicating themselves to your project? Will your project be maintainable when they move on? Mia Ridge, in her Twitter feed, reminds me that in England there exist people called "research software engineers".

"Notes from #RSE breakout discussions appearing at https://t.co/pd itlbb t - lots of resonances with #musetech #codespeak" — Mia (@mia_out), September

There are worse labels, but it sounds like they have exactly the same set of problems I'm talking about here.
Posted by Unknown.

Monday, July

Missing DH

I'm watching the tweets from #dh starting to roll in, and feeling kind of sad (and, let's be honest, left out) not to be there. Conference attendance has been hard the last few years, because I didn't have any travel funding in my old job. So I've tended only to go to conferences close to home, or where I could get grant funding to pay for them. It's also quite hard sometimes to decide what conferences to go to. On a self-funded basis, I can manage about one a year. So deciding which one can be hard. I'm a technologist working in a library, on digital humanities projects, with a focus on markup technologies and on ancient studies. So my list is something like:

- DH
- JCDL
- one of many language-focused conferences
- the TEI annual meeting
- Balisage

I could also make a case for conferences in my home discipline, classics, but I haven't been to the APA annual meeting in over a decade. Now that the Digital Classics Association exists, that might change. I tend to cycle through the list above. Last year I went to the TEI meeting; the year before, I went to Clojure/conj and DH (because a grant paid). The year before that, I went to Balisage, which is an absolutely fabulous conference if you're a markup geek like me (seriously, go if you get the chance). DH is a nice compromise, though, because you get a bit of everything. It's also attended by a whole bunch of my friends, and people I'd very much like to become friends with. I didn't bother submitting a proposal for this year, because my job situation was very much up in the air at the time, and indeed, I started working at DC just a couple of weeks ago. DH would have been unfeasible for all kinds of reasons, but I'm still bummed out not to be there. Have a great time, y'all. I'll be following from a distance.
Posted by Unknown. No comments.

Wednesday, February

First contact

It seems like I've had many versions of this conversation in the last few months, as new projects begin to ramp up:

Client: I want to do something cool to publish my work.
Developer: OK. Tell me what you'd like to do.
Client: Um. I need you to tell me what's possible, so I can tell you what I want.
Developer: We can do pretty much anything. I need you to tell me what you want, so I can figure out how to make it.

Almost every introductory meeting with a client/customer starts out this way. There's a kind of negotiation period where we figure out how to speak each other's language, often by drawing crude pictures. We look at things and decide how to describe them in a way we both understand. We wave our hands in the air, and sometimes get annoyed that the other person is being so dense. It's crucially important not to short-circuit this process, though. You and your client likely have vastly different understandings of what can be done, how hard it is to do what needs to be done, and even whether it's worth doing. The initial negotiation sets the tone for the rest of the relationship. If you hurry through it, and let things progress while there are still major misunderstandings in the air, bad things will certainly happen. Like:

Client: This isn't what I wanted at all!
Developer: But I built exactly what you asked for!
Posted by Unknown. No comments.

Coyle's Information

Comments on the digital age, which, as we all know, is 1984.

Thursday, August

Phil Agre and the gendered internet

There is an article today in the Washington Post about the odd disappearance of a computer science professor named Phil Agre. The article, entitled "He predicted the dark side of the Internet 30 years ago. Why did no one listen?", reminded me of a post by Agre written after a meeting of Computer Professionals for Social Responsibility. Although it annoyed me at the time, a talk that I gave there triggered in him thoughts of gender issues; as a woman I was very much in the minority at the meeting, but that was not the topic of my talk. But my talk also gave Agre thoughts about the missing humanity on the web. I had a couple of primary concerns, perhaps not perfectly laid out, in my talk, "Access, Not Just Wires". I was concerned about what was driving the development of the internet, and the lack of a service ethos regarding society. Access at the time was talked about in terms of routers, modems, and T-1 lines.
There was no thought to organizing or preserving online information. There was no concept of "equal access." There was no thought to how we would democratize the web such that you didn't need a degree in computer science to find what you needed. I was also very concerned about the commercialization of information. I was frustrated watching the hype as information was touted as the product of the information age. (This was before we learned that "you are the product, not the user" in this environment.) Seen from the tattered clothes and barefoot world of libraries, the money thrown at the jumble of un-curated and unorganized "information" on the web was heartbreaking. I said:

"It's clear to me that the information highway isn't much about information. It's about trying to find a new basis for our economy. I'm pretty sure I'm not going to like the way information is treated in that economy. We know what kind of information sells, and what doesn't. So I see our future as being a mix of highly expensive economic reports and cheap online versions of the National Inquirer. Not a pretty picture." - kcoyle in "Access, Not Just Wires"

Little did I know how bad it would get. Like many or most people, Agre heard "libraries" and thought "female." But at least this caused him to think, earlier than many, about how our metaphors for the internet were inherently gendered.

"Discussing her speech with another CPSR activist ... later that evening, I suddenly connected several things that had been bothering me about the language and practice of the internet. The result was a partial answer to the difficult question, in what sense is the net 'gendered'?" - Agre, TNO, October

This led Agre to think about how we spoke then about the internet, which was mainly as an activity of "exploring." That metaphor is still alive in Microsoft's Internet Explorer, but it was also the message behind the main web browser software of the time, Netscape Navigator.
He suddenly saw how "explore" was a highly gendered activity:

"Yet for many people, 'exploring' is close to defining the experience of the net. It is clearly a gendered metaphor: it has historically been a male activity, and it comes down to us saturated with a long list of meanings related to things like colonial expansion, experiences of otherness, and scientific discovery. Explorers often die, and often fail, and the ones that do neither are heroes and role models. This whole complex of meanings and feelings and strivings is going to appeal to those who have been acculturated into a particular male-marked system of meanings, and it is not going to offer a great deal of meaning to anyone who has not. The use of prestigious artifacts like computers is inevitably tied up with the construction of personal identity, and 'exploration' tools offer a great deal more traction in this process to historically male cultural norms than to female ones." - Agre, TNO, October

He decried the lack of social relationships on the internet, saying that although you know that other people are there, you cannot see them.

"Why does the space you 'explore' in Gopher or Mosaic look empty even when it's full of other people?" - Agre, TNO, October

None of us knew at the time that in the future some people would experience the internet entirely and exclusively as full of other people, in the forms of Facebook, Twitter, and all of the other sites that grew out of the embryos of bulletin board systems, the WELL, and AOL. We feared that the future internet would not have the even-handedness of libraries, but we never anticipated that Russian bots and QAnon promoters would reign over what had once been a network for the exchange of scientific information.
It hurts now to read through Agre's post arguing for a more library-like online information system, because it is pretty clear that we blew through that possibility even before the meeting and were already taking the first steps toward where we are today. Agre walked away from his position at UCLA and has not resurfaced, although there have been reports at times (albeit not recently) that he is okay. Looking back, it should not surprise us that someone with so much hope for an online civil society should have become discouraged enough to leave it behind. Agre was hoping for reference services and an internet populated with users with:

"...the skills of composing clear texts, reading with an awareness of different possible interpretations, recognizing and resolving conflicts, asking for help without feeling powerless, organizing people to get things done, and embracing the diversity of the backgrounds and experiences of others." - Agre, TNO, October

Oh, what a world that would be!

Posted by Karen Coyle. Labels: internet, women and technology

Comments:

Unknown said...
I hear you, Karen Coyle. Which is to say, I wish I and others had heard Phil Agre back then. I had not heard of Phil Agre until today's Washington Post article. As you point out, Phil was far ahead of most of us in his appreciation of gender issues. In https://pages.gseis.ucla.edu/faculty/agre/tno/april- .html#networking Phil's comments on the perpetuation of democracy through education strike deep. If only we had listened, but we rarely are able to do so. The velocity of current events and the scope of information and misinformation presented through the network is exceeding individual human cognitive bandwidth. I fear, much as Phil Agre may, that we may be lost in a higher order of organization which we cannot fully comprehend. We may now appreciate, but are unable to control, the social and economic forces which direct our destiny.
With this in mind, I will read Phil Agre's publications online. Thank you,
Pat Nance

Karen Coyle said...
Thanks for the reminder that some of his work is still archived on his UCLA page. Here's the "root" link: https://pages.gseis.ucla.edu/faculty/agre/ And, as a Mac user, I definitely should have added Safari to the list of browsers. Even worse than exploring, "safari" implies explore and kill. Yikes!

Copyright: Coyle's Information by Karen Coyle is licensed under a Creative Commons Attribution-NonCommercial United States License.
LibX has been decommissioned for a while, ever since the Google store booted us (for no real reason). We are keeping possession of this domain name since installed clients may still pull LibApps from this server.

Planet Eric Lease Morgan

searching project gutenberg at the distant reader searching cord- at the distant reader distant reader workshop hands-on activities distant reader ptpbio and the reader reading texts through the use of network graphs the works of horace, bound how to write in a book tei toolbox, or "how a geeky librarian reads horace" cool hack with wget and xmllint my integrated development environment (ide) final blog posting openrefine and the distant reader topic modeling tool – enumerating and visualizing latent themes the distant reader and concordancing with antconc the distant reader workbook wordle and the distant reader the distant reader and a web-based demonstration distant reader “study carrels”: a manifest a distant reader field trip to bloomington what is the distant reader and why should i care?
project gutenberg and the distant reader ojs toolbox the distant reader and its five different types of input invitation to hack the distant reader fantastic futures: my take-aways marc catalog charting & graphing with tableau public extracting parts-of-speech and named entities with stanford tools creating a plain text version of a corpus with tika identifying themes and clustering documents using mallet introduction to the nltk using voyant tools to do some “distant reading” project english: an index to english/american literature spanning six centuries using a concordance (antconc) to facilitate searching keywords in context word clouds with wordle an introduction to the nltk: a jupyter notebook what is text mining, and why should i care?
lexisnexis hacks freebo@nd and library catalogues how to do text mining in words stories: interesting projects i worked on this past year freebo@nd tei json: summarizing the structure of early english poetry and prose synonymizer: using wordnet to create a synonym file for solr tiny road trip: an americana travelogue blueprint for a system surrounding catholic social thought & human rights how not to work during a sabbatical achieving perfection viaf finder making stone soup: working together for the advancement of learning and teaching protected: simile timeline test editing authorities at the speed of four records per minute failure to communicate using bibframe for bibliographic description xml mr. serials continues re-marcable marc, marcxml, and mods “sum reflextions” on travel what is old is new again painting in tuscany my water collection predicts the future jstor workset browser early english love was black & white some automated analysis of richard baxter’s works some automated analysis of ralph waldo emerson’s works some automated analysis of henry david thoreau’s works eebo-tcp workset browser developments with eebo boxplots, histograms, and scatter plots. oh, my!
hathitrust workset browser on github hathitrust resource center workset browser marrying close and distant reading: a thatcamp project text files hands-on text analysis workshop distance.cgi – my first python-based cgi script great books survey my second python script, dispersion.py my first r script, wordcloud.r my first python script, concordance.py doing what i’m not suppose to do hundredth psalm to the tune of "green sleeves": digital approaches to shakespeare's language of genre publishing lod with a bent toward archivists theme from macroanalysis: digital methods and literary history (topics in the digital humanities) fun with koha matisse: "jazz" jazz, (henri matisse) context for the creation of jazz lexicons and sentiment analysis – notes to self what’s eric reading? librarians and scholars: partners in digital humanities digital scholarship in the humanities a creative arts the huni virtual laboratory digital collections as research infrastructure fun with elasticsearch and marc visualising data: a travelogue orcid outreach meeting (may & , ) crossref’s text and data mining (tdm) api ranking and extraction of relevant single words in text level statistics of words: finding keywords in literary texts and symbolic sequences corpus stylistics, stylometry, and the styles of henry james narrative framing of consumer sentiment in online restaurant reviews code lib jobs topic linked archival metadata: a guidebook (version .
) trends and gaps in linked data for archives liam guidebook: executive summary rome in three days, an archivists introduction to linked data publishing rome in a day, the archivist on a linked data pilgrimage way four “itineraries” for putting linked data into practice for the archivist italian lectures on semantic web and linked data linked archival metadata: a guidebook the d printing working group is maturing, complete with a shiny new mailing list what is linked data and why should i care? impressed with reload digital humanities and libraries tiny text mining tools three rdf data models for archival collections liam guidebook – a new draft linked data projects of interest to archivists (and other cultural heritage personnel) rdf tools for the archivist semantic web browsers writing a book university of notre dame -d printing working group semantic web application sparql tutorial crossref’s prospect api analyzing search results using jstor’s data for research liam source code: perl poetry linked data and archival practice: or, there is more than one way to skin a cat. archival linked data use cases beginner’s glossary to linked data rdf serializations curl and content-negotiation questions from a library science student about rdf and linked data paper machines linked archival metadata: a guidebook — a fledgling draft rdf ontologies for archival descriptions simple text analysis with voyant tools liam guidebook tools liam guidebook linked data sites liam guidebook citations publishing archival descriptions as linked data via databases publishing linked data by way of ead files semantic web in libraries liam sparql endpoint initial pile of rdf illustrating rdf transforming marc to rdf tiny list of part-of-speech taggers simple linked data recipe for libraries, museums, and archives oai lod rdf triple stores fun with bibliographic indexes, bibliographic data management software, and z .
quick and dirty website analysis ead rdf oai lod server network detroit and great lakes thatcamp data information literacy @ purdue -d printing in the center for digital scholarship initialized a list of tools in the liam guidebook, plus other stuff guidebook moved to liamproject hathitrust research center perl library what is linked data and why should i care? jane & ade stevenson as well as locah and linking lives linking lives challenges of linked open data linked archival metadata: a guidebook drive by shared data: a travelogue beth plale, yiming sun, and the hathitrust research center jstor tool — a programatic sketch matt sag and copyright catholic pamphlets workflow copyright and the digital humanities digital scholarship grilled cheese lunch editors across campus: a reverse travelogue digital humanities and the liberal arts introduction to text mining welcome! genderizing names editors across the campus visualization and gis ted underwood and “learning what we don’t know about literary history” visualizations and geographic information systems a couple of open access week events new media from the middle ages to the digital age ted underwood dh lunch # so many editors!
digital humanities centers lunch and lightning talks inaugural digital humanities working group lunch: meeting notes yet more about hathitrust items inaugural digital humanities lunch granting opportunity visualization tools notre dame digital humanities mailing list serial publications with editors at notre dame exploiting the content of the hathitrust, epilogue exploiting the content of the hathitrust, continued exploiting the content of the hathitrust computational methods in the humanities and sciences patron-driven acquisitions: a symposium lourdes, france e-reading: a colloquium at the university of toronto summarizing the state of the catholic youth literature project summary of the catholic pamphlets project patron-driven acquisitions: a symposium at the university of notre dame value and benefits of text mining hello, world users, narcissism and control – tracking the impact of scholarly publications in the st century digital research data sharing and management from stacks to the web: the transformation of academic library collecting emotional intelligence interim report: interviews with research support professionals research infrastructures in the digital humanities trilug, open source software, and satisfaction institutional repositories, open access, and scholarly communication: a study of conflicting paradigms catholic pamphlets digitized field trip to the mansueto library at the university of chicago scholarly publishing presentations tablet-base “reading” big tent digital humanities meeting catholic pamphlets and practice workflow river jordan at yardenit (israel) use & understand: a dpla beta-sprint proposal catholic youth literature project update catholic youth literature project: a beginning pot-luck picnic and mini-disc golf tournament code lib midwest: a travelogue raising awareness of open access publications poor man’s restoration my dpla beta-sprint proposal: the movie trip to the internet archive, fort wayne (indiana) draftreportwithtransclusion lld vocabularies and datasets usecasereport digital humanities implementation grants reading revolutions: online digital text and implications for reading in academe report and recommendations of the u.s. rda test coordinating committee: executive summary usability testing of vufind at an academic library dpla beta sprint submission the catholic pamphlets project at the university of notre dame digging into data using new collaborative infrastructures supporting humanities-based computer science research next-generation library catalogs, or ‘are we there yet?’ hathitrust: a research library at web scale rapid capture: faster throughput in digitization of special collections fun with rss and the rss aggregator called planet research data inventory book reviews for web app development data management day alex lite (version . ) where in the world is the mail going? constant chatter at code lib data management & curation groups how “great” are the great books? code lib conference, subject librarian's guide to collaborating on e-science projects skilling up to do data: whose role, whose responsibility, whose career? words, patterns and documents: experiments in machine learning and text analysis vive la différence!
text mining gender difference in french literature gender, race, and nationality in black drama, - : mining differences in language use in authors and their characters how to write a data management plan for a national science foundation (nsf) proposal meeting funders’ data policies: blueprint for a research data management service group (rdmsg) data curation at the university of california, san diego: partnerships and networks conducting a data interview e-science and data support services a study of arl member institutions cloud-sourcing research collections: managing print in the mass-digitized library environment advanced scholar research with the knowledge kiosk horizon report, edition making data maximally available managing research data foray’s into parts-of-speech elements of a data management plan kotter's -step change model visualizing co-occurrences with protovis mit’s simile timeline widget th international data curation conference two more data creator interviews three data webinars implementing open access: policy case studies illustrating idcc ruler & compass by andrew sutton text mining charles dickens angelfund code lib crowd sourcing the great books great books data set data tsunamis and explosions david dickinson and new testament manuscripts data curation at ecdl ecdl : a travelogue xforms for libraries, an introduction automatic aggregation of faculty publications from personal web pages dan marmion interpreting marc: where’s the bibliographic data? why purchase when you can repurpose?
using crosswalks to enhance user access hacking summon editorial introduction – a cataloger’s perspective on the code lib journal managing library it workflow with bugzilla selected internet resources on digital research data curation undiscovered public knowledge undiscovered public knowledge: a ten-year update diddling with data great books data dictionary data curation in purdue twitter, facebook, delicious, and alex where in the world are windmills, my man friday, and love? river teith at doune castle (scotland) river clyde at bothwell castle (scotland) ngrams, concordances, and librarianship lingua::en::bigram (version . ) cool uris hello world! rsync, a really cool utility social side of science data sharing: distilling past efforts preserving research data retooling libraries for the data challenge university investment in the library, phase ii: an international study of the library's value to the grants process doing ocr against new testament manuscripts steps toward large-scale data integration in the sciences: summary of a workshop wilsworld, digital humanities : a travelogue digital repository strategic information gathering project data-enabled science in the mathematical and physical sciences how “great” is this article?
river thames at windsor castle ala principles and good practice for preserving data text mining against ngc lib the next next-generation library catalog measuring the great books collecting the great books inaugural code lib “midwest” regional meeting how “great” are the great books? not really reading cyberinfrastructure days at the university of notre dame about infomotions image gallery: flickr as cloud computing shiny new website grand river at grand rapids (michigan) counting words open source software and libraries: a current swot analysis great ideas coefficient indexing and abstracting my first epub file alex catalogue widget michael hart in roanoke (indiana) preservationists have the most challenging job how to make a book (# of ) good and best open source software valencia and madrid: a travelogue colloquium on digital humanities and computer science: a travelogue park of the pleasant retreat, madrid (spain) mediterranean sea at valencia (spain) a few possibilities for librarianship by alex catalogue collection policy alex, the movie!
collecting water and putting it on the web (part iii of iii) collecting water and putting it on the web (part ii of iii) collecting water and putting it on the web (part i of iii) web-scale discovery services how to make a book (# of ) book review of larry mcmurtry’s books browsing the alex catalogue indexing and searching the alex catalogue history of science microsoft surface at ball state what's needed next: a culture of candor frequent term-based text clustering web-scale discovery indexes and "next generation" library catalogs automatic metadata generation linked data applications alex on google top tech trends for ala annual, summer mass digitization mini-symposium: a reverse travelogue atlantic ocean at christ of the abyss statue (key largo, fl) lingua::en::bigram (version . ) lingua::concordance (version . ) mississippi river at gateway to the west (st.
louis, mo) ead marc text mining: books and perl modules interent archive content in “discovery” systems tfidf in libraries: part iii of iii (for thinkers) tidal basin at the jefferson memorial (washington, dc) mass digitization and opportunities for librarianship in minutes the decline of books implementing user-centered experiences in a networked environment code lib software award: loose ends tfidf in libraries: part ii of iii (for programmers) ralph waldo emerson’s essays tfidf in libraries: part i of iii (for librarians) statistical interpretation of term specificity and its application in retrieval a day at cil quick trip to purdue library technology conference, : a travelogue open source software: controlling your computing environment "next-generation" library catalogs mississippi river at st. anthony falls (minneapolis) technology trends and libraries: so many opportunities code lib open source software award code lib conference, providence (rhode island) henry david thoreau’s walden eric lease morgan’s top tech trends for ala mid-winter, yaac: yet another alex catalogue isbn numbers fun with webservice::solr, part iii of iii why you can't find a library book in your search engine fun with webservice::solr, part ii of iii mr. serials is dead. long live mr.
serials fun with webservice::solr, part i of iii lcsh, skos, and linked data visit to ball state university a day with ole asis&t bulletin on open source software fun with the internet archive snow blowing and librarianship tarzan of the apes open source software in libraries: opportunities and expenses worldcat hackathon vufind at palinet next-generation library catalogues: a presentation at libraries australia darling harbor, sydney (australia) lake ontario at hamilton, ontario (canada) lake huron at sarnia (canada) dinner with google mylibrary: a digital library framework & toolkit mylibrary: a digital library framework & toolbox mbooks, revisited wordcloud.pl last of the mohicans and services against texts crowd sourcing tei files metadata and data structures origami is arscient, and so is librarianship on the move with the mobile web tpm — technological protection measures against the grain is not e-journal archiving solutions web . and “next-generation” library catalogs alex lite: a tiny, standards-compliant, and portable catalogue of electronic texts indexing marc records with marc j and lucene encoded archival description (ead) files everywhere extensible catalog (xc): a very transparent approach top tech trends for ala (summer ’ ) google onebox module to search ldap dlf ils discovery internet task group technical recommendation introduction to the catholic research resources alliance hypernote pro: a text annotating hypercard stack steve cisler feather river at paradise, california code lib journal perl module (version . ) open library, the movie! get-mbooks.pl hello, world! cape cod bay at race point next generation data format salto do itiquira open library developer's meeting: one web page for every book ever published atom syndication format getting to know the atom publishing protocol, part : create and edit web resources with the atom publishing protocol atom publishing protocol today's digital information landscape dr.
strangelove, or how we learned to live with google next generation library catalogs in fifteen minutes success of open source by steven weber: a book review catalog collectivism: xc and the future of library search headwaters of the missouri river open source software at the montana state university libraries symposium original mylibrary canal surrounding kastellet, copenhagen, denmark sum top tech trends for the summer of lake erie at cedar point amusement park, oh mineral water from puyehue, chile lago paranoa, brazilia (brazil) leading a large group wise crowds with long tails trip to rochester to learn about xc open repositories, : a travelogue unordered list of "top tech trends" whirlwind in windsor surrounding integrated library systems: my symposium notes thinking outside the books: a travel log mylibrary .x and a next generation library catalogue ecdl : a travel log mediterranean sea at alicante (spain) building the "next generation" library catalog institute on scholarly communication: a travel log north channel at laurentian isle, canada american library association annual meeting, joint conference on digital libraries, mississippi river at oak alley plantation rethink the role of the library catalog top tech trends for ala ; "sum" pontifications next generation library catalog what is srw/u? 
first monday on a tuesday: a travel log ohio valley group of technical services librarian annual meeting being innovative atlantic ocean at the forty steps (newport, ri) mass digitization (again) all things open mass digitization zagreb, croatia: a travel log mylibrary workshop fountain at trg bana jelacica open source software for libraries in minutes library services and in-house software development oai : to cern and back again lake geneva at jet d eau, geneva, switzerland exploiting "light-weight" protocols and open source tools to implement digital library collections and services technical skills of librarianship creating and managing xml with open source software rock run at ralston, pa introduction to web services top technology trends, implementing sru in perl morgan territory regional park, ca iolug spring program short visit to crl agean sea at kos, greece erie canal at fairport, ny so you want a new website iesr/ockham in manchester indiana library federation annual meeting river lune, lancaster, uk my personal tei publishing system atlantic ocean at hay beach, shelter island, ny open access publishing roman bath, bath, uk symposium on open access and digital preservation jimmy carter water, atlanta, ga european conference on digital libraries, puget sound at port orchard, wa ockham in corvallis, or marys peak spring water ogle lake, brown county state park, in natural bridges state park, monterey bay, santa cruz, ca yellowstone river fountain of youth, st. 
augustine, fl introduction to search/retrieve url service (sru) portal implementation issues and challenges bath creek at bath, nc open source software in libraries really rudimentary catalog mcn annual conference lake mead at hoover dam lita national forum, open source software in libraries: a workshop mylibrary: a copernican revolution in libraries caribbean sea at lime cay, kingston, jamaica gulf of mexico at galveston island state park mill water at mission san jose, san antonio, tx what is information architecture? texas library association annual meeting, building your library's portal salton sea, ca pacific ocean at big sur, ca pacific ocean at la jolla, ca getting started with xml: a workshop usability for the web: designing web sites that work daiad goes to ann arbor ockham@emory (january, ) web services at oclc access , windsor, ontario lake st. claire at windsor, ontario usability in less than minutes european conference on digital libraries making information easier to find with mylibrary roman forum in rome, italy implementing "light-weight reference models" in mylibrary tanana river at fairbanks, alaska mendenhall glacier at juneau, alaska lancaster square, conwy, wales river teifi at cenarth falls, cenarth, wales atlantic ocean at mwnt, wales atlantic ocean at st. justinians, wales atlantic ocean at roch, wales loch lomond american library association annual meeting, atlanta, ga, stone mountain, atlanta, ga st. joesph river at bristol, in ockham in atlanta dlf in chicago isabella river in the boundry waters canoe area wilderness, mn open source software in libraries asis & t information architecture summit: refining the craft baltimore harbor, baltimore, md what is the open archives initiative? 
ontario library association (ola) annual meeting, reflection pool, university of notre dame, notre dame, in lake michigan at warren dunes state park, in ohio river at point pleasant, oh open source software in libraries amazon river, peru comparing open source indexers smart html pages with php data services for the sciences: a needs assessment summary report of the research data management study group portal webliography gift cultures, librarianship, and open source software development dbms and web delivery review of some ebook technology cap ' sigir ' mylibrary@ncstate marketing through usability catalogs of the future raleigh-worcester-lansing adaptive technologies sometimes the question is more important than the answer networking languaging ' possibilities for proactive library services systems administration requires people skills communication is the key to our success imagine, if only we had... marketing future libraries springboards for stategic planning eric visits savannah different type of distance education indexing, indexing, indexing mylibrary in your library becoming a -pound gorilla access control in libraries we love databases! computer literacy for librarians pointers searching, searching pointers from amtrak to artemia salina unique collections and fahrenheit creating user-friendly electronic information systems tuileries gardens, paris (france) evaluating index morganagus becoming a world wide web server expert see you see a librarian final report learning to use the tools of the trade cataloging digital mediums readability, browsability, searchability plus assistance listwebber ii on being a systems librarian cataloging internet resources: a beginning tennessee library association clarence meets alcuin extending your html on a macintosh using macro languages adding internet resources to our opacs description and evaluation of the mr. 
serials process gateways and electronic publishing teaching a new dog old tricks wils' world conference : a travel log ala annual conference: a mini-travel log ties that bind: converging communities - a travel log usain annual conference : a travel log internet for anthropologists webedge: a travel log using world wide web and wais technologies introduction to world wide web servers short trip to duke opportunities for technical services staff email.cgi version . . world-wide web and mosaic: an overview for librarians simple html editor (she) version . alcuin, an ncsu libraries guide implementing tcp/ip communications with hypercard day in the life of mr. d. microphone scripts for searching medlars marc reader: a hypercard script to demystify the marc record random musing: hypernote pro caribbiean sea at robins bay, jamaica lewis browne lewis browne ( - ) lewis browne, my grandmother rebecca tarlow's brother, was a world traveler, author, rabbi, former rabbi, lecturer, socialist and friend of the literary elite (h.g. wells, upton sinclair, sinclair lewis, theodore dreiser, etc.). see wikipedia's entry: http://en.wikipedia.org/wiki/lewis_browne his papers are at the indiana university's lilly archives. some additional materials are at the hebrew university in jerusalem that were given by myna, lewis's ex-wife. see this letter from charlie chaplan (calling lewis an entertaining radio celebrity). here is a copyright renewal database from stanford showing that most of his books did have their copyrights diligently renewed by his sister, rebecca tarlow. here's an article on lewis browne, his friend charlie chaplan, and the supernatural. lewis browne's this believing world was one of four books that guided the writing of the big book of aa. if you go to this link, click download, and move forward in the talk to minute , the speaker describes the books used, studied and referred to while coming up with the information to start aa. 
the whole talk is interesting, but the browne mentions start here: http://xa-speakers.org/pafiledb.php?action=file&id= . here's the text from some of his books: something went wrong—a summation of modern history; since calvary—an interpretation of christian history; how odd of god—an introduction to the jews; and the world's great scriptures. btw, some of the books are still in copyright. here's a postcard that lewis wrote from berlin ("no atrocities there except for the man who told of them"), found at the national library of jerusalem by ruth bachi (a relative). here's an article from , "around the world with a portable—excerpts from a travel diary: 'pink' jews of red russia," and here is the entire travel diary from . i happily discovered that my cousin, eric kriss, "webbed" the travel diary of . also see yudelline's "who was lewis browne?"; "life and times of lewis browne," my dad's (edmond mosley) biography of lewis browne, now up... a different perspective; and lewis browne quotes. you can always find his books on amazon or alibris. i'll be posting a number of materials that i have by and about lewis browne. you may contact me if you are interested in him. i'm starting an elist for notifications when i add to this site. if you are on it, and don't want to be, please let me know. if you'd like to be on it or just want to talk about lewis browne, please let me know. (kim mosley, mrkimmosley@gmail.com)

books by lewis browne

the doj's criminal probe into tether — what we know – amy castor
early this morning, bloomberg reported that tether executives are under criminal investigation by the us department of justice. the doj doesn't normally discuss ongoing investigations with the media; however, three unnamed sources leaked the info to bloomberg. the investigation is focused on tether misleading banks about the true nature of its business, the sources said. the doj has been circling tether and bitfinex for years now. in november , "three sources" — maybe even the same three sources — told bloomberg the doj was looking into the companies for bitcoin price manipulation. tether responded to the latest bit of news in typical fashion — with a blog post accusing bloomberg of spreading fud and trying to "generate clicks." "this article follows a pattern of repackaging stale claims as 'news,'" tether said.
"the continued efforts to discredit tether will not change our determination to remain leaders in the community." but nowhere in its post did tether deny the claims.

"i've read this several times and can't seem to find the, um, denial?" — doomberg (@doombergt), july

last night, before the news broke, bitcoin was pumping like crazy. the price climbed nearly %, topping $ , . on coinbase, the price of btc/usd went up $ , in three minutes, a bit after : utc. after a user placed a large number of buy orders for bitcoin perpetual futures denominated in tethers (usdt) on binance — an unregulated exchange struggling with its own banking issues — the btc/usdt perpetual contract hit a high of $ , at around : utc on the exchange.

", bitcoins traded in minutes on tether fraud exchange binance, driving the price to , tethers per bitcoin. something's not right in tether land." — bitfinex'ed (@bitfinexed), july

bitcoin pumps are a good way to get everyone to ignore the impact of bad news and focus on number go up. "hey, this isn't so bad. bitcoin is going up in price. i'm rich!" so what is this doj investigation about? it is likely a follow-up to the new york attorney general's probe into tether — and its sister company, crypto exchange bitfinex — which started in . tether and bitfinex, which operate under the same parent company, ifinex, settled fraud charges with the ny ag for $ . million in february. they were also banned from doing any further business in new york. "bitfinex and tether recklessly and unlawfully covered up massive financial losses to keep their scheme going and protect their bottom lines," the ny ag said. the companies' woes started with a loss of banking more than a year before the ny ag initiated its probe.

banking history

tether and bitfinex, both registered in the british virgin islands, were banking with four taiwanese banks in .
those banks used wells fargo as a correspondent bank to process us dollar wire transfers. in other words, the companies would deposit money in their taiwanese banks, and those banks would send money through wells fargo out to the rest of the world. however, in march , wells fargo abruptly cut off the taiwanese banks, refusing to process any more transfers from tether and bitfinex. about a month later — i would guess, after wells fargo told them they were on thin ice — the taiwanese banks gave tether and bitfinex the boot.

since then, tether and bitfinex have had to rely increasingly on shadow banks — such as crypto capital, a payment processor in panama — to shuffle funds around the globe for them. they also started furiously printing tethers. in early , there were only million tethers in circulation. today, there are billion tethers in circulation with a big question as to how much actual cash is behind those tethers.

crypto capital

partnering with crypto capital turned out to be an epic fail for bitfinex and tether. the payment processor was operated by principals ivan manuel molina lee and oz yosef with the help of arizona businessman reggie fowler and israeli woman ravid yosef — oz's sister, who was living in los angeles at the time. in april , fowler and ravid were indicted in the us for allegedly lying to banks to set up accounts on behalf of crypto capital. fowler is currently awaiting trial, and ravid yosef is still at large. starting in early , the pair set up dozens of bank accounts as part of a shadow banking network for crypto capital. some of those banks — bank of america, wells fargo, hsbc, and jp morgan chase — were either based in the us, or in the case of hsbc, had branches in the us, and therefore fell under the doj's jurisdiction. in total, fowler's bank accounts held some $ million and were at the center of his failed plea negotiation in january .
those accounts, along with more frozen crypto capital accounts in poland, meant that tether and bitfinex had lost access to some $ million in funds in . things spiraled downhill from there. molina lee was arrested by polish authorities in october . he was accused of being part of an international drug cartel and laundering funds through bitfinex. and oz yosef was indicted by us authorities around the same time on bank fraud charges.

tether stops printing

at the beginning of , there were only . billion tethers in circulation. all through the year and into the next, tether kept issuing tethers at greater and greater rates. then, at the end of may , it stopped — and nobody is quite sure why. pressure from authorities? a cease and desist order? usually, cease and desist orders are made public, and it is hard to imagine that there would be an order that has been kept non-public since may. one could argue, you don't want to keep printing dubiously backed stablecoins when you're under a criminal investigation by the doj. but as i've explained in prior posts, other factors could also be at play. for instance, since binance, one of tether's biggest customers, is having its own banking problems, it may be difficult for binance users to wire funds to the exchange. and since binance uses usdt in place of dollars, there's no need for it to acquire an additional stash of tethers at this time. also, other stablecoins, like usdc and busd, have been stepping in to fill in the gap.

the doj and tether

you can be sure that any info pulled up by the ny ag in its investigation of tether and bitfinex has been passed along to the doj and the commodity futures trading commission — who, by the way, subpoenaed tether in late . coincidentally — or not — bitcoin saw a price pump at that time, too. it went from around $ , on dec. , , the day before the subpoena was issued, to nearly $ , on dec.
, — another attempt to show that the bad news barely had any impact on the bitcoin price.  tether relies on confidence in the markets. as long as people believe that tether is fully backed, or that tether and bitfinex probes won’t impact the price of bitcoin, the game can continue. but if too many people start dumping bitcoin in a panic and rushing toward the fiat exits, the truth — that there isn’t enough cash left in the system to support a tsunami of withdrawals — will be revealed, and that would be especially bad news for tether execs.  will tether’s operators be charged with criminal actions any time soon? and which execs is the doj even investigating? the original operators of bitfinex and tether — aka “the triad” — are chief strategy officer phil potter, ceo jan ludovicus van der velde and cfo giancarlo devasini. phil potter supposedly pulled away from the operation in mid- . and nobody has heard from van der velde or devasini in a long, long time. now, the two main spokespersons for the companies are general counsel stuart hoegner and cto paolo ardoino, who give lots of interviews defending tether and accusing salty nocoiners like me of fud.   tracking down bad actors takes a lot of coordination. recall that the doj had to work with authorities in different countries to finally arrest the operators of liberty reserve, a costa rica-based centralized digital currency service that was used for money laundering. similar to liberty reserve, tether is a global operation and all of the front persons associated with tether — except for potter who lives in new york — currently reside outside of the us.  it may still take a long while to completely shut down tether and give it the liberty reserve treatment. but if the doj files criminal charges against tether execs, that is at least a step in the right direction. 
read more: the curious case of tether — a complete timeline; nocoiner predictions: will be a year of comedy gold. if you like my work, please subscribe to my patreon for as little as $ a month. your support keeps me going. posted on july , by amy castor.

ptsefton.com

fiir data management; findable, inaccessible, interoperable and reusable? this is a work in progress post. i'm looking for feedback on the substance — there's a comment box below, email me, or see me on twitter: @ptsefton. [update - - : had some comments from michael d'silva at aarnet - have added a couple of things below.]
i am posting this now because …

arkisto: a repository based platform for managing all kinds of research data (university of technology sydney / the university of melbourne)
this presentation by peter sefton, marco la rosa and michael lynch was delivered at the open repositories conference on - - (australian time). marco la rosa did most of the talking, with help from michael lynch. we want to emphasise that this presentation is based on the fair principles that data …

research object crate (ro-crate) update
this presentation by peter sefton and stian soiland-reyes was presented by peter sefton at the open repositories conference on - - (in australia). ro-crate has been presented at open repositories several times, including a workshop in , so we won't go through a very detailed introduction but we will …

infrastructure and what do we really want for dmps
[updated - - after gail mcglinn and i got home from the pub and she read this through; fixed several terrible typos and the odd incomplete sentence - i knew i should not have let the dog proofread the version we put out this afternoon (he just wanted to go for a …

what did you do in the lockdowns pt? part - music videos
post looks too long? don't want to read? here's the summary. last year gail mcglinn* and i did the lockdown home-recording thing. we put out at least one song video per week for a year (and counting - we're up to over weeks). searchable, sortable website here. we learned …

fair data management; it's a lifestyle not a lifecycle
i have been working with my colleague marco la rosa on summary diagrams that capture some important aspects of research data management, and include the fair data principles; that data should be findable, accessible, interoperable and reusable.
but first, here's a rant about some modeling and diagramming styles and trends …

research data management looking outward from it
this is a presentation that i gave on wednesday the nd of december at the aero (australian eresearch organizations) council meeting at the request of the chair, dr carina kemp. carina asked: it would be really interesting to find out what is happening in the research data management space …

redundant.
thursday december was my last day at uts as the eresearch support manager. the position was declared to be redundant under the "voluntary separation program". i guess the corporate maths works for uts and it works for me. thanks covid- . this is the third redundancy for me, and …

an open, composable standards–based research eresearch platform: arkisto
this is a talk delivered in recorded format by peter sefton, nick thieberger, marco la rosa and mike lynch at eresearch australasia . also posted on the uts eresearch website. research data from all disciplines has interest and value that extends beyond funding cycles and must continue to be managed …

you won't believe this shocking semantic web trick i use to avoid publishing my own ontologies! will i end up going to hell for this?
[update - as soon as this went live i spotted an error in the final example and fixed it]. in this post i describe a disgusting, filthy, but possibly beautiful hack* i devised to get around a common problem in data description using semantic web techniques, specifically json-ld and schema.org …

rapid communications
rapid, but irregular, communications from the frontiers of library technology

wednesday, april
mac os vs emacs: getting on the right (exec) path
one of the minor annoyances about using emacs on mac os is that the path environment variable isn't set properly when you launch emacs from the gui (that is, the way we always do it).
this is because the mac os gui doesn't really care about the shell as a way to launch things, but if you are using brew, or other packages that install command line tools, you do. apple has changed the way that the path is set over the years, and the old environment.plist method doesn't actually work anymore, for security reasons. for the past few releases, the official way to properly set up the path is to use the path_helper utility program. but again, that only really works if your shell profile or rc file is run before you launch emacs. so, we need to put a bit of code into emacs' site-start.el file to get things set up for us:

(when (file-executable-p "/usr/libexec/path_helper")
  (let ((path (shell-command-to-string
               "eval `/usr/libexec/path_helper -s`; echo -n \"$PATH\"")))
    (setenv "PATH" path)
    (setq exec-path (append (parse-colon-path path) (list exec-directory)))))

this code runs the path_helper utility, saves the output into a string, and then uses the string to set both the PATH environment variable and the emacs exec-path lisp variable, which emacs uses to run subprocesses when it doesn't need to launch a shell. if you are using the brew version of emacs, put this code in /usr/local/share/emacs/site-lisp/site-start.el and restart emacs. posted by david j. fiander

tuesday, january
finding isbns in the digits of π
for some reason, a blog post from about searching for isbns in the first fifty million digits of π suddenly became popular on the net again at the end of last week (mid-january ). the only problem is that geoff, the author, only looks for isbn-13s, which all start with the sequence "978". there aren't many occurrences of "978" in even the first fifty million digits of π, so it's not hard to check them all to see if they are the beginning of a potential isbn, and then find out if that potential isbn was ever assigned to a book. but he completely ignores all of the isbn-10s that might be hidden in π.
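as a quick illustration, the isbn-10 checksum test and the window scan described in this post can be sketched in a few lines of python. this is my own illustrative code, not geoff's and not the perl tooling mentioned elsewhere on this site; the function names are made up:

```python
def isbn10_check(candidate: str) -> bool:
    """An ISBN-10 is valid when the sum of its characters, weighted
    10 down to 1 (with 'X' worth 10 in the final position), is
    divisible by 11.  Illustrative sketch, not the author's code."""
    if len(candidate) != 10:
        return False
    total = 0
    for i, ch in enumerate(candidate):
        if ch == "X" and i == 9:
            value = 10          # 'X' is the roman numeral ten
        elif ch.isdigit():
            value = int(ch)
        else:
            return False        # dashes and other characters not handled here
        total += (10 - i) * value
    return total % 11 == 0

def potential_isbns(digits: str, prefixes=("0", "1")):
    """Slide a ten-character window over a digit string (say, digits
    of pi) and yield (position, window) for every window that starts
    with an English-language group prefix and passes the checksum."""
    for i in range(len(digits) - 9):
        if digits[i] in prefixes and isbn10_check(digits[i:i + 10]):
            yield i, digits[i:i + 10]
```

windows found this way are only "potential" isbns: each one still has to be looked up in a bibliographic database such as worldcat to see whether it was ever actually assigned to a book.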
so, since i already have code to validate isbn checksums and to look up isbns in oclc worldcat, i decided to check for isbn-10s myself. i don't have easy access to the first fifty million digits of π, but i did manage to find the first million digits online without too much difficulty. an isbn-10 is a ten character long string that uniquely identifies a book. an example is " - - - ". the dashes are optional and exist mostly to make it easier for humans, just like the dashes in a phone number. the first character of an isbn-10 indicates the language in which the book is published: 0 and 1 are for english, 2 is for french, and so on. the last character of the isbn is a "check digit", which is supposed to help systems figure out if the isbn is correct or not. it will catch many common types of errors, like swapping two characters in the isbn: " - - - " is invalid. here are the first one hundred digits of π: . to search for "potential (english) isbn-10s", all one needs to do is search for every 0 or 1 in the first , digits of π (there is a " " three digits from the end, but then there aren't enough digits left over to find a full isbn, so we can stop early) and check to see if the ten digit sequence of characters starting with that 0 or 1 has a valid check digit at the end. the sequence " ", highlighted in red, fails the test, because " " is not the correct check digit; but the sequence " ", highlighted in green, is a potential isbn. there are approximately , zeros and ones in the first million digits of π, but "only" , of them appear at the beginning of a potential isbn-10. checking those , potentials against the worldcat bibliographic database results in , valid isbns. the first one is at position , : isbn , for the book the evolution of weapons and warfare by trevor n. dupuy. the last one is at position , : isbn , for the book exploring language assessment and testing: language in action by anthony green. here's the full dataset. posted by david j.
fiander

saturday, march
software upgrades and the parable of the windows
a librarian friend of mine recently expressed some surprise at the fact that a library system would spend almost $ , to upgrade their ils software, when the vendor is known to be hostile to its customers and not actually very good with new development on its products. the short answer is that it's easier to upgrade than to think, especially when an "upgrade" will be seen as easier than a "migration" to a different vendor's system (note: open ils platforms like evergreen and koha may be read as being different vendors for the sake of convenience). in fact, when an ils vendor discontinues support for a product and tells its customers that they have to migrate to another product if they want to continue to purchase support, it is the rare library that will take this opportunity to re-examine all its options and decide to migrate to a different vendor's product. a simple demonstration of this thinking, on a scale that most of us can imagine, is what happened when my partner and i decided that it was time to replace the windows in our house several years ago. there are a couple of things you need to know about replacing the windows in your house, if you've never done this before: most normal folks replace the windows in their house over the course of several years, doing two or three windows every year or two. if one is replacing the huge bay window in the living room, then that might be the only window that one does that year. windows are expensive enough that one can't really afford to do them all at once. windows are fungible. for the most part, one company's windows look exactly like another company's. unless you're working hard at getting a particular colour of flashing on the outside of the window, nobody looking at your house from the sidewalk would notice that the master bedroom window and the living room window were made by different companies.
like any responsible homeowners, we called several local window places, got quotations from three or four of them for the windows we wanted replaced that year, made our decision about which vendor we were going to use for the first round of window replacements, and placed an order. a month or so later, on a day that the weather was going to be good, a crew from the company arrived, knocked big holes in the front of our house to take out the old windows and install the new ones. a couple of years went by, and we decided it was time to do the next couple of windows, so my partner, who was always far more organized about this sort of thing than me, called three or four window companies and asked them to come out to get quotations for the work. at least one of the vendors declined, and another vendor did come out and give us a quote but he was very surprised that we were going through this process again, because normally, once a householder has gone through the process once, they tend to use the same window company for all the windows, even if several years have passed, or if the type of work is very different from the earlier work (such as replacing the living room bay window after a couple of rounds of replacing bedroom windows). in general, once a decision has been made, people tend to stick with that plan. i think it's a matter of, "well, i made this decision last year, and at the time, this company was good, so they're probably still good," combined, perhaps, with a bit of thinking that changing vendors in mid-stream implies that i didn't make a good decision earlier. and there is, of course, always the thought that it's better to stick with the devil you know than the one you don't. posted by david j.
fiander at : pm comments: sunday, january , using qr codes in the library this started out as a set of internal guidelines for the staff at mpow, but some friends expressed interest in it, and it seems to have struck a nerve, so i'm posting it here, so it is easier for people to find and to link to. using qr codes in the library qr codes are new to north america, but have been around for a while in japan, where they originated, and where everybody has a cellphone that can read the codes. they make it simpler to take information from the real world and load it into your phone. as such, they should only be used when the information will be useful for somebody on the go, and shouldn't normally be used if the person accessing the information will probably be on a computer to begin with. do use qr codes: on posters and display projectors to guide users to mobile-friendly websites. to share your contact information on posters, display projectors, or your business card. this makes it simpler for users to add you to their addressbook without having to type it all in. in display cabinets or art exhibits to link to supplementary information about the items on display. don't use qr codes: to record your contact information in your email signature. somebody reading your email can easily copy the information from your signature to their addressbook. to share urls for rich, or full-sized, websites. the only urls you should be sharing via qr codes are for mobile-friendly sites. when using qr codes: make sure to include a human readable url, preferably one that's easy to remember, near the qr code for people without qr code scanners to use. posted by david j. fiander at : pm no comments: monday, april , a manifesto for the library last week john blyberg, kathryn greenhill, and cindi trainor spent some time together thinking about what the library is for and what its future might hold.
the result of that deep thinking has now been published on john's blog under the title "the darien statements on the library and librarians." opening with the ringing statement that the purpose of the library is to preserve the integrity of civilization, they then provide their own gloss on what this means for individual libraries, and for librarians. there is a lively discussion going on in the comments on john's blog, as well as less thoughtful sniping going on in more "annoying" blogs. i think that this is something that will engender quite a bit of conversation in the months to come. posted by david j. fiander at : pm no comments: sunday, april , i'm a shover and maker! since only a few people can be named "movers and shakers" by library journal, joshua neff and steven lawson created the "shovers and makers" awards "for the rest of us," under the auspices of the not entirely serious library society of the world. i'm very pleased to report that i have been named a shover and maker (by myself, as are all the winners). the shovers and makers awards are a fun way to share what we've done over the past year or two and they're definitely a lot simpler than writing the annual performance review that hr wants. think of this as practice for writing the speaker's bio for the conference keynote you dream of being invited to give. posted by david j. fiander at : am no comments: sunday, january , lita tears down the walls at ala midwinter , jason griffey and the lita folks took advantage of the conference center's wireless network to provide quick and easy access to the top tech trends panel for those of us that couldn't be there in person. the low-bandwidth option was a coveritlive live-blogging feed of comments from attendees that also included photos by cindi trainor, and a feed of twitters from attendees. the high-bandwidth option was a live (and recorded) video stream of the event that jason captured using the webcam built into his laptop.
aside from the lita planned events, the fact that we could all sit in meant that there were lots of virtual conversations in chat rooms and other forums that sprung up as people joined in from afar. unfortunately, because my sunday morning is filled with laundry and other domestic pleasures, i wasn't able to join in on the "live" chatter going on in parallel with the video or liveblogging. owing to funding constraints and my own priorities, my participation at ala is limited. i've been to lita forum once, and might go again, but i focus more on the ola and other regional events. this virtual option from lita let me get a peek at what's going on and hear what the "big thinkers" at lita have to say. i hope they can keep it up, and will definitely be talking to local folks about how we might be able to emulate lita in our own events. posted by david j. fiander at : pm no comments: about me david j. fiander i'm a former software developer who's now the web services librarian at a university. the great thing about that job title is that nobody knows what i do. this work is licensed under a creative commons attribution-noncommercial-share alike . canada license.
on-chain vote buying and the rise of dark daos hacking, distributed july , at : pm philip daian, tyler kell, ian miers, and ari juels blockchains seem like the perfect technology for online voting. they can act as “bulletin boards,” global ledgers that were hypothesized (but never truly realized) in decades of e-voting research. better still, blockchains enable smart contracts, which can execute on-chain elections autonomously and do away with election authorities. unfortunately, smart contracts aren’t just good for running elections. they’re also good for buying them. in this blog post, we’ll explain how and why. as an example, we’ll present a fully implemented, simple vote buying attack against the popular on-chain carbonvote system. we’ll also discuss how trusted hardware enables even more powerful vote buying techniques that seem irresolvable even given state-of-the-art cryptographic voting protocols. finally, we introduce a new form of attack called a dark dao, not to be confused with the “dark dao” in the same way that daos should not be confused with the dao. a dark dao is a decentralized cartel that buys on-chain votes opaquely (“in the dark”). we present one concrete embodiment based on intel sgx. in such an attack, potentially nobody, not even the dao’s creator, can determine the dao’s number of participants, the total amount of money pledged to the attack, or the precise logic of the attack: for example, the dark dao can attack a currency like tezos, covertly collecting coins until it reaches some hidden threshold, and then telling its members to short the currency.
such a dark dao also has the unique ability to enforce an information asymmetry by sending out, for example, deniable short notifications: members inside the cartel would be able to verify the short signal, but themselves could generate seemingly authentic false signals to send to outsiders. the existence of trust-minimizing vote buying and dark dao primitives implies that users of all on-chain votes are vulnerable to shackling, manipulation, and control by plutocrats and coercive forces. this directly implies that all on-chain voting schemes where users can generate their own keys outside of a trusted environment inherently degrade to plutocracy, a paradigm widely considered inferior to the democratic models that such protocols attempt to approximate on-chain. all of our schemes and attacks work regardless of identity controls, allowing user actions to be freely bought and sold. this means that schemes that rely on user-generated keys bound to user identities, like uport or circles, are also inherently and fundamentally vulnerable to arbitrary manipulation by plutocrats. our schemes can also be repurposed to attack proof of stake or proof of work blockchains profitably, with severe security implications for all blockchains. blockchain voting today blockchain voting schemes abound today. there’s votem, an end-to-end verifiable voting scheme that allows voting using mobile devices and leverages the blockchain as a place to securely post and tally the election results. remix, the popular smart contract ide, offers an election-administering smart contract as its training example. yet more examples can be found here ( ), here ( ), and here ( ). on-chain voting schemes face many challenges, privacy, latency, and scaling among them. none of these is peculiar to voting, and all will eventually be surmountable. vote buying is a different story.
in political systems, vote buying is a pervasive and corrosive form of election fraud, with a substantial history of undermining election integrity around the world. sometimes, the price of a vote is a glass of beer. thankfully, as scholars have observed, normal market mechanisms usually break down in vote buying schemes, for three  reasons. first, vote buying is in most instances a crime. in the u.s., it’s punishable under federal law. second, where secret ballots are used, compliance is hard to enforce. a voter can simply drink your beer, and cast her ballot in secret however she likes. third, even if a voter does sell their vote, there is no guarantee the counter-party will pay. no such obstacles arise in blockchain systems. vote buying marketplaces can be run efficiently and effectively using the same powerful tool for administering elections: smart contracts. pseudonymity and jurisdictional complications, as always, provide (some) cover against prosecution. in general, electronic voting schemes are in some ways harder to secure against fraud than in-person voting, and have been the subject of general and academic interest for many years.  one of the fundamental building blocks was introduced early by david chaum, providing anonymous mix networks for messages which could be anonymously sent by participants with receipts of inclusion.  such end-to-end verifiable voting systems, where users can check that their votes are correctly counted without sacrificing privacy, are not just the realm of theoreticians and have actually been used for binding elections. later work by benaloh and tuinstra took issue with electronic voting schemes, noting that they offered voters a “receipt” that provided cryptographic proof of which way a given vote had been cast.  this would allow for extremely efficient vote buying and coercion, clearly undesirable properties. 
the authors defined a new property, receipt-freedom, to describe voting schemes where no such cryptographic proof was possible. further work by juels, catalano, and jakobsson modeled even more powerful coercive adversaries, showing that even receipt-free schemes were not sufficient to prevent coercion and vote buying. this work defined a new security definition for voting schemes called “coercion resistance”, providing a protocol where no malicious party could successfully coerce a user in a manner that could alter election results. in their work, juels et al. note that “the security of our construction then relies on generation of the key pairs… by a trusted third party, or, alternatively, on an interactive, computationally secure key-generation protocol such as [ ] between the players”. such “trusted key generation”, “trusted third party”, or “trusted setup” assumptions are standard in the academic literature on coercion resistant voting schemes. unfortunately, these requirements do not translate to the permissionless model, in which nodes can come and leave at any time without knowing each other a priori. this (somewhat) inherently means users generate their own keys in all such deployed systems, and cannot take advantage of trusted multiparty key generation or any centralized key service arbiter. the blockchain space today, with predictable results, continues its tradition of ignoring decades of study and instead opts to implement the most naive possible form of voting: directly counting coin-weighted votes in a plutocratic fashion, stored in plain text on-chain. unfortunately, it is not clear that anything better than such a plutocracy is achievable on-chain. we show that the permissionless model is fundamentally hostile to voting.
despite any identity or second-layer based mitigation attempts, all permissionless voting systems (or schemes that allow users to generate their own key in an untrusted environment) are vulnerable to the same style of vote buying and coercion attacks. many vote buying attacks can also be used for coercion, shackling users to particular voting choices by force. that's a nice on-chain vote you've got there... it is worth noting that the severity of bribery attacks in such protocols was partially explored by vitalik buterin, though concrete mechanisms were not provided. here we describe frictionless mechanisms useful for vote and identity buying, coercion, and coordination at a high level and discuss the implications of these particular mechanisms. attack flavors consider a very simple voting scheme: holders of a token get one vote per token they hold and can change their votes continually until some closing block number. we’ll use this simple “ezvote” scheme to build intuition for how our attacks can work in any on-chain voting mechanism. there are several possible escalating attack flavors of such a scheme. simple smart contracts the simplest low-coordination attack on on-chain voting systems involves vote buying smart contracts. such smart contracts would simply pay users for a provable vote for one option (or for participating in the vote, or for abstaining from the vote if the vote is not anonymous). in ezvote, the smart contract could be a simple contract that holds your erc tokens until after the end date, votes yes, and returns them to you; all guarantees in the contract could be enforced by the underlying blockchain. such a scheme has advantages in that it requires only the trust assumptions already inherent in the underlying system, but has substantial disadvantages as well. for one, it is likely possible to publicly tell how many votes are purchased after the election is over, as this is required to handle the flow of payments in today’s smart contract systems.
also, the in-platform nature of the bribe opens it to censorship by parties interested in preserving the health of the underlying platform/system. depending on the nature of the voting scheme and the underlying protocol, there may be some workarounds for these downsides. voters could for example provide a ring signature proving to a vote buyer that they are in a list of voters who voted yes in exchange for payments. we leave the implementation details and generalizability of such schemes open. in general, any mechanism for private smart contracts can also be used for private vote buying, solving the public nature of a smart contract based attack; cryptographically an equivalent would be the vote buyer and seller generating a secret key for funds storage via mpc together, signing two transactions: a yes vote and a transaction that releases funds to the vote seller after the end of the interval. the vote seller would move funds to this key only after possessing the transaction guaranteeing a refund and payment. this would look similar to previous work on distributed certificate generation, adding security analysis for ensuring fairness. a naive implementation of such a scheme would encumber a user's use of funds for other purposes during the vote (such actions are possible but require cooperation on behalf of the vote buyer; alternatively, a trusted/bonded escrow party can be used). trusted hardware buying an even more concerning vote buying attack scheme involves the use of trusted hardware, such as intel sgx. such hardware has a key feature called remote attestation. essentially, if alice and bob are communicating on the internet, the trusted computing achieved by sgx allows alice to prove to bob that she is running a certain piece of code.
trusted hardware is usually seen as a way to prove that you are running code that will not be malicious: for example, it is used in drm to prove that a user will not copy files that are only temporarily licensed to them, like movies. instead, we will use trusted hardware to shackle cryptocurrency users, paying or forcing them to use cryptocurrency wallets based on trusted hardware that provably restrict their space of allowed behaviors (e.g. by forcing them not to vote a certain way in an election) or allow the vote buyer trust-minimized but limited use of a user’s key (e.g. a vote buyer can force a user to sign “i vote a”, but cannot steal or spend a user’s money). the simplest way to deploy such technology for vote buying is to simply allow users to prove they are running a vote buyer’s malicious wallet code in exchange for a payment, secured on both sides by remote attestation technology. in our “ezvote” example, a user would simply use a cryptocurrency wallet loaded on intel’s sgx, running the vote buyer’s program. sgx would guarantee to the user that the wallet could never steal the user’s money (unless intel colludes with the vote buyer). the user can provably use the wallet for everything they can do with a normal ethereum wallet, including moving their money out (though in this case they would not be paid). the user runs their own wallet, and does not need to trust a third party for control or security of their funds. the user may not need to trust even intel or the trusted hardware provisioner for security of their funds, as they can compile their own wallet! when a predefined trigger condition occurs, such an sgx program would automatically vote on ezvote as the vote buyer commands, and send a receipt to the vote buyers. the vote buyer would itself run an sgx enclave that maintains a total of all users who claim to have voted yes, and a list of their addresses.
given trust in sgx, the vote buyer need not see the full list of member users or know the total pledged amount. at the end of the vote, the vote buyer’s enclave would pay all the users who have not moved their funds or changed their vote. this would be accomplished by the enclave periodically posting a merkle root summarizing users to be paid on-chain, providing proof to each user that they will eventually be paid. users can claim payment after the expiry of some period by providing a proof of inclusion in the posted merkle history. in some particularly vulnerable vote designs, an sgx enclave can increase its efficiency by simply accumulating “yes” votes from users up-front as transactions, publishing and providing payment for them at the conclusion of the vote. hidden trusted hardware cartels (dark daos) a more concerning attack arises when trusted hardware is combined with the idea of a dao, spawning a trustless organization whose goal centers on manipulating cryptocurrency votes. (figure: one example of a basic dark dao.) the figure above outlines one possible architecture. vote buyers would support the dao by running a network of sgx enclaves that themselves execute a consensus protocol (shown here as a dark cloud to indicate its invisibility from outside). users would communicate with this enclave network, and supply proof that they are running a “vote buying” wallet (an ethereum wallet, say) with a current balance of x coins. this “evil wallet” attests to running the attack code a vote buyer is paying for, and the vote buyer attests that they are running code guaranteed to pay the user at the end of the attack (likely in combination with a smart contract-based protocol that cryptoeconomically enforces liveness and honesty). the vote buyers can keep track of how many total funds are pledged to vote through the system, hiding this fact from the outside world using privacy features built into sgx.
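the enclave's merkle-root payout commitment described above can be made concrete. the following python sketch is a generic merkle tree, assumed purely for illustration (it is not taken from any actual dark dao implementation): the enclave posts a single root on-chain, and each user can later present a short proof that their payout entry is included under that root.

```python
import hashlib

def h(data: bytes) -> bytes:
    """sha-256 as the tree's hash function."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """build a merkle root over leaf byte-strings,
    duplicating the last node on odd-sized levels."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """return the sibling path proving leaves[index] is in the tree;
    each step records (sibling hash, whether our node is the right child)."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, path, root):
    """recompute the root from a leaf and its sibling path."""
    node = h(leaf)
    for sibling, is_right_node in path:
        node = h(sibling + node) if is_right_node else h(node + sibling)
    return node == root
```

only the root needs to touch the chain; the payout set itself (and hence the cartel's membership) stays hidden, which is exactly the property the dark dao exploits.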
users can receive provable payouts for participating in such a system, achieving a property similar to all-or-nothing settlement in sgx-based decentralized exchanges.  vote buyers can get a provable guarantee that clients will never issue votes that contradict their desired voting policy. what makes such an organization dark is that the vote buyers need not reveal how many users are participating in the system to anybody (even potentially themselves).  the system could simply accumulate users, paying users for running the attacker’s custom wallet software, until some threshold (of e.g. coins held by such software) is reached that activates an attack; in this manner, failed attempts need not be detectable.  more damagingly, the individual incentives of any small users clearly point towards joining the system. if small users believe their vote doesn’t matter, they are likely to take the payoff with no perceived marginal downside. this is especially the case in on-chain votes, which are empirically observed to have extremely low turnout. users that don’t vote may be ideal targets for selling their votes. dark dao operators can further muddy the waters by launching attacks on choices the vote buyers actually oppose as potential false flag operations or smear campaigns; for example, bob could run a dark dao working in alice’s favor to delegitimize the outcome of an election bob believes he is likely to lose.  the activation threshold, payout schedule, full attack strategy, number of users in the system, total amount of money pledged to the system, and more can be kept private or revealed either selectively or globally, making such daos ultimately tunable for structured incentive changes. because the organization exists off-chain, no cartel of block producers or other system participants can detect, censor, or stop the attack. such a dark organization has several immediate practical drawbacks.  
the primary one is that for use on intel sgx, a license would need to be granted by intel, an unlikely event for malicious software.  furthermore, side channel, hidden software backdoor, or platform attacks in intel's sgx or the auditing of the dark dao wallet could weaken the scheme, though as trusted hardware continues to advance and develop, it is highly likely the cost of such attacks will increase substantially.  eventually, we expect other trusted hardware to provide the remote attestation capabilities of intel sgx, meaning that sgx will not be required for such an attack; this is why we use “sgx” interchangeably with “trusted hardware”. for example, remote attestation is achievable on some android secure processors.  our schemes work on any hardware device allowing for confidential data and remote attestation. attacks on classic schemes: carbonvote & eip to prove the efficacy of these vote buying strategies, we first look at  governance-critical coinvotes performed in existing cryptocurrency systems.  perhaps the most important such vote was the dao carbonvote.  the operation of this vote was simple: accounts sent money to an address to vote yes, and another to vote no.  each address was a contract that logged the vote of a given address. the carbonvote frontend then tallied the votes, and showed the net balances of all accounts that had voted yes and/or no.  later votes superseded earlier ones, allowing users to change their minds. at the end of the vote, a snapshot was taken of support and used to gauge community sentiment. this voting style is being reused for other controversial ecosystem issues, including eip- . one possible trust-minimizing vote buying smart contract in this framework involves the use of escrow; users send ether to an erc token contract that holds the ether until the end of the vote.  for each ether they deposit, users receive votecoin. the contract is pre-programmed to vote yes at the end of the vote with % of the user ether held.  
after the vote ends, each votecoin token becomes fully refundable for the original ether that created it. users get back their original ether, plus any bribes that vote buyers wish to pay them for this service. we have implemented a full, open-source proof of concept of such a contract, enabling any vote buyers to contribute funds to the contract’s bribepool. users can be paid out from bribepool by temporarily locking their ether in the contract, and can reclaim % of their ether at the end of the target vote. an attack can pay vote sellers out of bribepool upfront (once they lock the coins, the votes are guaranteed), as dividends over time, or both. (code of the vote buying ethereum smart contract for the dao carbonvote.) users can also sell their votecoin after locking up their ether, essentially making votecoin a tokenized vote buying derivative. vote sellers can then instantly unload their exposure to any risks introduced by funds lockup to parties that are indifferent to the vote’s outcome: because each erc token is programmatically guaranteed to eventually receive all original eth, this essentially creates a one-way-only funnel from the base asset into a derivative asset dedicated to voting a predefined way. buyers who are uninterested in the vote's outcome should always lock their eth if guaranteed a non-negative payoff, and essentially have an option to later unload onto other similarly uninterested buyers. if dividends from bribepool are paid over time to votecoin in addition to upfront, these derivative tokens can even be used to speculate on the success of the attack itself. this smart contract can be simplified with the use of oracles such as town crier (multiple oracles, prediction markets, etc. can be combined as well).
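the escrow-plus-bribepool mechanics described above can be modeled in a few lines of python. this is a deliberately simplified sketch with invented names and no real token or time semantics, not the published proof-of-concept contract: buyers fund a pool, sellers lock ether for transferable votecoin 1:1, and after the close each votecoin redeems for its original ether plus a pro-rata share of the pool.

```python
class BribePool:
    """toy model of the carbonvote bribery scheme: all locked
    ether is voted the buyers' way at the close, and votecoin
    holders are made whole (plus bribes) afterwards."""

    def __init__(self):
        self.pool = 0.0       # total bribes contributed by vote buyers
        self.votecoin = {}    # holder -> transferable votecoin balance
        self.closed = False

    def fund(self, amount):
        # anyone may pay into the bribe pool
        self.pool += amount

    def lock(self, seller, eth):
        # seller escrows ether and receives votecoin 1:1
        assert not self.closed
        self.votecoin[seller] = self.votecoin.get(seller, 0) + eth

    def transfer(self, frm, to, amount):
        # votecoin is an ordinary transferable token (the derivative)
        assert self.votecoin.get(frm, 0) >= amount
        self.votecoin[frm] -= amount
        self.votecoin[to] = self.votecoin.get(to, 0) + amount

    def close_vote(self):
        # at the close, every escrowed coin votes as pre-programmed
        self.closed = True
        self.total = sum(self.votecoin.values())

    def redeem(self, holder):
        # each votecoin redeems for its ether plus a pro-rata bribe
        assert self.closed
        coins = self.votecoin.pop(holder, 0)
        return coins + self.pool * coins / self.total
```

note how a holder who buys votecoin mid-vote (via `transfer`) inherits the guaranteed redemption, which is what makes the derivative attractive to parties indifferent to the vote's outcome.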
because the carbonvote system publishes results including full voter logs on etherscan, it is relatively trivial to check which way someone has voted using any external web scraping oracles, paying them if their vote included in the final snapshot agreed with the buyers’ preference. a dark dao-like model can also trivially be used. each user simply runs a wallet that, some time after each transfer transaction, also votes the desired way on the carbonvote (in fact this may become standard behavior for many wallets). the user is only paid if such votes are registered, so the user is incentivized to make sure this vote transaction is included on-chain. there is no way for the network to tell how many votes in a given carbonvote are generated by such a vote buying cartel, and how many are legitimate. inherent in any of these schemes is the ability to minimize trust when pooling assets across multiple vote buyers; bribery smart contracts could simply allow anyone to pay into the bribepool, and sgx networks can be architected similarly for open participation. some schemes, such as the eip vote, have even more severe problems. in these schemes, if a user votes twice, the later of such votes is chosen. a simple and severe attack is then to simply collect signatures on both “yes” and “no” votes from a user, spamming the chosen signature towards the end of the election period and relying on an ability to overwhelm the blockchain to ensure that most such votes persist. alternatively, because contract deployers are able to vote for all the funds in a given contract, another attack is to simply force a user to use a contract-based wallet, deployed by the vote buyer, for the duration of the vote; the buyer can then arbitrarily control the votes of all funds locked in such contracts without taking custody of the funds. bitcoin is not immune to this problem either.
bitcoin’s community often leans on coin-votes, and similar vote buying schemes can be applied (as either ethereum smart contracts as in this work, or in dark dao style; bitcoin itself does not provide native support for sufficiently rich contracts to buy votes). beyond voting - attacking consensus astute readers may point out that all permissionless blockchains inherently rely on some form of permissionless voting, namely the consensus algorithm itself. every time a blockchain comes to global consensus on some attributes of state, what is taking place is essentially an (often coin- or pow-weighted) vote in a permissionless setting. it is perhaps no surprise that “vote buying” has seen some exploration in these contexts. for example, smart contracts on ethereum can be used to attack ethereum and other blockchains through censorship, history revision, or incentivizing empty blocks. such attacks work directly on the proof-of-work vote itself, bribing miners according to their weighted work. there is little reason to believe that proof of stake systems would be immune to similar attacks, especially in the presence of complex delegated voting structures whose incentives may be unclear and whose formal analysis may be incomplete or nonexistent. a disturbing concept related to our exploration of dark daos for vote buying is what we term the “fishy dao”, named after the classic flash game. in this (super fun!) game, you start out as a small fish. the rules are simple: you can eat smaller competitor fish, but not fish the same size as or larger than you. you get a little bit bigger after each meal, until you eventually (if you are lucky) grow to dominate the ocean. a modern equivalent that doesn’t require flash and adds networking is agar.io. it’s like fishy, but the small fish can gang up on the bigger ones too! a fishy dao would use dark dao-like technology as described above to do the same for blockchains.
using sgx, fishy dao members can receive non-transferable (dao members can verify message authenticity, but non-members cannot tell if a message is forged) notifications when an attack threshold is reached, allowing them to short currency markets shortly before such an attack. each blockchain fishy dao attack brings some profit to fishy dao, and the ensuing publicity of even failed attacks gives fishy dao notoriety with the profit-seeking but perhaps unethical (in some frameworks) crowd. if fishy dao fails to achieve required thresholds, fishy dao simply fades away and refunds its participants, potentially but not necessarily burning some amount of their money to incentivize them to recruit participation. fishy dao requires dark dao technology, as if performed in the open with a smart contract, observable participation rates would provide market signals to the underlying blockchain’s price, rendering the attack unprofitable by allowing risk to be priced in. it is the cryptographically enforceable information asymmetry between dao members and wider ecosystem participants that makes such an attack feasible. other applications note that dark daos have implications far beyond the above. consider for example a dark dao that aimed to profitably buy users’ basic income identities, paying a small fee up front to receive a user’s regular basic income payments. or a dark dao for getting through credit checks secured on key-based identities by leasing (with trust minimized limitations) such keys from users with good credit. or a dark dao that runs an evil mining pool, provably attacking an asic-based proof of work cryptocurrency with an unstoppable attack pool of potentially undetectable size. one can also imagine that with identity, there may be social safeguards against buying behavior in the identity system itself.
For example, some identity systems may allow a user to show up in person to revoke or manage identities, which could socially circumvent the Dark DAO's automated technical safeguards against identity theft. There are still ways around this: the classic solution for loans is collateral. A "bondsman"-like business could also provide social guarantees of repayment, through physical or legal intimidation and contracts, for users who cannot afford collateral. Payday-loan and bail-bond establishments would be ideally suited to that kind of business if such a permissionless basic-income system were ever deployed alongside current market systems, at least in the US (in many other places there are likely even less savory institutions that would be willing to step in for an appropriate cut).

The coordination space of mechanisms in blockchains is large, and the environment hostile. All voting or financially incentivized identity-based schemes should carefully consider the implications of the underlying permissionless model for long-term viability, scalability, and security.

Core insights

Maybe you are an academic skimming this article, or maybe an interested user wondering exactly what this all means. There are a few interesting and very surprising (in the research literature) insights to be gleaned from our thought experiments above:

Permissionless e-voting *requires* trusted hardware. Perhaps the most surprising result is this one. In any model where users are able to generate their own keys (required for the "permissionless" model), low-coordination bribery attacks are inherently possible using trusted hardware as described above. The only defense is more trusted hardware: to know that a user has access to their own key material (and therefore cannot be coerced or bribed), some assurance is required that the user has seen their key.
Trusted hardware can provide this assurance either through a trusted hardware-token setup channel (similar to governments that use electronic voting for democracy), or through an SGX-based system that guarantees that every voter's key material has been revealed to whatever operating system they are running. This inherently implements the kind of trusted setup/key-generation assumptions that academic e-voting schemes have relied on for years. Clearly, such assumptions are required for any vote even in the presence of trusted hardware, and in their absence votes can be provably bought, sold, bribed, and coerced with low friction: a surprising result with severe implications for on-chain voting.

The space of voting and coordination mechanisms is massive and extremely poorly understood. As explored through concrete examples of how to handle, e.g., smart-contract voting and vote changes on Ethereum, it is clear that a wide range of design decisions fundamentally alters the incentive structures of voting mechanisms (we explore these in Appendix A below). These mechanisms are extremely complex, and their incentive structures can be altered by other coordination mechanisms such as smart contracts and trusted hardware-based DAOs. The properties of these mechanisms, especially when several of them interact or are actively attacked by resourced actors, are extremely poorly understood. No mechanism of this kind should be used for direct on-chain decision making any time soon.

The same class of vote-buying attacks works for any identity system. These attacks are not only for votes. Imagine an identity system which gives users the right to a basic income, paid weekly. I can simply pay you cash up front to buy your identity, and therefore your share of income for the next year, and indeed I should do so if my time value of money is lower than yours (as wealth asymmetries often imply).
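To make the time-value arithmetic concrete, here is a toy discounted-cash-flow check (my own illustrative rates and amounts, not figures from this post): a year of weekly payments is worth more to a patient buyer than to a cash-constrained seller, so there is always an up-front price at which both rationally trade.

```python
# Toy discounted-cash-flow model; every number here is invented for
# illustration. A patient buyer (low discount rate) values a stream of
# weekly basic-income payments more than an impatient, cash-constrained
# seller (high discount rate) does, so an up-front sale profits both.

def present_value(payment, weekly_rate, weeks=52):
    """Present value of `weeks` weekly payments at a weekly discount rate."""
    return sum(payment / (1 + weekly_rate) ** t for t in range(1, weeks + 1))

payment = 100.0
buyer_pv = present_value(payment, weekly_rate=0.001)   # patient buyer
seller_pv = present_value(payment, weekly_rate=0.010)  # impatient seller

# any price between seller_pv and buyer_pv leaves both sides better off
print(seller_pv < buyer_pv)  # True
```

Wealth asymmetry is exactly what makes the two discount rates differ, which is why the trade (and hence the identity sale) is individually rational on both sides.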
This is the case for any system involving identity: with relatively low trust, the behavior of user identities can be constrained, and such constraints can be bought and sold on the open market. This has a severe and fundamental impact on the robustness of any on-chain economic mechanism with a permissionless identity component.

On-chain voting fundamentally degrades to plutocracy. Voting and democracy fundamentally rely on secret-ballot assumptions and identity infrastructure that exist only in meatspace. These assumptions do not carry over to blockchains, leaving the same techniques fundamentally broken in a permissionless model. External, even trusted, identity systems again do not address the issue as long as users can generate their own keys (see above). Hard fork-based governance provides users the only exit from such plutocracy.

A natural question to ask, given the above, is whether we've already arrived at plutocracy. The answer is "probably not". There is some evidence that the ad-hoc, informal, fork-based governance models that govern blockchains like Bitcoin and Ethereum actually provide robust protection of user rights. In this model, any upgrade must offer users an active choice, and groups of users can opt out if they disagree with rule changes. On-chain voting, on the other hand, creates a natural default that, especially when combined with inattentive or uncaring users, can create strong anti-fork inertia around staying with the coin-vote.

Multiple interacting blockchains can break the incentive compatibility of all chains. Importantly and critically, the Fishy DAO-style attack we've explored shows that competition between blockchains can fundamentally affect the internal equilibria of all such chains. For example, in a world with only one smart contract system, Ethereum, internal incentives may lead to stable equilibria.
With two players, and the underdog incentivized to launch a bribery attack to destroy its competitor, such equilibria can be disrupted, changed, and destroyed. A critical and surprisingly underexplored open area of research is modeling the macroeconomics of competition between blockchains, to gain insight into exactly how such internal equilibria can fail. We find it intuitively near-certain that critical black-swan events are currently lurking in the complexity of blockchain governance and interoperability.

Obviously, these insights all require further exploration, tweaking, and proof. But we think we have at least provided some intuition, in a principled analysis framework, for why we believe the above to hold.

Conclusion

The trend of on-chain voting in blockchains is inspired by the long human tradition of voting and democracy. Unfortunately, safeguards available to us in the real world, such as enforced private/deniable voting, approximate identity controls, and the attributability of widespread fraud, are simply not available in the permissionless model. When public keys generated by the users themselves are used, on-chain voting cannot provide these users with any anti-coercion guarantees. Elaborate voting schemes do little to quell the problem (and in many cases indeed aggravate it). On-chain voting schemes further complicate incentives, creating an unstable and tangled mess that can at any time be altered by trustless smart contract or Dark DAO-style vote buying, bribery, and griefing schemes.

We encourage the community to be highly skeptical of the outcome of any on-chain vote, especially as on-chain voting becomes an ever more important staple of decision making in blockchain systems. The space for designing mechanisms that enable new forms of abuse with lower-than-ever coordination costs supports the position that votes should be used for signals, not decisions, and that a wide variety of voting mechanisms should fill such roles.
Without such safeguards, it remains possible that all on-chain voting systems will degenerate into plutocracy through direct vote and participation buying, and even vote tokenization. Such attacks have substantial implications for the future security of all blockchain-based voting systems.

Acknowledgements

We'd like to thank Patrick McCorry for his helpful, thorough feedback throughout the lifecycle of this post, and for his pioneering work on vote buying and on-chain voting systems. We also thank Omer Shlomovits and István András Seres for their helpful comments on early-access versions of this post.

Appendix A - on-chain vote differentiators

We notice several distinguishing factors in on-chain voting systems:

Vote-changing ability: If users cannot change their vote, trivial vote buying is possible with any method that provides a cryptographically checkable receipt. A smart contract can simply bribe users up front for a vote that can now never be changed. Most schemes, however, allow users to change or withdraw their votes, meaning bribery needs some continuous time component (or must be done after a snapshot of the vote is taken). Exponentially increasing payouts over time, which discourage coin movement and encourage long-term signaling, and payout bonuses at vote completion are tools potential vote buyers can use to create viable vote-buying schemes even when users are allowed to change votes.

Smart contract/delegated voting: Who gets to vote for funds stored in smart contracts? This is an open question that plagues existing designs; the original carbonvote allows any contract that can call a function to vote and later change its mind. The EIP vote allows contract deployers to vote on behalf of contracts, a decision widely criticized as being intended to sway vote outcomes. Neither design seems ideal.
Indeed, it seems intuitively difficult for a single design to fairly capture all the custody nuances of smart contracts: funds-holding contracts range from simple multisignature accounts to complex decentralized organizations with their own revenue streams and inter-contract financial relationships. Which of these coins have voting rights, and how to fairly assign those rights, remains an entirely unexplored philosophical prerequisite for building a fair on-chain voting system. Forcing contract authors to provide explicit voting functionality is likely also insufficient, as the requirements of that functionality can change in the future without backwards compatibility (through either chain voting or forks).

Deniability/provability: All of the schemes explored in this article have features which make them particularly amenable to vote buying: they provide the voter with some form of trust-minimizing cryptographic proof of their vote, through an on-chain log, a secured web interface, or a smart contract's state. Such schemes are particularly vulnerable to vote buying, as they make it easy for smart contract-style logic to validate votes. Some traditional e-voting schemes in the academic literature provide a property known as coercion resistance. In these schemes, a user is able to change their mind post-coercion using the key they use for voting, and votes are not attributable to individual users. In general, the privacy concerns of having votes associated with any kind of long-standing identity, especially one holding coins, are severe. Such concerns would be completely disqualifying for any serious voting system in the real world, and probably should be disqualifying in all thoughtful on-chain voting design criteria.

Phil Daian is a first-year Ph.D.
student at Cornell University, interested in cryptocurrencies and smart contracts. Tyler Kell is a research engineer at IC . Ian Miers is a postdoc at Cornell Tech and a cryptographer working on anonymous systems. Ari Juels is a professor at the Jacobs Technion-Cornell Institute at Cornell Tech in NYC and co-director of IC .
Binance: Italy, Lithuania, Hong Kong all issue warnings; Brazil director quits
By Amy Castor

Ever since Germany's BaFin and the UK's FCA issued warnings against Binance, the dominoes have continued to topple. Global regulators are fed up with the world's biggest crypto exchange. This last week, three more jurisdictions issued warnings about Binance's tokenized stocks, joining several others in voicing their concerns about the exchange.
In a press release on Thursday, Italy's market watchdog Consob warned investors that Binance and its subsidiaries "are not authorized to provide investment services and activities in Italy." The notice specifically points to Binance's "stock token."

Lithuania's central bank issued a warning on Friday about Binance UAB, a Binance affiliate, providing "unlicensed investment services." "Companies that are registered in Lithuania as virtual currency exchange operators are not supervised as financial service providers. They also have no right to provide any financial services, including investment services," the Bank of Lithuania said.

Also on Friday, Hong Kong's Securities and Futures Commission announced that Binance is not licensed to trade stock tokens in the territory. In a statement, Thomas Atkinson, the SFC's executive director of enforcement, had stern words for the exchange: "The SFC does not tolerate any violations of the securities laws and will not hesitate to take enforcement action against unlicensed platform operators where appropriate."

Binance responded to the mounting pressure by announcing on its website that it would cease offering stock tokens. Effective immediately, you can no longer buy stock tokens on Binance, and the exchange will stop supporting them on October . As for the unlucky ones who are still holding Binance stock tokens, you apparently have days to try and offload them onto someone else.

The exchange also deleted mentions of stock tokens on its website. If you click on a link to "Introduction to Stock Tokens" on the site, you get a " error." You can still visit the page here, however.

A short-lived bad idea

Binance introduced its tokenized stocks idea on April , starting with Tesla, followed by Coinbase, and later MicroStrategy, Microsoft and Apple. (Links are to archives on the Wayback Machine.) "Unlike traditional stocks, users can purchase fractional shares of the listed companies with stock tokens.
For instance, for a Tesla share that trades at over $ per share, stock tokens enable investors to buy a piece of the underlying share (e.g., . ) instead of the entire unit," Binance explained on its website. Prices were settled in BUSD, a stablecoin Binance created in partnership with Paxos, a NY-based company.

Binance claims its stock tokens are fully backed by shares held by CM-Equity AG, a regulated asset management firm in Germany. The exchange also said Friday that users in the EEA and Switzerland will be able to transition their stock token balances to CM-Equity AG once the brokerage creates a special portal for that purpose, sometime in September or early October. However, the transition will require additional KYC.

Binance, whose modus operandi has always been to ignore the laws and do whatever, launched its stock token service two days before US crypto exchange Coinbase went public on the Nasdaq and bitcoin reached an all-time high of nearly $ , . The price of bitcoin is now less than half of that.

In April, Germany's financial regulator BaFin warned that Binance risked being fined for offering its securities-tracking tokens without publishing an investor prospectus. Binance went back and forth with BaFin on the issue, trying to persuade them to take the notice down, according to the FT, but to no avail. The warning stayed up. In June, the UK followed with its own consumer warning, and then, one by one, a host of other global regulators issued their own cautions about Binance, and banks began cutting off services to the exchange, essentially a form of slow strangulation.

Binance clearly wasn't thinking when it introduced those stock tokens. The move appears to have been driven by the hubris of its CEO CZ, who is now realizing that actions have repercussions. Or maybe not, since his recent tweets and a blog post celebrating Binance's fourth birthday seem to reflect an ongoing detachment from reality.
"Together, we can increase the freedom of money for people around the world, in safe and compliant ways," he wrote. By freedom, I assume he means freedom to operate outside the law, or freedom to freeze withdrawals on his exchanges, a frequent user complaint, according to Gizmodo.

FTX and Bittrex

Binance isn't the only crypto exchange to offer stock tokens. Sam Bankman-Fried's FTX exchange also offers tokenized stocks (archive), a service it added in June. I suspect that a lot of Binance's business will flow over to FTX, and we'll soon see similar regulatory crackdowns on FTX. Like Binance, FTX has a US version of its exchange and a main site. FTX is registered in Antigua and Barbuda with headquarters in Hong Kong. It offers stock tokens for Tesla, GameStop, Beyond Meat, PayPal, Twitter, Google, Amazon, and a host of others.

Bittrex Global, another exchange with a regulated US-based arm, also offers an impressive array of stock tokens. The Liechtenstein-based firm added the service in December , according to a press release at the time, noting that "these tokenized stocks are available even in countries where accessing US stocks through traditional financial instruments is not possible." FTX and Bittrex also claim their stock tokens are backed by actual stocks held by CM-Equity AG.

Binance Brazil director resigns

Banks are not the only ones distancing themselves from Binance these days. Amidst the recent drama, Ricardo Da Ros, Binance's director of Brazil, announced his departure on LinkedIn. He had only been with the company for six months. "There was a misalignment of expectations about my role and I made the decision according to my personal values," he said.

Other employees have also exited stage left in recent months. Wei Zhou, the chief finance officer at Binance, quit abruptly in June, and Catherine Coley, the CEO of Binance.US, stepped down in May, though nobody has heard from her since.

If you like my work, please support my writing.
Subscribe to my Patreon account for as little as $ a month.

Posted on July , by Amy Castor.

Open Source Exile
An open sourcer in exile

Tuesday, March

#christchurchmosqueshootings

This post is a personal reflection on the recent events in Christchurch. Many people have proposed different responses, making some very good points. Here are my thoughts:

Racism and bigotry have never been solved by wagging fingers at bigots. They have been addressed by empowering the targets and by systematically calling out minor acts of racism and bigotry so they become de-normalised. There have been lots of great suggestions in the last couple of days as to how to empower the targets; listen to the targets on how they need to be empowered, not a white guy like me.

Enact a law that permanently raises the New Zealand refugee quota automatically in response to anti-immigrant hate crimes (starting with the Christchurch incident). This explicitly and clearly makes anti-immigrant hate crimes' primary motivation self-defeating.
Doubling our quota also brings it into line with international norms.

Ban the commercial trading of firearms, moving their import to the not-for-profit sector (i.e. gun clubs) or to a personal activity. This removes the incentives behind the current Gun City advertisements and tempers the commercial incentives for importing guns. Introduce a systematic buy-back program for weapons (guns, replica swords, etc.).

Make owning a gun an inconvenience, doubly so in urban areas. This likely involves significantly tightening the licensing requirements (restricting types of guns, requiring advanced first aid and similar courses, etc.) and random checks on licensees' secure lockup measures. It may also involve requiring licensees to report shooting trips, shooting range visits, and so on. Done right, this may even have the side effect of improving our conservation efforts by giving us a better idea of who's shooting which introduced and native animals.

Gun range licenses should be managed in a similar way to alcohol licenses, with renewals, public notifications, etc.

Update the rules around legal deposit so that when organisations and publishers selectively remove or update content from their websites they are required to notify the National Library, and the National Library can broadcast this taken-down content. This attempts to preserve the public record by amplifying the Streisand effect; efforts by public figures to sanitise their pasts without public apology need to be resisted. If we're orchestrating large-scale take-downs of offensive New Zealand content (such as videos of shooters shooting people) from the web, we need to reconcile this with certain statutory duties, such as the requirement that the National Library collect and archive New Zealand web content. Collecting and archiving such offensive material may sound bizarre, but not doing so leaves us open to the kinds of revisionism that appear to fuel this kind of behaviour.
If we're going to continue to have religious education and schooling, it needs to address issues of religious hate rather than being the covert recruitment operation it appears to be at the moment.

We need to ask ourselves whether some of our brands (particularly sports brands) need to change their branding. The most effective way is probably the Christchurch City Council drafting a bylaw saying that local sports people and teams using its facilities must be named after animals with no negative connotations, with a limited year exception for existing teams to meet their contractual obligations. Other councils would soon follow, and a realistic time frame for renaming allows for planning around merchandising, team apparel and so forth.

Have an explicit fund for public actors (museums, galleries, libraries, academics, tohunga, imams, etc.) to generate 'content' (everything from peer-reviewed papers to museum experiences, from school teaching resources to Te Ara articles, from poetry competitions to murals) on some of the deeper issues here. There's a great need for young and old to engage with these issues, now and in the decades to come.

Find ways to amplify minority and oppressed voices. In theory, blogs and social media were meant to be a way that we could find, and the media pick up on, these voices in times like these, but across many media outlets this is manifestly not happening. We're seeing straight white males write that New Zealand has no discrimination problems, and editors sending those pieces to print. We're seeing 'but he was such a nice young man' stories. It's no coincidence that the media outlets and pundits doing this are largely the same ones who have previously been accused of racism. We need to find ways to fix this, if necessary leveraging advertisers and/or adding conditions to spectrum licenses.

We need to seriously reflect on whether an apology is needed in relation to the New Zealand Police raids, which now stand in a new light.
The law of unintended consequences means that there will be side effects. The most obvious two from this list may be increased barriers to recreational gun clubs (including Olympic pistol shooting, which is pretty hard to argue isn't a genuine sport, but which has never really been all that big in New Zealand) and decreased amateur shooting of pest species (deer, pigs, etc.) on public conservation land (which is a more serious issue).

Posted by Stuart Yeates

Monday, October

How would we know when it was time to move from TEI/XML to TEI/JSON?

This post was inspired by "TEI Next" by Hugh Cayless. How would we know when it was time to move from TEI/XML to TEI/JSON? If we stand back and think about what it is we (the TEI community) need from the format:

a common format for storing and communicating texts and augmentations of texts (transcriptions, manuscript description, critical apparatus, authority control, etc.);

a body of documentation for shared use and understanding of that format;

a method of validating texts in the format as being in the format;

a method of transforming texts in the format for computation, display or migration;

the ability to reuse the work of other communities so we don't have to build everything for ourselves (Unicode, IETF language tags, URIs, parsers, validators, outsourcing providers who are tooled up to at least have a conversation about what we're trying to do, etc.).

[Everyone will have their slightly different priorities for a list like this, but I'm sure we can agree that a list of important functionality could be drawn up and expanded to a requirements list at a sufficiently granular level so we can assess different potential technologies against those items.]

If we really want to ponder whether TEI/JSON is the next step after TEI/XML, we need to compare the two approaches against such a list of requirements. Personally I'm confident that TEI/XML will come out in front right now.
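One concrete illustration of why I expect XML to hold its lead on such a requirements list (my own toy example, not Hugh's): TEI documents are full of ordered mixed content, which XML tooling preserves but which a naive dict-based JSON mapping silently destroys. Note this fragment omits the TEI namespace for brevity.

```python
# Toy example: XML keeps interleaved text and elements in document order;
# a naive tag-keyed JSON mapping loses the ordering and the running prose.

import xml.etree.ElementTree as ET

tei = '<p>She cited <title>Hamlet</title> twice, then <hi>stopped</hi>.</p>'
p = ET.fromstring(tei)

# the ordered view XML gives us: leading text, then (tag, text, tail) triples
parts = [p.text] + [(child.tag, child.text, child.tail) for child in p]
print(parts)

# a naive JSON-style mapping keyed by tag drops order and inter-element text
naive = {child.tag: child.text for child in p}
print(naive)  # {'title': 'Hamlet', 'hi': 'stopped'}
```

A faithful JSON serialization of mixed content is certainly possible (ordered arrays of text/element nodes), but at that point much of JSON's supposed simplicity advantage has evaporated, which is the requirements-list point above.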
Whether JavaScript has the potential to replace XSLT as the preferred method for really exciting interfaces to TEI/XML docs is a much more open question, in my mind. That's not to say that the criticisms of XML aren't true (they are) or valid (they are) or worth repeating (they are), but perfection is commonly the enemy of progress.

Posted by Stuart Yeates

Sunday, October

Whither TEI? The next thirty years

This post is a direct response to some of the organisational issues raised in https://scalablereading.northwestern.edu/?p=

I completely agree that we need to significantly broaden the base of the TEI. A x campaign is a great idea, but better is a , x goal, or a , x goal. If we can reduce the cost to the normal range of a hardback text, most libraries will have delegated signing authority to individuals in acquisitions, and only one person will need to be convinced rather than a chain of people. But how could we scale to , institutions? To scale like that, we need to think (a) in terms of scale and (b) in terms of how to make it easy for members to be a part of us.

Scale ( )

A recent excellent innovation in the TEI community has been the appointment of a social media coordinator. This is a great thing, and I've certainly learnt about happenings I would not otherwise have been exposed to. But by nature the concept of 'a social media coordinator' can't scale (one person in one time zone with one set of priorities...). If we look at what mature large-scale open projects (Debian, Wikimedia, etc.) do for social media, planets are almost always part of the solution.
A planet for TEI might include (in no particular order):

x blog feeds from TEI-specific projects
x blog feeds from TEI-using projects (limited to those posts tagged TEI)
x RSS feed for changes to the TEI wiki (limited to one/day each)
x RSS feed for the Jenkins server (limited to successful builds only; limited to one/day each; tweaked to include full context and links)
x RSS feeds for GitHub repositories not covered by the Jenkins server (limited to one/day each)
x RSS feeds for other sundry repositories (limited to one/day each)
x blog feeds from TEI people (limited to those posts tagged TEI)
x RSS feeds from TEI people's Zotero bibliographic databases (limited to those bibs tagged TEI; limited to one/day each)
x RSS feed for official TEI news
x RSS feed of edits for the TEI article on each language Wikipedia (limited to one/day each)
x RSS feed of announcements from the jTEI
x RSS feed of new papers in the jTEI
...

The diversity of the planet would be incredible compared to current views of the TEI community, and it's all generated as a byproduct of what people are already doing. There might be some pressure to improve commit messages in some repos, but that might not be all bad. Of course, the whole planet is available as an RSS feed, and there are RSS-to-Facebook (and Twitter, Yammer, etc.) converters if you wish to do TEI in your favourite social media. If the need for a curated Facebook feed remains, there is now a diverse constant feed of items to select from. This is a social media approach at scale.

Scale ( )

There is an annual international conference which is great to attend. There is a perception that engagement in the TEI community requires attendance at said conference. That is a huge barrier to entry for small projects, particularly those in far-away places (think global south / developing world / etc.). The TEI community should seriously consider a policy for decision making that explicitly removes assumptions about attendance.
something as simple as requiring draft papers intended for submission and agendas to be published well in advance of meetings and a notice to be posted to tei-l. that would allow for thoughtful global input, scaling community from those who can attend an annual international conference to a wider group of people who care about the tei and have time to contribute. make it easy ( ) libraries (at least the library i work in and libraries i talk to) buy resources based on suggestions and lobbying by faculty but renew resources based largely on usage. if we want all those libraries to have tei on automatic renewal we need usage statistics. the players in the field are sushi and counter (sushi is a harvesting system for counter). maybe the tei offers members usage stats from diverse tei-using sites. it's not clear to me without deep investigation whether the tei could offer these stats to members at very little on-going cost to us, but it would be a member benefit that all acquisitions librarians, their supervisors and their auditors could understand and use to evaluate their tei membership subscription. i believe that that comparison would be favourable. of course, the tei-using sites generating the traffic are going to want at least some cut of the subs, even if it's just a discount against their own membership (thus driving the number of participating sites up and the perceived member benefits up) and free support for the stats-generating infrastructure. for the sake of clarity: i'm not suggesting charging for access to content, i'm suggesting charging institutions for access to statistics related to access to the content by their users. make it easy ( ) academics using computers for research, whether or not they think of the field as digital humanities, face a relatively large number of policies and rules imposed by their institutions, funders and governments. the tei community can / should be selling itself as the approach to meet these. copyright issues?
have some corpora that are available under a cc license. need to prove academic outputs are archivable? here's the pronom entry (note: i'm currently working on this). management doesn't think the department has the depth of tei experience to enroll phds in tei-centric work? here's a map of global tei people to help you find local backups in case staff move on. looking for a tei consultant? a different facet of the same map gives you what you need. you're a random academic who knows nothing about the tei but have been assigned a tei-centric paper as part of a national research assessment exercise? here's an outline of tei's academic credentials. .... make it easy ( ) librarians love quality marc / marcxml records. many of us have quality marc / marcxml records for our tei-based web content. might this be offered as a member benefit? make it easy ( ) as far as i can tell the tei community makes very little attempt to reach out to academic communities other than 'literature departments and cognate humanities disciplines'. attracting a more diverse range of skills and academics will increase our community in depth and breadth. outreach could be: something like css zen garden http://www.csszengarden.com/ only backed by tei rather than html; a list of 'hard problems' that we face that various divergent disciplines might want to set as second or third year projects, each problem with a brief description and pointers to things like: transformation for display of documents with multiple levels of footnotes, multiple obscure scripts, non-unicode characters, and so forth; schema / odd auto-generation from a corpus of documents; ... engaging with a group like http://software-carpentry.org/ to ubiquify tei training .. end note i'm not advocating that any particular approach is the cure-all for everything that might be ailing the tei community, but the current status quo is increasingly seeming like benign neglect.
we need to change the way we think about tei as a community. posted by stuart yeates at : no comments: tuesday, october thoughts on the ndfnz wikipedia panel last week i was on an ndfnz wikipedia panel with courtney johnston, sara barham and mike dickison. having reflected a little and watched the youtube at https://www.youtube.com/watch?v= b x sqo ua i've got some comments to make (or to repeat, as the case may be). many people, apparently including courtney, seemed to get the most enjoyment out of writing the 'body text' of articles. this is fine, because the body text (the core textual content of the article) is the core of what the encyclopaedia is about. if you can't be bothered with wikiprojects, categories, infoboxes, common names and wikidata, you're not alone and there's no reason you need to delve into them to any extent. if you start an article with body text and references that's fine; other people will, to a greater or lesser extent, do that work for you over time. if you're starting a non-trivial number of similar articles, get yourself a prototype which does most of the stuff for you (i still use https://en.wikipedia.org/wiki/user:stuartyeates/sandbox/academicbio which i wrote for doing new zealand women academics). if you need a prototype like this, feel free to ask me. if you have a list of things (people, public art works, exhibitions) in some machine readable format (excel, csv, etc) it's pretty straightforward to turn them into a table like https://en.wikipedia.org/wiki/wikipedia:wikiproject_new_zealand/requested_articles/craft#proposed_artists or https://en.wikipedia.org/wiki/enjoy_public_art_gallery; send me your data and what kind of direction you want to take it.
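for the curious, the csv-to-table step really is straightforward. a minimal sketch in python (the column names in the test data are invented for illustration; the output follows the standard '{| ... |}' wikitext table convention):

```python
# turn a csv list (e.g. name, dates, notes) into wikipedia table markup.
# the first csv row is assumed to be the header; adjust to your spreadsheet.
import csv
import io

def csv_to_wikitable(csv_text):
    """render csv text as a sortable wikitext table."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    lines = ['{| class="wikitable sortable"']
    lines.append("! " + " !! ".join(header))
    for row in body:
        lines.append("|-")
        lines.append("| " + " || ".join(row))
    lines.append("|}")
    return "\n".join(lines)
```

paste the result into a sandbox page, preview, and tidy from there.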
if you have a random thing that you think needs a wikipedia article, add it to https://en.wikipedia.org/wiki/wikipedia:wikiproject_new_zealand/requested_articles if you have a hundred things that you think need articles, start a subpage, a la https://en.wikipedia.org/wiki/wikipedia:wikiproject_new_zealand/requested_articles/craft and https://en.wikipedia.org/wiki/wikipedia:wikiproject_new_zealand/requested_articles/new_zealand_academic_biographies both completed projects of mine. sara mentioned that they were thinking of getting subject matter experts to contribute to relevant wikipedia articles. in theory this is a great idea and some famous subject matter experts contributed to britannica, so this is well-established ground. however, there have been some recent wikipedia failures, particularly in the sciences. people used to ground-breaking writing may have difficulty switching to a genre where no original ideas are permitted and everything needs to be balanced and referenced. preparing for the event, i created a list of things the awesome dowse team could do as follow-ups to the craft artists' work, but we never got to that in the session, so i've listed them here:
[[list of public art in lower hutt]] since public art is out of copyright, someone could spend a couple of weeks taking photos of all the public art and creating a table with clickable thumbnail, name, artist, date, notes and gps coordinates. could probably steal some logic from somewhere to make the table convertible to a set of points inside a gps for a tour.
publish from their archives a complete list of every exhibition ever held at the dowse since founding. each exhibition is a shout-out to the artists involved and the list can be used to check for potentially missing wikipedia articles.
digitise and release photos taken at exhibition openings, capturing the people, fashion and feeling of those eras. the hard part of this, of course, is labelling the people.
reach out to their broader community to use the dowse blog to publish community-written obituaries and similar content (i.e. encourage the generation of quality secondary sources).
engage with your local artists and politicians by taking pictures at dowse events, uploading them to commons and adding them to the subjects' wikipedia articles, making attending a dowse exhibition opening the easiest way for locals to get a new wikipedia image.
i've not listed the 'digitise the collections' option, since at the end of the day, the value of this (to wikipedia) declines over time (because there are more and more alternative sources) and the price of putting them online declines. i'd much rather people tried new innovative things while they have the agility and leadership that lets them do it, because that's how the community as a whole moves forward. posted by stuart yeates at : no comments: labels: wikipedia thursday, october feedback on nlnz 'digitalnz concepts api' this blog post is feedback on a recent blog post 'introducing the digitalnz concepts api' http://digitalnz.org/blog/posts/introducing-the-digitalnz-concepts-api by the national library of new zealand's digitalnz team. some of the feedback also rests on conversations i've had with various nlnz staffers and other interested parties and a great stack of my own prejudices. i've not actually generated an api key and run the thing, since i'm currently on parental leave. parts of the concepts api look very much like authority control, but authority control is not mentioned in the blog post or the docs that i can find. it may be that there are good reasons for this (such as parallel comms in the pipeline for the authority control community) but there are also potentially very worrying reasons. clarity is needed here when the system goes live.
all the urls in examples are http, but the ala's freedom to read statement requires all practical measures be taken to ensure the confidentiality of the reader's searching and reading. thus, if the api is to be used for real-time searching, https urls must be an option. there is insufficient detail of the identifiers in use. if i'm building a system to interoperate with the concepts api, which identifiers should i be keeping at my end to identify things at the digitalnz end? the clearer this definition is, the more robust this interoperability is likely to be; there's a very good reason for the highly structured formats of identifiers such as isni and isbn. if nothing else a regexp would be very useful. personally i'd recommend browsing around http://id.loc.gov/ a little and rethinking the url structure too. there needs to be an insanely clear statement on the exact relationship between digitalnz concepts and those authority control systems mapped into viaf. both digitalnz concepts and viaf are semi-automated authority matching systems and if we're not careful they'll end up polluting each other (as, for example, dnb already has with gender data). deep interoperability is going to require large-scale matching of digitalnz concepts with things in a wide variety of glam collections and incorporating identifiers into those collections' metadata. that doesn't appear possible with the current licensing arrangements. maybe a flat-file dump (csv or json) of all the concepts under a cc license? urls to rights-obsessed partners could be excluded. if non-techies are to understand concepts, http://api.digitalnz.org/concepts/ is going to have to provide human-comprehensible content without an api key (i'm guessing that this is going to happen when it comes out of beta?). mistakes happen (see https://en.wikipedia.org/wiki/wikipedia:viaf/errors for recently found errors in viaf, for example).
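to illustrate the earlier point about highly structured identifiers: isni (and orcid, which shares its number space and its iso 7064 mod 11,2 check character) can be validated mechanically, which is exactly what makes transcription errors catchable. a sketch in python:

```python
# validate the shape and check character of an isni-style identifier.
# isni and orcid both compute an iso 7064 mod 11,2 check over 15 digits,
# with 'X' standing in for a check value of 10.
import re

ISNI_RE = re.compile(r"^\d{15}[\dX]$")

def isni_check_char(digits15):
    """compute the iso 7064 mod 11,2 check character for 15 digits."""
    total = 0
    for d in digits15:
        total = (total + int(d)) * 2
    remainder = (12 - total % 11) % 11
    return "X" if remainder == 10 else str(remainder)

def is_valid_isni(isni):
    """accept isnis with or without the conventional spaces / hyphens."""
    isni = isni.replace(" ", "").replace("-", "")
    if not ISNI_RE.match(isni):
        return False
    return isni[-1] == isni_check_char(isni[:-1])
```

a published regexp plus a check algorithm like this is the kind of definition that would make interoperating with the concepts api robust.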
there needs to be a clear contact point and likely timescale for getting errors fixed. having said all that, it looks great! posted by stuart yeates at : comments: monday, july bibframe adrian pohl wrote some excellent thoughts about the current state of bibframe at http://www.uebertext.org/ / /name-authority-files-linked-data.html the following started as a direct response but, after limiting myself to where i felt i knew what i was talking about and felt i was being constructive, turned out to be much much narrower in scope. my primary concern in relation to bibframe is interlinking and in particular authority control. my concern is that a number of the players (bibframe, isni, gnd, orcid, wikipedia, etc) define key concepts differently and that without careful consideration and planning we will end up muddying our data with bad mappings. the key concepts in question are those for persons, names, identities, sex and gender (there may be others that i'm not aware of). let me give you an example. in the th century there was a mass creation of male pseudonyms to allow women to publish novels. a very few of these rose to such prominence that the authors outed themselves as women (think currer bell), but the overwhelming majority didn't. in the late th and early st centuries, entries for the books published were created in computerised catalogue systems and some entries found their way into the gnd. my understanding is that the gnd assigned gender to entries based entirely on the name of the pseudonym (i'll admit i don't have a good source for that statement; it may be largely parable). when a new publicly-edited encyclopedia based on reliable sources called wikipedia arose, the gnd was very successfully cross-linked with wikipedia, with hundreds of thousands of articles linked to the catalogues of their works. information that was in the gnd was sucked into a portion of wikipedia called wikidata.
a problem now arose: since there were no reliable sources for the sex information that had been sucked into wikidata from the gnd, the main part of wikipedia (which requires strict sources) blocked itself from showing wikidata sex information. a secondary problem was that the gnd sex data was in iso format (male/female/unknown/not applicable) whereas wikipedia talks not about sex but gender and is more than happy for that to include fa'afafine and similar concepts. fortunately, wikidata keeps track of where assertions come from, so the sex info can, in theory, be removed; but while people in wikipedia care passionately about this, no one on the wikidata side of the fence seems to understand what the problem is. stalemate. there were two separate issues here: a mismatch between the person in wikipedia and the pseudonym (i think) in gnd; and a mismatch between a cataloguer-assigned iso value and a free-form self-identified value. the deeper the interactions between our respective authority control systems become, the more these issues are going to come up, but we need them to come up at the planning and strategy stages of our work, rather than halfway through (or worse, once we think we've finished). my proposed solution to this is examples: pick a small number of 'hard cases' and map them between as many pairs of these systems as possible. the hard cases should include at least: charlotte brontë (or similar); a contemporary author who has transitioned between genders and published broadly similar work under both identities; a contemporary author who publishes in different genres using different identities; ... the cases should be accompanied by instructions for dealing with existing mistakes found (and errors will be found, see https://en.wikipedia.org/wiki/wikipedia:viaf/errors for some of the errors recently found during the wikipedia/viaf matching).
if such an effort gets off the ground, i'll put my hand up to do the wikipedia component (as distinct from the wikidata component). posted by stuart yeates at : comments: labels: bibframe, gnd, linked data, viaf, wikipedia wednesday, june a wikipedia strategy for the royal society of new zealand over the last hours i've had a very unsatisfactory conversation with the individual(s) behind the @royalsocietynz twitter account regarding wikipedia. rather than talk about what went wrong, i'd like to suggest a simple strategy that builds the society's causes in the long term. first up, our resources: we have three wikipedia pages strongly related to the society, royal society of new zealand, rutherford medal (royal society of new zealand) and hector memorial medal; we have a twitter account that appears to be widely followed; we have an employee of rsnz with no apparent wikipedia skills wanting to use wikipedia to advance the public-facing causes of the society, which are: "to foster in the new zealand community a culture that supports science, technology, and the humanities, including (without limitation)—the promotion of public awareness, knowledge, and understanding of science, technology, and the humanities; and the advancement of science and technology education: to encourage, promote, and recognise excellence in science, technology, and the humanities" the first thing to notice is that promoting the society is not a cause of the society, so no effort should be expended polishing the royal society of new zealand article (which would also breach wikipedia's conflict of interest guidelines). the second thing to notice is that the two medal pages contain long lists of recipients, people whose contributions to science and the humanities in new zealand are widely recognised by the society itself.
this, to me, suggests a strategy: leverage @royalsocietynz's followers to improve the coverage of new zealand science and humanities on wikipedia: once a week for a month or two, @royalsocietynz tweets about a medal recipient with a link to their wikipedia biography. in the initial phase recipients are picked with reasonably comprehensive wikipedia pages (possibly taking steps to improve the gender and racial demographic of those covered to meet inclusion targets). by the end of this part followers of @royalsocietynz have been exposed to wikipedia biographies of new zealand people. in the second part, @royalsocietynz still tweets links to the wikipedia pages of recipients, but picks 'stubs' (wikipedia pages with little or almost no actual content). tweets could look like 'hector medal recipient xxx's biography is looking bare. anyone have secondary sources on them?' in this part followers of @royalsocietynz are exposed to wikipedia biographies and the fact that secondary sources are needed to improve them. hopefully a proportion of @royalsocietynz's followers have access to the secondary sources and enough crowdsourcing / generic computer confidence to jump in and improve the article. in the third part, @royalsocietynz picks recipients who don't yet have a wikipedia biography at all. rather than linking to wikipedia, @royalsocietynz links to an obituary or other biography (ideally two or three) to get us started. in the fourth part @royalsocietynz finds other new zealand related lists and gets the by-now highly trained editors to work through them in the same fashion. this strategy has a number of pitfalls for the unwary, including: wikipedia biographies of living people (blps) are strictly policed (primarily due to libel laws); the solution is to try new and experimental things out on the biographies of people who are safely dead.
copyright laws prevent cut and pasting content into wikipedia; the solution is to encourage people to rewrite material from a source into an encyclopedic style instead. recentism is a serious flaw in wikipedia (if the society is years old, each of those decades should be approximately equally represented; coverage of recent political machinations or triumphs should not outweigh entire decades); the solution is to identify sources for pre-digital events and promote their use. systematic bias is an on-going problem in wikipedia, just as it is elsewhere; a solution in this case might be to set goals for coverage of women, māori and/or non-science academics; another solution might be for the society to trawl its records and archives for lists of minorities to publish digitally. everything on wikipedia needs to be based on significant coverage in reliable sources that are independent of the subject; the solution is to start with the sources first. conflict of interest statement: i'm a highly active editor on wikipedia and a significant contributor to many of the wikipedia articles linked to from this post. posted by stuart yeates at : no comments: friday, december prep notes for ndf demonstration i didn't really have a presentation for my demonstration at the ndf, but the event team have asked for presentations, so here are the notes for my practice demonstration that i did within the library. the notes served as an advert to attract punters to the demo; as a conversation starter in the actual demo and as a set of bookmarks of the urls i wanted to open.
depending on what people are interested in, i'll be doing three things:
*) demonstrating basic editing, perhaps by creating a page from the requested articles at http://en.wikipedia.org/wiki/wikipedia:wikiproject_new_zealand/requested_articles
*) discussing some of the quality control processes i've been involved with (http://en.wikipedia.org/wiki/wikipedia:articles_for_deletion and http://en.wikipedia.org/wiki/new_pages_patrol)
*) discussing how wikipedia handles authority control issues using redirects (https://secure.wikimedia.org/wikipedia/en/wiki/wikipedia:redirect ) and disambiguation (https://secure.wikimedia.org/wikipedia/en/wiki/wikipedia:disambiguation )
i'm also open to suggestions of other things to talk about. posted by stuart yeates at : no comments: labels: ndf, wikipedia thursday, december metadata vocabularies lodlam nz cares about at today's lodlam nz, in wellington, i co-hosted a vocabulary schema / interoperability session. i kicked off the session with a list of the metadata schema we care about and counts of how many people in the room cared about each. here are the results:
library of congress / naco name authority list
māori subject headings
library of congress subject headings
sonz
linnean
getty thesauri
marsden research subject codes / anzrsc codes
scot
iwi hapū list
australian pictorial thesaurus
powerhouse object names thesaurus
mesh
this straw poll naturally only reflects the participants who attended this particular session and counting was somewhat haphazard (people were still coming into the room), but it gives a sample of the scope. i don't recall whether the heading was "metadata we care about" or "vocabularies we care about," but it was something very close to that.
posted by stuart yeates at : comments: wednesday, november unexpected advice during the ndf today i was in "digital initiatives in māori communities" put on by the talented honiana love and claire hall from the te reo o taranaki charitable trust about their work on he kete kōrero. at the end i asked a question "most of us [the audience] are in institutions with te reo māori holdings or cultural objects of some description. what small thing can we do to help enable our collections for the iwi and hapū source communities? use māori subject headings? the iwi / hapū list? geotagging? ..." quick-as-a-blink the response was "geotagging." if i understood the answer (given mainly by honiana) correctly, the point was that geotagging is much more useful because it's much more likely to be done right in contexts like this, presumably because geotagging lends itself to checking, validation and visualisations that make errors easy to spot in ways that these other metadata forms don't; it's better understood by those processing the documents and the data. i think it's fabulous that we're getting feedback from indigenous groups using information systems in indigenous contexts, particularly feedback about previous attempts to cater to their needs. if this is the experience of other indigenous groups, it's really important. posted by stuart yeates at : no comments: labels: māori, metadata, ndf saturday, november goodbye 'social-media' world you may or may not have noticed, but recently a number of 'social media' services have begun looking and working very similarly. facebook is the poster-child, followed by google+ and twitter. their modus operandi is to entice you to interact with family-members, friends and acquaintances and then leverage your interactions to both sell your attention to advertisers and entice other members of your social circle to join the service.
there are, naturally, a number of shiny baubles you get for participating in the sale of your eyeballs to the highest bidder, but recently i have come to the conclusion that my eyeballs (and those of my friends, loved ones and colleagues) are worth more. i'll be signing off google plus, twitter and facebook shortly. i may return for particular events, particularly those with a critical mass the size of jupiter, but i shall not be using them regularly. i remain serenely confident that all babies born in my extended circle are cute; i do not need to see their pictures. i will continue using other social media (email, wikipedia, irc, skype, etc) as usual. my deepest apologies to those who joined at least partly on my account. posted by stuart yeates at : no comments: labels: facebook, social network, twitter sunday, november recreational authority control over the last week or two i've been having a bit of a play with ngā Ūpoko tukutuku / the māori subject headings (for the uninitiated, think of the widely used library of congress subject headings, done post-colonial and bi-lingually but in the same technology). the main thing i've been doing is trying to munge the msh into wikipedia (wikipedia being my addiction du jour). my thinking has been to increase the use of msh by taking it, as it were, to where the people are. i've been working with the english language wikipedia, since the māori language wikipedia has fewer pages and sees much less use. my first step was to download the msh in marc xml format (available from the website) and use xsl to transform it into a wikipedia table (warning: large page). when looking at that table, each row is a subject heading, with the first column being the te reo māori term, the second being permutations of the related terms and the third being the scope notes.
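the shape of that transform is simple enough to sketch in python for readers who don't speak xsl. the marcxml element names below are real (loc's marc21 slim schema), but the specific authority-record field tags used here for the heading (150) and the scope note (680) are assumptions to check against the actual msh records:

```python
# pull headings out of marcxml and emit wikipedia table rows, approximating
# the xsl transform described above. field tags 150 (heading) and 680
# (scope note) are assumptions; verify them against the msh marc records.
import xml.etree.ElementTree as ET

MARC_NS = "{http://www.loc.gov/MARC21/slim}"

def marc_to_wikitable_rows(marcxml_text):
    """return wikitext table rows, one per marc record."""
    root = ET.fromstring(marcxml_text)
    rows = []
    for record in root.iter(MARC_NS + "record"):
        heading = scope = ""
        for field in record.iter(MARC_NS + "datafield"):
            sub_a = field.findtext(MARC_NS + "subfield[@code='a']", default="")
            if field.get("tag") == "150":
                heading = sub_a
            elif field.get("tag") == "680":
                scope = sub_a
        rows.append("|-\n| %s || %s" % (heading, scope))
    return "\n".join(rows)
```

wrap the result in '{| ... |}' markers and you have the wikipedia table.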
i started a discussion about my thoughts (warning: large page) and got a clear green light to create redirects (or 'related terms' in librarian speak) for msh terms which are culturally-specific to māori culture. i'm about % of the way through the terms of the msh and have redirects in the newly created category:redirects from māori language terms. that may sound pretty average, until you remember that institutions are increasingly rolling out tools such as summon, which use wikipedia redirects for auto-completion, taking these mappings to the heart of most māori speakers in higher and further education. i don't have a time-frame for the redirects to appear, but they haven't appeared in otago's summon, whereas redirects i created ~ two years ago have; type 'jack yeates' and pause to see it at work. posted by stuart yeates at : no comments: tuesday, august thoughts on "letter about the tei" from martin mueller note: i am a member of the tei council, but this message should be read as a personal position at the time of writing, not a council position, nor the position of my employer. reading martin's missive was painful. i should have responded earlier; i think perhaps i was hoping someone else could say what i wanted to say and i could just say "me too." they haven't, so i've become the someone else. i don't think that martin's "fairly radical model" is nearly radical enough. i'd like to propose a significantly more radical model as a strawman:
1) the tei shall maintain a document called 'the tei principles.' the purpose of the tei is to advance the tei principles.
2) institutional membership of the tei is open to groups which publish, collect and/or curate documents in formats released by the tei. institutional membership requires members to acknowledge the tei principles and permits the members to be listed at http://www.tei-c.org/activities/projects/ and use the tei logos and branding.
3) individual membership of the tei is open to individuals; individual membership requires members to acknowledge the tei principles and subscribe to the tei mailing list at http://listserv.brown.edu/?a =tei-l.
4) all business of the tei is conducted in public. business which needs to be conducted in private (for example employment matters, contract negotiation, etc) shall be considered out of scope for the tei.
5) changes to the structure of the tei will be discussed on the tei mailing list and put to a democratic vote with a voting period of at least one month; a two-thirds majority of votes cast is required to pass a motion, which shall be in english.
6) groups of members may form for activities from time-to-time, such as members meetings, summer schools, promotions of the tei or collective digitisation efforts, but these groups are not the tei, even if the word 'tei' appears as part of their name.
i'll admit that there are a couple of issues not covered here (such as who holds the ipr), but it's only a straw man for discussion. feel free to fire at it as necessary. posted by stuart yeates at : comment: thursday, june unit testing framework for xsl transformations? i'm part of the tei community, which maintains an xml standard which is commonly transformed to html for presentation (more rarely pdf). the tei standard is relatively large but relatively well documented; the transformation to html has thus far been largely piecemeal (from a software engineering point of view) and not error free. recently we've come under pressure to introduce significantly more complexity into transformations, both to produce epub (which is wrapped html bundled with media and metadata files) and html (which can represent more of the formal semantics in tei). the software engineer in me sees unit testing as a way to reduce our errors while opening development up to a larger more diverse group of people with a larger more diverse set of features they want to see implemented.
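to make concrete what such tests look like: many of the invariants we care about are a few lines each in any xml-aware language. a python sketch of two of them run against transformed output (ids are unique; fragment links resolve), with the id/href conventions invented for illustration rather than taken from any actual stylesheet:

```python
# check two invariants on transformed (x)html: each id appears exactly once,
# and each internal '#fragment' link points at an existing id. the id and
# href conventions here are hypothetical; substitute what the stylesheets emit.
import xml.etree.ElementTree as ET
from collections import Counter

def footnote_errors(html_text):
    """return a list of human-readable invariant violations (empty = pass)."""
    root = ET.fromstring(html_text)
    errors = []
    ids = Counter(e.get("id") for e in root.iter() if e.get("id"))
    for elem_id, count in ids.items():
        if count > 1:
            errors.append("id %r appears %d times" % (elem_id, count))
    # every internal link must resolve to an id somewhere in the document
    for e in root.iter("a"):
        target = e.get("href") or ""
        if target.startswith("#") and target[1:] not in ids:
            errors.append("dangling link %r" % target)
    return errors
```

a framework would add the missing pieces: running checks like this over whole directories of transformed files, with multiple stylesheet arguments, and reporting failures usefully.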
the problem is that i can't seem to find a decent unit testing framework for xslt. does anyone know of one? our requirements are: xslt; free to use; runnable on our ubuntu build server; testing the transformation with multiple arguments; etc. we're already using xsd, rng, dtd and schematron schemas, epubcheck, xmllint, standard html validators, etc; having the framework drive these too would be useful. the kinds of things we want to test include:
footnotes appear once and only once
footnotes are referenced in the text and there's a back link from the footnote to the appropriate point in the text
internal references (tables of contents, indexes, etc) point somewhere
language encoding in xml:lang survives from the tei to the html
all the paragraphs in the tei appear at least once in the html
local links work
sanity check tables
internal links within parallel texts
....
any of many languages could be used to represent these tests, but ideally it should have a dom library and be able to run that library across entire directories of files. most of our community speak xml fluently, so leveraging that would be good. posted by stuart yeates at : no comments: wednesday, march is there a place for readers' collectives in the bright new world of ebooks? the transition costs of migrating from the world of books-as-physical-artefacts-of-pulped-tree to the world of books-as-bitstreams are going to be non-trivial. current attempts to drive the change (and by implication apportion those costs to other parties) have largely been driven by publishers, distributors and resellers of physical books in combination with the e-commerce and electronics industries which make and market the physical ebook readers on which ebooks are largely read.
the e-commerce and electronics industries appear to see traditional publishing as an industry full of lumbering giants unable to compete with the rapid pace of change in the electronics industry and the associated turbulence in business models, and have moved to poach market-share. by-and-large they've been very successful. amazon and apple have shipped millions of devices billed as 'ebook readers' and pretty much all best-selling books are available on one platform or another. this top tier, however, is the easy stuff. it's not surprising that money can be made from the latest bodice-ripping page-turner, but most of the interesting reading and the majority of the units sold are outside the best-seller list, on the so-called 'long tail.' there's a whole range of books that i'm interested in that don't appear to be on the business plan of any of the current ebook publishers, and i'll miss them if they're not converted: the back catalogue of local poetry. almost nothing ever gets reprinted, even if the original has a tiny print run and the author goes on to have a wonderfully successful career. some gets anthologised and a few authors are big enough to have a posthumous collected works, when their work is no longer cutting edge. some fabulous theses. i'm thinking of things like: http://ir.canterbury.ac.nz/handle/ / , http://victoria.lconz.ac.nz/vwebv/holdingsinfo?bibid= and http://otago.lconz.ac.nz/vwebv/holdingsinfo?bibid= lots of te reo māori material (pick your local indigenous language if you're reading this outside new zealand) local writing by local authors. note that all of these are local content---no foreign mega-corporation is going to regard this as their home-turf. getting these documents from the old world to the new is going to require a local program run by (read funded by) locals. would you pay for these things? i would, if it gave me what i wanted. what is it that readers want? 
we're all readers, of one kind or another, and we all want a different range of things, but i believe that what readers want / expect out of the digital transition is:
- to genuinely own books. not to own them until they drop their ereader in the bath and lose everything. not to own them until a company they've never heard of goes bust and turns off a drm server they've never heard of. not to own them until technology moves on and some new format is in use. to own them in a manner which enables them to use them for at least their entire lifetime. to own them in a manner that poses at least a question for their heirs.
- a choice of quality books. quality in the broadest sense of the word. choice in the broadest sense of the word. universality is a pipe-dream, of course, but good books should appear faster than i can read them.
- a quality recommendation service. we all have trusted sources of information about books: friends, acquaintances, librarians or reviewers that history has suggested have similar ideas to ours about what a good read is.
- to get some credit for already having bought the book in pulp-of-murdered-tree form. lots of us have collections of wood-pulp and like to maintain the illusion that in some way that makes us well read.
- books brought to their attention based on whether they're worth reading, rather than what publishers have excess stock of. since the concept of 'stock' largely vanishes with the transition from print to digital, this shouldn't be too much of a problem.
- confidentiality for their reading habits. if you've never come across it, go and read the ala's freedom to read statement.

a not-for-profit readers' collective

it seems to me that the way to manage the transition from the old world to the new is as a not-for-profit readers' collective. by that i mean a subscription-funded system in which readers sign up for a range of works every year.
the works are digitised by the collective (the expensive step, paid for up-front), distributed to the subscribers in open file formats such as epub (very cheap via the internet) and kept in escrow for them (a tiny but perpetual cost, more on this later). authors, of course, need to pay their mortgage, and part of the digitisation would be obtaining the rights to the work. authors of new work would be paid a 'reasonable' sum, based on their stature as authors (i have no idea what the current remuneration of authors is like, so i won't be specific). the collective would acquire the (non-exclusive) rights to digitise the work if not born digital, to edit it, to distribute it to collective members and to sell it to non-members internationally (i.e. distribute it through 'conventional' digital book channels). in the case of sale to non-members through conventional digital book channels the author would get a cut. sane and mutually beneficial deals could be worked out with libraries of various sizes. generally speaking, i'd anticipate the rights to digitise and distribute in-copyright but out-of-print poetry would be fairly cheap; the rights to fabulous old university theses cheaper; and rights to out-of-copyright materials are, of course, free. the cost of rights to new novels / poetry would hugely depend on the stature of the author and the quality of the work, which is where the collective would need to either employ a professional editor to make these calls, or vote based on sample chapters / poems, or some combination of the two. the cost of quality digitisation is non-trivial, but costs are much lower in bulk and dropping all the time. depending on the platform in use, members of the collective might be recruited as proof-readers for ocr errors. that leaves the question of how to fund the escrow.
the escrow system stores copies of all the books the collective has digitised for the future use of the collective's members, and is required to give efficacy to the promise that readers really own the books. by being held in escrow, the copies survive the collective going bankrupt, being wound up, or evolving into something completely different, but this requires funding. the simplest method of obtaining funding would be to align the collective with another established consumer of local literature and have them underwrite the escrow: a university, major library, or similar. the difference between a not-for-profit readers' collective and an academic press? for hundreds of years, major universities have had academic presses which publish quality content under the universities' auspices. the key difference between the not-for-profit readers' collective i am proposing and an academic press is that the collective would attempt to publish the unpublished and out-of-print books that the members wanted, rather than aiming to meet some quality criterion. i acknowledge a popularist bias here, but it's the members who are paying the subscriptions. which links in the book chain do we want to cut out? there are some links in the current book production chain which we need to keep; there are others that wouldn't have a serious future in a not-for-profit. certainly there is a role for judgement in which works to purchase with the collective's money. there is a role for editing, both large-scale and copy-editing. there is a role for illustrating works, be it cover images or icons. i don't believe there is a future for roles directly relating to the production, distribution, accounting for, sale, warehousing or pulping of physical books. there may be a role for marketing books, depending on the business model (i'd like to think that most of the current marketing expense can be replaced by a combination of author-driven promotion and word-of-mouth promotion, but i've been known to dream).
clearly there is an evolving techie role too. the role not mentioned above that i'd most like to see cut, of course, is that of the multinational corporation as gatekeeper, holding all the copyrights and clipping tickets (and wings). posted by stuart yeates at : comments:

saturday, november

howto: deep linking into the nzetc site

as the heaving mass of activity that is the mixandmash competition heats up, i have come to realise that i should have better documented a feature of the nzetc site: the ability to extract the tei xml annotated with the ids for deep linking. our content's archival form is tei xml, which we massage for various output formats. there is a link from the top level of every document to the tei for the document, which people are welcome to use in their mashups and remixes. unfortunately, between that tei and our html output is a deep magic that involves moving footnotes, moving page breaks, breaking pages into nicely browsable chunks, floating marginal notes, etc., and this makes it hard to deep link back to the website from anything derived from that tei. there is another form of the tei available which is annotated with whether or not each structural element maps one-to-one to an html page (nzetc:has-text) and what the id of that page is (nzetc:id). this annotated xml is found by replacing the 'tei-source' in the url with 'etexts'. thus for the laws of england, compiled and translated into the māori language at http://www.nzetc.org/tm/scholarly/tei-gorlaws.html there is the raw tei at http://www.nzetc.org/tei-source/gorlaws.xml and the annotated tei at http://www.nzetc.org/etexts/gorlaws.xml. looking in the annotated tei at http://www.nzetc.org/etexts/gorlaws.xml we see, for example:
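as an aside, the two url mappings just described can be sketched in a few lines of python. this is a minimal illustration, not nzetc code; only the gorlaws urls are from this post:

```python
def annotated_tei_url(tei_source_url):
    # the annotated tei lives at the same path, with 'tei-source' replaced by 'etexts'
    return tei_source_url.replace("/tei-source/", "/etexts/")

def html_deep_link(nzetc_id):
    # plug a page's nzetc:id into the scholarly html url pattern
    return "http://www.nzetc.org/tm/scholarly/" + nzetc_id + ".html"
```

for example, annotated_tei_url("http://www.nzetc.org/tei-source/gorlaws.xml") gives the etexts url above.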
this means that this div has its own page (because it has nzetc:has-text="true") and that the id of that page is tei-gorlaws-t -g -t -front -tp (because of the nzetc:id="tei-gorlaws-t -g -t -front -tp "). the id can be plugged into http://www.nzetc.org/tm/scholarly/.html to get a url for the html; thus the url for this div is http://www.nzetc.org/tm/scholarly/tei-gorlaws-t -g -t -front -tp .html. this process should work for both text and figures. happy remixing everyone! posted by stuart yeates at : comment:

sunday, november

epubs and quality

you may have heard news about the release of "bookserver" by the good folks at the internet archive. this is a drm-free epub ecosystem, initially stocked with the prodigious output of google's book scanning project and the internet archive's own book scanning project. to see how the nzetc stacked up against the much larger (and better funded) collection i picked one of our maori language dictionaries. our maori and pacifica dictionaries month-after-month make up the bulk of our top five most used resources, so they're in-demand resources. they're also an appropriate choice because when they were encoded by the nzetc into tei, the decision was made not to use full dictionary encoding, but a cheaper/easier tradeoff which didn't capture the linguistic semantics of the underlying entries, but treated them as typeset text. i was interested in how well this tradeoff was wearing. i did my comparison using the new firefox epub plugin; things will be slightly different if you're reading these epubs on an iphone or kindle. the epub i looked at was a dictionary of the maori language by herbert w. williams. the nzetc has the sixth edition. there are two versions of the work on bookserver: a second edition scanned by google books (original at the new york public library) and a third edition scanned by the internet archive in association with microsoft (original in the university of california library system).
all the processing of both works appears to have been done in the u.s. the original print used macrons (nzetc), acutes (google) and breves (internet archive) to mark long vowels. find them here. let's take a look at some entries from each, starting at 'kapukapu':

nzetc: kapukapu. . n. sole of the foot. . apparently a synonym for kaunoti, the firestick which was kept steady with the foot. tena ka riro, i runga i nga hanga a taikomako, i te kapukapu, i te kaunoti (m. ). . v.i. curl (as a wave). ka kapukapu mai te ngaru. . gush. . gleam, glisten. katahi ki te huka o huiarau, kapukapu ana tera. kapua, n. . cloud, bank of clouds. e tutakitaki ana nga kapua o te rangi, kei runga te mangoroa e kopae pu ana (p.). . a flinty stone. = kapuarangi. . polyprion oxygeneios, a fish. = hapuku. . an edible species of fungus. . part of the titi pattern of tattooing. kapuarangi, n. a variety of matā, or cutting stone, of inferior quality. = kapua, . kāpuhi, kāpuhipuhi, n. cluster of branches at the top of a tree. kāpui, v.t. . gather up in a bunch. ka kapuitia nga rau o te kiekie, ka herea. . lace up or draw in the mouth of a bag. . earth up crops, or cover up embers with ashes to keep them alight. kāpuipui, v.t. gather up litter, etc. kāpuka, n. griselinia littoralis, a tree. = papauma. kapukiore, n. coprosma australis, a shrub. = kanono. kāpuku = kōpuku, n. gunwale.

google books: kapukapu, s. sole of the foot, eldpukdpu, v. to curl* as a wave. ka kapukapu mai te ngaru; the wave curls over. kapunga, v. to take up with both hands held together, kapungatia he kai i te omu; take up food from the oven. (b. c, kapura, s. fire, -' tahuna he kapura ; kindle a fire. kapurangi, s. rubbish; weeds, kara, s. an old man, tena korua ko kara ? how are you and the old man ? kara, s> basaltic stone. he kara te kamaka nei; this stone is kara. karaha, s. a calabash. ♦kardhi, *. glass,

internet archive: kapukapu, n. sole of the foot. kapukapu, v. i. . curl (as a wave). ka kapukapu mai te ngaru. . gush.
kakapii, small basket for cooked food. kapua, n. cloud; hank of clouds, kapunga, n. palm of the hand. kapunga, \. t. take up in both hands together. kapiira, n. fire. kapiiranga, n. handful. kapuranga, v. t. take up by hand-fuls. kapurangatia nga otaota na e ia. v. i. dawn. ka kapuranga te ata. kapur&ngi, n. rubbish; uveds. i. k&r&, n. old man. tena korua ko kara. ii. k&r&, n. secret plan; conspiracy. kei te whakatakoto kara mo te horo kia patua. k&k&r&, d. scent; smell. k&k&r&, a. savoury; odoriferous. k^ar&, n. a shell-iish.

unlike the other two, the nzetc version has accents, bold and italics in the right place. it's the only one with a workable and useful table of contents. it is also the edition which has been extensively revised and expanded. google's second edition has many character errors, while the internet archive's third edition has many 'á' mis-recognised as '&.' the google and internet archive versions are also available as pdfs, but of course, without fancy tables of contents these pdfs are pretty challenging to navigate, and because they're built from page images, they're huge. it's tempting to say that the nzetc version is better than either of the others, and from a naïve point of view it is, but it's more accurate to say that it's different. it's a digitised version of a book revised more than a hundred years after the second edition scanned by google books. people who're interested in the history of the language are likely to pick the edition over the edition nine times out of ten. technical work is currently underway to enable third parties like the internet archive's bookserver to more easily redistribute our epubs. for some semi-arcane reasons it's linked to upcoming new search functionality. posted by stuart yeates at : no comments: labels: library, macrons, maori, nzetc

what librarything metadata can the nzetc reasonably stuff inside its cc'd epubs?
this is the second blog post following on from an excellent talk about librarything given by librarything's tim at vuw in wellington after his trip to lianza. the nzetc publishes all of its works as epubs (a file format primarily aimed at mobile devices), which are literally processed crawls of its website bundled with some metadata. for some of the nzetc works (such as erewhon and the life of captain james cook), librarything has a lot more metadata than the nzetc, because many librarything users have the works and have entered metadata for them. bundling as much metadata as possible into the epubs makes sense, because these are commonly designed for offline use---call-back hooks are unlikely to be available. so what kinds of data am i interested in?
1) traditional bibliographic metadata. both lt and nzetc have this down really well.
2) images. lt has many many cover images; nzetc has images of plates from inside many works too.
3) unique identification (isbns, issns, work ids, etc). lt does very well at this, nzetc very poorly.
4) genre and style information. lt has tags to do fancy statistical analysis on, and does. nzetc has full text to do fancy statistical analysis on, but doesn't.
5) intra-document links. lt has the work as the smallest unit. nzetc reproduces original document tables of contents and indexes, cross references and annotations.
6) inter-document links. lt has none. nzetc captures both 'mentions' and 'cites' relationships between documents.
most current-generation ebook readers, of course, can do nothing with most of this metadata, but i'm looking forward to the day when we have full-fledged openurl resolvers which can do interesting things, primarily picking the best copy (most local / highest quality / most appropriate format / cheapest) of a work to display to a user, and browsing works by genre (librarything does genre very well, via tags).
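as a rough sketch of where that bundled metadata lives: an epub is a zip file whose .opf package document carries dublin core fields, so the traditional bibliographic metadata mentioned above can be pulled back out with nothing more than a zip and xml library. this is a simplified illustration, not the nzetc's actual pipeline:

```python
import io
import xml.etree.ElementTree as ET
import zipfile

DC = "{http://purl.org/dc/elements/1.1/}"  # dublin core namespace used in opf files

def epub_metadata(epub_bytes):
    """pull dublin core fields (title, creator, ...) from an epub's .opf package file."""
    with zipfile.ZipFile(io.BytesIO(epub_bytes)) as zf:
        # find the package document inside the zip and parse it
        opf_name = next(n for n in zf.namelist() if n.endswith(".opf"))
        root = ET.fromstring(zf.read(opf_name))
        # collect every dublin core element into a plain dict
        return {el.tag[len(DC):]: el.text
                for el in root.iter() if el.tag.startswith(DC) and el.text}
```

a reader, a cataloguing tool, or a service like librarything can use the same trick, which is what makes embedded metadata useful offline.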
posted by stuart yeates at : comment: labels: epubs, library, librarything, nzetc

thursday, october

interlinking of collections: the quest continues

after an excellent talk today about librarything by librarything's tim, i got enthused to see how librarything stacks up against other libraries for having matches in its authority control system for entities we (the nzetc) care about. the answer is: averagely. for copies of printed books less than a hundred years old (or reprinted in the last hundred years), and their authors, librarything seems to do very well. these are the books likely to be in active circulation in personal libraries, so it stands to reason that these would be well covered. i tried half a dozen books from our nineteenth-century novels collection, and most were missing; erewhon, of course, was well represented. librarything doesn't have the "treaty of waitangi" (a set of manuscripts) but it does have "facsimiles of the treaty of waitangi." it's not clear to me whether these would be merged under their cataloguing rules. coverage of non-core bibliographic entities was lacking. places get a little odd. sydney is "http://www.librarything.com/place/sydney,% new% south% wales,% australia" but wellington is "http://www.librarything.com/place/wellington" and anzac cove appears to be missing altogether. this doesn't seem like a sane authority control system for places, as far as i can see. people who are the subjects rather than the authors of books didn't come out so well. i couldn't find abel janszoon tasman, pōtatau te wherowhero or charles frederick goldie, all of whom are near and dear to our hearts. here is the spreadsheet of how different web-enabled systems map entities we care about. correction: it seems that the correct url for wellington is http://www.librarything.com/place/wellington,% new% zealand which brings sanity back.
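incidentally, those place urls appear to be nothing more than percent-encoded place-name strings appended to a site prefix, which is why small differences in the name string produce entirely different identifiers. a quick sketch of that assumption using python's standard library (the url pattern is inferred from the examples above, not documented librarything behaviour):

```python
from urllib.parse import quote

def librarything_place_url(place):
    # assumed pattern: the place page url is just the percent-encoded place name
    # (commas left bare, spaces becoming %20) appended to the site prefix
    return "http://www.librarything.com/place/" + quote(place, safe=",")
```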
posted by stuart yeates at : no comments: labels: authority, community building, metadata, semantic web, social network, taxonomy

saturday, september

ebook readers need openurl resolvers

everyone's talking about the next generation of ebook readers having a larger reading area, more battery life and a more readable screen. i'd give up all of those, however, for an ebook reader that had an internal openurl resolver. openurl is the nifty protocol that libraries use to find the closest copy of an electronic resource and direct patrons to copies that the library might have already licensed from commercial parties. it's all about finding the version of a resource that is most accessible to the user, dynamically. say i've loaded ebooks into my ebook reader: a couple of encyclopedias and dictionaries; a stack of books i was meant to read in school but only skimmed and have been meaning to get back to; current block-busters; guidebooks to the half-dozen countries i'm planning on visiting over the next couple of years; classics i've always meant to read (tolstoy, chaucer, cervantes, plato, descartes, nietzsche); and local writers (baxter, duff, ihimaera, hulme, ...). my ebooks by nietzsche are going to refer to books by descartes and plato; my ebooks by descartes are going to refer to books by plato; my encyclopaedias are going to refer to pretty much everything; most of the works in translation are going to contain terms which i'm going to need help with (help which the encyclopedias and dictionaries can provide). ask yourself, though, whether you'd want to flick between works on the current generation of readers---very painful, since these devices are not designed for efficient navigation between ebooks, but for linear reading of them. you can't follow links between them, of course, because on current systems links must point either within the same ebook or out onto the internet---pointing to other ebooks on the same device is verboten.
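the behaviour i'm after can be sketched in a few lines: a toy resolver that prefers a copy already on the device and otherwise falls back to the link's original target. the catalogue below is hypothetical, and a real openurl resolver matches on much richer citation metadata than author and title:

```python
# toy on-device resolver: prefer a local copy of a cited work, otherwise
# fall back to the original link target. purely illustrative.
LOCAL_SHELF = {
    ("plato", "republic"): "file:///ebooks/plato-republic.epub",
    ("descartes", "meditations"): "file:///ebooks/descartes-meditations.epub",
}

def resolve(author, title, remote_url):
    # the "closest copy" rule: a local file wins when we own one,
    # even when the internet is unavailable
    return LOCAL_SHELF.get((author.lower(), title.lower()), remote_url)
```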
openurl can solve this by catching those urls and making them point to local copies of works (and thus available for free even when the internet is unavailable) where possible, while still retaining their original targets. until ebook readers have a mechanism like this, ebooks will be at most a replacement only for paperback novels---not personal libraries. posted by stuart yeates at : comment:

disruptive library technology jester
we're disrupted, we're librarians, and we're not going to take it anymore

dltj now uses webmention and bridgy to aggregate social media commentary

when i converted this blog from wordpress to a static site generated with jekyll in , i lost the ability for readers to make comments. at the time, i thought that one day i would set up an installation of discourse for comments, like boing boing did in . but i never found the time to do that. alternatively, i could do what npr has done—abandon comments on its site in favor of encouraging people to use twitter and facebook—but that means blog readers don’t see where the conversation is happening. this article talks about indieweb—a blog-to-blog communication method—and the pieces needed to make it work on both a static website and for social-media-to-blog commentary.
the indieweb is a combination of html markup and an http protocol for capturing discussions between blogs. to participate in the indieweb ecosystem, a blog needs to support the “h-card” and “h-entry” microformats. these microformats are ways to add html markup to a site so it can be read and recognized by machines. if you follow the instructions at indiewebify.me, the “level ” steps will check your site’s webpages for the appropriate markup. the jekyll theme i use here, minimal-mistakes, didn’t include the microformat markup, so i made a pull request to add it. with the markup in place, dltj.org uses the webmention protocol to notify others when i link to their content and to receive notifications from others. if you’re setting this up for yourself, hopefully someone has already gone through the effort of adding the necessary webmention communication bits to your blog software. since dltj is a static website, i’m using the webmention.io service to send and receive webmention information on behalf of dltj.org, and a jekyll plugin called jekyll-webmention_io to integrate webmention data into my blog’s content. the plugin gets that data from webmention.io, caches it locally, and builds into each article the list of webmentions and pingbacks (another kind of blog-to-blog communication protocol) received. webmention.io and jekyll-webmention_io will capture some commentary. to get comments from twitter, mastodon, facebook, and elsewhere, i added the bridgy service to the mix. from their about page: “bridgy periodically checks social networks for responses to your posts and links to your web site and sends them back to your site as webmentions.” so all of that commentary gets fed back into the blog post as well. i’ve just started using this webmention/bridgy setup, so i may have some pieces misconfigured. i’ll be watching over the next several blog posts to make sure everything is working.
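for reference, the protocol itself is tiny: a webmention is an http post of two form fields, source (the page doing the mentioning) and target (the page being mentioned), sent to the target site's advertised endpoint. a minimal sketch of building that request (the urls are hypothetical examples):

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_webmention(endpoint, source, target):
    # a webmention notification is a form-encoded post of 'source' and 'target'
    body = urlencode({"source": source, "target": target}).encode("ascii")
    return Request(endpoint, data=body,
                   headers={"Content-Type": "application/x-www-form-urlencoded"})
```

services like webmention.io simply receive these posts on a site's behalf so a static site doesn't need its own server-side code.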
if you notice something that isn’t working, please reach out to me via one of the mechanisms listed in the sidebar of this site.

digital repository software: how far have we come? how far do we have to go?

bryan brown’s tweet led me to ruth kitchin tillman’s repository ouroboros post about the treadmill of software development/deployment. and wow, do i have thoughts and feelings. ouroboros: an ancient symbol depicting a serpent or dragon eating its own tail. or—in this context—constantly chasing what you can never have. source: wikipedia. let’s start with feelings. i feel pain and misery in reading ruth’s post. as bryan said in a subsequent tweet, i’ve been on both sides: a system maintainer watching much-needed features put off to major software updates (or rewrites) and a person participating in decisions to put off feature development in favor of major updates and rewrites. it is a bit like a serpent chasing its tail (a reference to “ouroboros” in ruth’s post title)—as someone who just wants a workable, running system, it seems like a never-ending quest to get what my users need. i think it will get better. i offer as evidence the fact that almost all of us can assume network connectivity. that certainly wasn’t always the case: routers used to break, file servers would crash under stress, network drivers went out of date at inopportune times. now we take network connectivity for granted—almost (almost!) as if it were a utility as common as water and electricity. we no longer have to chase our tail to assume those things. when we make those assumptions, we push that technology down the stack and layer on new things. only after electricity is reliable can we layer on network connectivity. with reliable network connectivity, we layer on—say—digital repositories. each layer goes through its own refinement process…getting better and better as it relies on the layers below it. are digital repositories as reliable as printed books? no way!
without electricity and network connectivity, we can’t have digital repositories, but we can still use books. will there come a time when digital repositories are as reliable as electricity and network connectivity? that sounds like a star trek world, but if history is our guide, i think the profession will get there. (i’m not necessarily saying i’ll get there with it—such reliability is probably outside my professional lifetime.) so, yeah, i feel pain and misery in ruth’s post about the achingly out-of-reach nature of repository software that can be pushed down the stack…that can be assumed to exist with all of the capabilities that our users need. that brings me around to one of bryan’s tweets: if the idea of a digital preservation platform is that it is purpose-built to preserve assets for a long period of time, then isn’t it an obvious design flaw to build it with an eol in mind? if the system is no longer supported, then can it really be trusted for preservation? — bryan j. brown (@bryjbrown) june , can digital repositories really be trusted in-and-of-themselves? no. (not yet?) that isn’t to say that steps aren’t being made. take, for example, http and html. those are getting pretty darn reliable, and assumptions can be built that rely on html as a markup language and http as a protocol to move it around the network. i think that is a driver behind the growth of “static websites”—systems that rely on nothing more than delivering html and other files over http. the infrastructure for doing that—servers, browsers, caching, network connectivity, etc.—is all pretty sound. html and http have also stood the test of time—much like how we assume we will always understand how to process tiff files for images. now there are many ways to generate static websites. this blog uses markdown text files and jekyll as a pre-processor to create a stand-alone folder of html and supporting files.
a more sophisticated method might use drupal as a content management system that exports to a static site. jekyll and drupal are nowhere near as assumed-to-work as html and http, but they work well as mechanisms for generating a static site. last year, colleagues from the university of iowa published a paper in the code lib journal about making a static site front-end to contentdm, which could serve as the basis of a digital collection website. so if your digital repository creates html to be served over http and—for the purposes of preservation—the metadata can be encoded in html structures that are readily machine-processable? well, then you might be getting pretty close to a system you can trust. but what about the digital objects themselves? back in , i crowed about the ability of fedora repository software to recover itself just based on the files stored to disk. (read the article for more details…it has the title “why fedora? because you don’t need fedora” in case that might make it more enticing to read.) fedora used a bespoke method of saving digital objects as a series of files on disk, and the repository software provided commands to rebuild the repository database from those files. that worked for fedora up to version . for fedora version , some of the key object metadata only existed in the repository database. from what i understand of version and beyond, fedora adopted the oxford common file layout (ocfl), “an application-independent approach to the storage of digital information in a structured, transparent, and predictable manner.” the ocfl website goes on to say: “it is designed to promote long-term object management best practices within digital repositories.” so fedora is back again in a state where you could rebuild the digital object repository system from a simple filesystem backup. the repository software becomes a way of optimizing access to the underlying digital objects.
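to make the ocfl idea concrete: content files live under versioned directories, and an inventory maps content-addressed digests to the file paths holding that content, so the whole object can be reconstructed from a plain filesystem backup. the sketch below illustrates that shape only; it is deliberately simplified and not a spec-complete ocfl implementation:

```python
import hashlib
import json
import pathlib

def write_versioned_object(root, name, data):
    """write a simplified ocfl-style object: v1/content/ holds the bytes and
    inventory.json maps a sha512 digest to the paths holding that content."""
    root = pathlib.Path(root)
    content_path = root / "v1" / "content" / name
    content_path.parent.mkdir(parents=True, exist_ok=True)
    content_path.write_bytes(data)
    # the manifest is keyed by digest, so the same bytes stored twice
    # are recorded once and every file can be verified against its hash
    digest = hashlib.sha512(data).hexdigest()
    inventory = {"head": "v1", "manifest": {digest: ["v1/content/" + name]}}
    (root / "inventory.json").write_text(json.dumps(inventory, indent=2))
    return inventory
```

because everything is ordinary files plus a json manifest, any operating system and backup tool can carry the object forward, which is exactly the property that makes the layout attractive for preservation.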
will ocfl stand the test of time like html, http, tiff, network connectivity, and electricity? only time will tell. so i think we are getting closer. it is possible to conceive of a system that uses simple files and directories as long-term preservation storage. those can be backed up and duplicated using a wide variety of operating systems and tools. we also have examples of static sites of html delivered over http that various tools can create and many, many programs can deliver and render. we’re missing some key capabilities—access control comes to mind. i, for one, am not ready to push javascript very far down our stack of technologies—certainly not as far as html—but javascript robustness seems to be getting better over time. ruth: i’m sorry this isn’t easy and that software creators keep moving the goalposts. (i’ll put myself in the “software creator” category.) we could be better at setting expectations and delivering on them. (there is probably another lengthy blog post in how software development is more “art” than it is “engineering”.) developers—the ones fortunate to have the ability and permission to think long term—are trying to make new tools/techniques good enough to push down the stack of assumed technologies. we’re clearly not there for digital repository software, but…hopefully…we are moving in the right direction.

thoughts on growing up

’tis the season for graduations, and this year my nephew is graduating from high school. my sister-in-law created a memory book—“a surprise book of advice as he moves to the next phase of his life.” what an interesting opportunity to reflect! this is what i came up with: sometime between when i became an adult and now, the word “adulting” was coined. my generation just called it “growing up.” the local top- radio station uses “hashtag-adulting” to mean all of those necessary life details that now become your own responsibility. (“hashtag” is something new, too, for what that’s worth.)
growing up is more than life necessities, though. this is an exciting phase of life that you’ve built up to—many more doors of possibilities are opening and now you get to pick which ones to go through. pick carefully. each door you go through starts to close off others. pick many. use this life stage to try many things to find what is fun and what is meaningful (and aim for both fun and meaningful). you are on a solid foundation, and i’m eager to see what you discover “adulting” means to you.

more thoughts on pre-recording conference talks

over the weekend, i posted an article here about pre-recording conference talks and sent a tweet about the idea on monday. i hoped to generate discussion about recording talks to fill in gaps—positive and negative—about the concept, and i was not disappointed. i’m particularly thankful to lisa janicke hinchliffe and andromeda yelton along with jason griffey, junior tidal, and edward lim junhao for generously sharing their thoughts. daniel s and kate deibel also commented on the code lib slack team. i added to the previous article’s bullet points and am expanding on some of the issues here. i’m inviting everyone mentioned to let me know if i’m mischaracterizing their thoughts, and i will correct this post if i hear from them. (i haven’t found a good comments system to hook into this static site blog.)

pre-recorded talks limit presentation format

lisa janicke hinchliffe made this point early in the feedback: @datag for me downside is it forces every session into being a lecture. for two decades cfps have emphasized how will this season be engaging/not just a talking head? i was required to turn workshops into talks this year. even tho tech can do more. not at all best pedagogy for learning— lisa janicke hinchliffe (@lisalibrarian) april , jason described the “flipped classroom” model that he had in mind as the nisoplus program was being developed.
the flipped classroom model is one where students do the work of reading material and watching lectures, then come to the interactive time with the instructors ready with questions and comments about the material. rather than the instructor lecturing during class time, the class time becomes a discussion about the material. for nisoplus, “the recording is the material the speaker and attendees are discussing” during the live zoom meetings. in the previous post, i described how it is beneficial to have the speaker respond in the text chat while the recording replays.

lisa went on to say:

@datag q+a is useful but isn't an interactive session. to me, interactive = participants are co-creating the session, not watching then commenting on it.— lisa janicke hinchliffe (@lisalibrarian) april ,

she described an example: the ssp preconference she ran at chs. i’m paraphrasing her tweets in this paragraph. the preconference had a short keynote and an “oprah-style” panel discussion (not pre-prepared talks). this was done live; nothing was recorded. after the panel, people worked in small groups using zoom and a set of google slides to guide the group work. the small groups reported their discussions back to all participants.

andromeda points out (paraphrasing twitter-speak): “presenters will need much more—and more specialized—skills to pull it off, and it takes a lot more work.” and lisa adds: “just so there is no confusion … i don’t think being online makes it harder to do interactive. it’s the pre-recording. interactive means participants co-create the session. a pause to chat isn’t going to shape what comes next on the recording.”

increased technical burden on speakers and organizers

@thatandromeda @datag totally agree on this. i had to pre-record a conference presentation recently and it was a terrible experience, logistically.
i feel like it forces presenters to become video/sound editors, which is obviously another thing to worry about on top of content and accessibility.— junior tidal (@juniortidal) april , andromeda also agreed with this: “i will say one of the things i appreciated about niso is that @griffey did all the video editing, so i was not forced to learn how that works.” she continued, “everyone has different requirements for prerecording, and in [code lib’s] case they were extensive and kept changing.” and later added: “part of the challenge is that every conference has its own tech stack/requirements. if as a presenter i have to learn that for every conference, it’s not reducing my workload.” it is hard not to agree with this; a high-quality (stylistically and technically) recording is not easy to do with today’s tools. this is also a technical burden for meeting organizers. the presenters will put a lot of work into talks—including making sure the recordings look good; whatever playback mechanism is used has to honor the fidelity of that recording. for instance, presenters who have gone through the effort to ensure the accessibility of the presentation color scheme want the conference platform to display the talk “as i created it.” the previous post noted that recorded talks also allow for the creation of better, non-real-time transcriptions. lisa points out that presenters will want to review that transcription for accuracy, which jason noted adds to the length of time needed before the start of a conference to complete the preparations. 
increased logistical burden on presenters

@thatandromeda @datag @griffey even if prep is no more than the time it would take to deliver live (which has yet to be case for me and i'm good at this stuff), it is still double the time if you are expected to also show up live to watch along with everyone else.— lisa janicke hinchliffe (@lisalibrarian) april ,

this is a consideration i hadn’t thought through—that presenters have to devote more clock time to the presentation because first they have to record it and then they have to watch it. (or, as andromeda added, “significantly more than twice the time for some people, if they are recording a bunch in order to get it right and/or doing editing.”)

no. audience. reaction.

@datag @griffey ) no. audience. reaction. i give a joke and no one laughs. was it funny? was it not funny? talks are a *performance* and a *relationship*; i'm getting energy off the audience, i'm switching stuff on the fly to meet their vibe. prerecorded/webinar is dead. feels like i'm bombing.— andromeda yelton (@thatandromeda) april ,

wow, yes. i imagine it would take a bit of imagination to get in the right mindset to give a talk to a small camera instead of an audience. i wonder how stand-up comedians are dealing with this as they try to put on virtual shows. andromeda summed this up:

@datag @griffey oh and i mean ) i don't get tenure or anything for speaking at conferences and goodness knows i don't get paid. so the entire benefit to me is that i enjoy doing the talk and connect to people around it. prerecorded talk + f f conf removes one of these; online removes both.— andromeda yelton (@thatandromeda) april ,

also in this heading could be “no speaker reaction”—or the inability for subsequent speakers at a conference to build on something that someone said earlier.
in the code lib slack team, daniel s noted: “one thing comes to mind on the pre-recording [is] the issue that prerecorded talks lose the ‘conversation’ aspect where some later talks at a conference will address or comment on earlier talks.” kate deibel added: “exactly. talks don’t get to spontaneously build off of each other or from other conversations that happen at the conference.”

currency of information

lisa points out that pre-recording talks before an event means there is a delay between the recording and the playback. in the example she pointed out, there was a talk at rluk that, had it been pre-recorded, would have been about the university of california working on an open access deal with elsevier; live, it was able to be “the deal we announced earlier this week”.

conclusions?

near the end of the discussion, lisa added:

@datag @griffey @thatandromeda i also recommend going forward that the details re what is required of presenters be in the cfp. it was one thing for conferences that pivoted (huge effort!) but if you write the cfp since the pivot it should say if pre-record, platform used, etc.— lisa janicke hinchliffe (@lisalibrarian) april ,

…and andromeda added: “strong agree here. i understand that this year everyone was making it up as they went along, but going forward it’d be great to know that in advance.” that means conferences will need to take these needs into account well before the call for proposals (cfp) is published. a conference that is thinking now about pre-recording their talks must work through these issues and set expectations with presenters early.

as i hoped, the twitter replies tempered my eagerness for the all-recorded style with some real-world experience. there could be possibilities here, but adapting face-to-face meetings to a world with less travel won’t be simple and will take significant thought beyond the issues of technology platforms. edward lim junhao summarized this nicely: “i favor unpacking what makes up our prof conferences.
i’m interested in recreating that shared experience, the networking, & the serendipity of learning sth you didn’t know. i feel in-person conferences now have to offer more in order to justify people traveling to attend them.” related, andromeda said: “also, for a conf that ultimately puts its talks online, it’s critical that it have something beyond content delivery during the actual conference to make it worth registering rather than just waiting for youtube. realtime interaction with the speaker is a pretty solid option.”

if you have something to add, reach out to me on twitter. given enough responses, i’ll create another summary. let’s keep talking about what that looks like and sharing discoveries with each other.

the tree of tweets

it was a great discussion, and i think i pulled in the major ideas in the summary above. with some guidance from ed summers, i’m going to embed the twitter threads below using treeverse by paul butler. we might be stretching the boundaries of what is possible, so no guarantees that this will be viewable for the long term.

should all conference talks be pre-recorded?

the code lib conference was last week. that meeting used all pre-recorded talks, and we saw the benefits of pre-recording for attendees, presenters, and conference organizers. should all talks be pre-recorded, even when we are back face-to-face?

note! after i posted a link to this article on twitter, there was a great response of thoughtful comments. i've included new bullet points below and summarized the responses in another blog post.

as an entirely virtual conference, i think we can call code lib a success. success ≠ perfect, of course, and last week the conference coordinating team got together on a zoom call for a debriefing session. we had a lengthy discussion about what we learned and what we wanted to take forward to the conference, which we’re anticipating will be something with a face-to-face component.
that last sentence was tough to compose: “…will be face-to-face”? “…will be both face-to-face and virtual”? (or another fully virtual event?) truth be told, i don’t think we know yet. i think we know with some certainty that the covid pandemic will become much more manageable by this time next year—at least in north america and europe. (code lib draws primarily from north american library technologists with a few guests from other parts of the world.) i’m hearing from higher education institutions, though, that travel is going to be severely curtailed…if not for health risk reasons, then because budgets have been slashed. so one has to wonder what a conference will look like next year.

i’ve been to two online conferences this year: nisoplus and code lib. both meetings recorded talks in advance and started playback of the recordings at a fixed point in time. this was beneficial for a couple of reasons. for organizers and presenters, pre-recording allowed technical glitches to be worked through without the pressure of a live event happening. technology is not nearly perfect enough or ubiquitously spread to count on it working in real-time. nisoplus also used the recordings to get transcribed text for the videos. (code lib used live transcriptions on the synchronous playback.) attendees and presenters benefited from pre-recording because the presenters could be in the text chat channel to answer questions and provide insights. having the presenter free during the playback offers new possibilities for making talks more engaging: responding in real-time to polls, getting advance knowledge of topics for subsequent real-time question/answer sessions, and so forth. the synchronous playback time meant that there was a point when (almost) everyone was together watching the same talk—just as in face-to-face sessions.
during the code lib conference coordinating debrief call, i asked the question: “if we saw so many benefits to pre-recording talks, do we want to pre-record them all next year?” in addition to the reasons above, pre-recorded talks benefit those who are not comfortable speaking english or are first-time presenters. (they have a chance to re-do their talk as many times as they need in a much less stressful environment.) “live” demos are much smoother because a recording can be restarted if something goes wrong. each year, at least one presenter needs to use their own machine (custom software, local development environment, etc.), and swapping out presenter computers in real-time is risky. and it is undoubtedly easier to impose time requirements with recorded sessions. so why not pre-record all of the talks? i get it—it would be different to sit in a ballroom watching a recording play on big screens at the front of the room while the podium is empty. but is it so different as to dramatically change the experience of watching a speaker at a podium? in many respects, we had a dry-run of this during code lib . it was at the early stages of the coming lockdowns when institutions started barring employee travel, and we had to bring in many presenters remotely. i wrote a blog post describing the setup we used for remote presenters, and at the end, i said: i had a few people comment that they were taken aback when they realized that there was no one standing at the podium during the presentation. some attendees, at least, quickly adjusted to this format. for those with the means and privilege of traveling, there can still be face-to-face discussions in the hall, over meals, and social activities. for those that can’t travel (due to risks of traveling, family/personal responsibilities, or budget cuts), the attendee experience is a little more level—everyone is watching the same playback and in the same text backchannels during the talk. 
i can imagine a conference tool capable of segmenting chat sessions during the talk playback to “tables” where you and close colleagues can exchange ideas and then promote the best ones to a conference-wide chat room. something like that would be beneficial as attendance grows for events with an online component, and it would be a new form of engagement that isn’t practical now. there are undoubtedly reasons not to pre-record all session talks (beyond the feels-weird-to-stare-at-an-unoccupied-ballroom-podium reasons). during the debriefing session, one person brought up that having all pre-recorded talks erodes the justification for in-person attendance. i can see a manager saying, “all of the talks are online…just watch it from your desk. even your own presentation is pre-recorded, so there is no need for you to fly to the meeting.” that’s legitimate. so if you like bullet points, here’s how it lays out.

pre-recording all talks is better for:

- accessibility: better transcriptions for recorded audio versus real-time transcription (and probably at a lower cost, too)
- engagement: the speaker can be in the text chat during playback, and there could be new options for backchannel discussions
- better quality: speakers can re-record their talk as many times as needed
- closer equality: in-person attendees are having much the same experience during the talk as remote attendees

downsides for pre-recording all talks:

- feels weird: yeah, it would be different
- erodes justification: indeed a problem, especially for those for whom giving a speech is the only path to getting the networking benefits of face-to-face interaction
- limits presentation format: it forces every session into being a lecture. for two decades cfps have emphasized how will this session be engaging/not just a talking head? (lisa janicke hinchliffe)
- increased technical burden on speakers and organizers: conference organizers asking presenters to do their own pre-recording is a barrier (junior tidal), and organizers have added new requirements for themselves
- no audience feedback: pre-recording forces the presenter into an unnatural state relative to the audience (andromeda yelton)
- currency of information: pre-recording talks before an event naturally introduces a delay between the recording and the playback. (lisa janicke hinchliffe)

i’m curious to hear of other reasons, for and against. reach out to me on twitter if you have some. the covid-19 pandemic has changed our society and will undoubtedly transform it in ways that we can’t even anticipate. is the way that we hold professional conferences one of them? can we just pause for a moment and consider the decades of work and layers of technology that make a modern teleconference call happen? for you younger folks, there was a time when one couldn’t assume the network to be there. as in: the operating system on your computer couldn’t be counted on to have a network stack built into it. in the earliest years of my career, we were tickled pink to have macintoshes at the forefront of connectivity through gatorboxes. go read the first paragraph of that wikipedia article on gatorboxes…tcp/ip was tunneled through localtalk running over phonenet on unshielded twisted pairs no faster than about kbit/second. (and we loved it!) now the network is expected; needing to know about tcp/ip is pushed so far down the stack as to be forgotten…assumed. sure, the software on top now is buggy and bloated—is my zoom client working? has zoom’s service gone down?—but the network…we take that for granted.

user behavior access controls at a library proxy server are okay

earlier this month, my twitter timeline lit up with mentions of a half-day webinar called cybersecurity landscape - protecting the scholarly infrastructure.
what had riled up the people i follow on twitter was the first presentation: “security collaboration for library resource access” by cory roach, the chief information security officer at the university of utah. many of the tweets and articles linked in tweets were about a proposal for a new round of privacy-invading technology coming from content providers as a condition of libraries subscribing to publisher content. one of the voices that i trust was urging caution: i highly recommend you listen to the talk, which was given by a university cio, and judge if this is a correct representation. fwiw, i attended the event and it is not what i took away.— lisa janicke hinchliffe (@lisalibrarian) november , as near as i can tell, much of the debate traces back to this article: scientific publishers propose installing spyware in university libraries to protect copyrights - coda story https://t.co/rtcokiukbf— open access tracking project (@oatp) november , the article describes cory’s presentation this way: one speaker proposed a novel tactic publishers could take to protect their intellectual property rights against data theft: introducing spyware into the proxy servers academic libraries use to allow access to their online services, such as publishers’ databases. the “spyware” moniker is quite scary. it is what made me want to seek out the recording from the webinar and hear the context around that proposal. my understanding (after watching the presentation) is that the proposal is not nearly as concerning. although there is one problematic area—the correlation of patron identity with requested urls—overall, what is described is a sound and common practice for securing web applications. to the extent that it is necessary to determine a user’s identity before allowing access to licensed content (an unfortunate necessity because of the state of scholarly publishing), this is an acceptable proposal. 
(through the university communications office, cory published a statement about the reaction to his talk.) in case you didn’t know, a web proxy server ensures the patron is part of the community of licensed users, and the publisher trusts requests that come through the web proxy server. the point of cory’s presentation is that the username/password checking at the web proxy server is a weak form of access control that is subject to four problems:

- phishing (sending email to trick a user into giving up their username/password)
- social engineering (non-email ways of tricking a user into giving up their username/password)
- credential reuse (systems that are vulnerable because the user used the same password in more than one place)
- hacktivism (users that intentionally give out their username/password so others can access resources)

right after listing these four problems, cory says: “but any way we look at it, we can safely say that this is primarily a people problem and the technology alone is not going to solve that problem. technology can help us take reasonable precautions… so long as the business model involves allowing access to the data that we’re providing and also trying to protect that same data, we’re unlikely to stop theft entirely.” his proposal is to place “reasonable precautions” in the web proxy server as it relates to the campus identity management system. this is a slide from his presentation:

slide from presentation by cory roach

i find this layout (and lack of labels) somewhat confusing, so i re-imagined the diagram as this:

revised 'modern library design'

the core of cory’s presentation is to add predictive analytics and per-user blocking automation to the analysis of the log files from the web proxy server and the identity management server. by doing so, the university can react more quickly to compromised usernames and passwords.
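to make the idea concrete, here is a toy sketch of what per-user analysis of proxy logs might look like. everything in it—the log format, the field layout, the threshold, the window—is invented for illustration; none of it comes from cory’s talk or from any real proxy server:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# assumed log format: "<iso-timestamp> <username> <url>", one request per line
THRESHOLD = 200        # assumed: max full-text requests per user per window
WINDOW = timedelta(hours=1)

def users_to_review(log_lines):
    """return usernames whose request rate exceeds THRESHOLD within WINDOW."""
    requests = defaultdict(list)
    for line in log_lines:
        stamp, user, _url = line.split(maxsplit=2)
        requests[user].append(datetime.fromisoformat(stamp))
    flagged = set()
    for user, stamps in requests.items():
        stamps.sort()
        start = 0                      # left edge of a sliding time window
        for end, t in enumerate(stamps):
            while t - stamps[start] > WINDOW:
                start += 1
            if end - start + 1 > THRESHOLD:
                flagged.add(user)
                break
    return flagged
```

in a real deployment the flagged list would presumably feed the identity management system—forcing a password reset, say—rather than silently blocking, and the threshold would need tuning against actual usage patterns so legitimate heavy researchers are not swept up.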
in fact, it could probably do so more quickly than the publisher could with its own log analysis and reporting back to the university. where cory runs into trouble is this slide:

slide from presentation by cory roach

in this part of the presentation, cory describes the kinds of patron-identifying data that the university could-or-would collect and analyze to further the security effort. in search engine optimization, these sorts of data points are called “signals” and are used to improve the relevance of search results; perhaps there is an equivalent term in access control technology. but for now, i’ll just call them “signals”. there are some problems in gathering these signals—most notably the correlation between user identity and “urls requested”. in the presentation, he says: “you can also move over to behavioral stuff. so it could be, you know, why is a pharmacy major suddenly looking up a lot of material on astrophysics or why is a medical professional at a hospital suddenly interested in internal combustion. things that just don’t line up and we can identify fishy behavior.” it is core to the library ethos that we make our best effort to not track what a user is interested in—to not build a profile of a user’s research unless they have explicitly opted into such data collection. as librarians, we need to gracefully describe this professional ethos and work that into the design of the systems used on campus (and at the publishers). still, there is much to be said for using some of the other signals to analyze whether a particular request is from an authorized community member. for instance, cory says: “we commonly see this user coming in from the us and today it’s coming in from botswana. you know, has there been enough time that they could have traveled from the us to botswana and actually be there?
have they ever accessed resources from that country before? are there residents on record in that country?” the best part of what cory is proposing is that the signals’ storage and processing is at the university and not at the publisher. i’m not sure if cory knew this, but a recent version of ezproxy added a usagelimit directive that builds in some of these capabilities. it can set per-user limits based on the number of page requests or the amount of downloaded information over a specified interval. one wonders if somewhere in oclc’s development queue is the ability to detect ip addresses from multiple networks (geographic detection) and browser differences across a specified interval. still, pushing this up to the university’s identity provider allows for a campus-wide view of the signals…not just the ones coming through the library. also, in designing the system, there needs to be clarity about how the signals are analyzed and used. i think cory knew this as well: “we do have to be careful about not building bias into the algorithms.”

yeah, the need for this technology sucks. although it was the tweet to the coda story about the presentation that blew up, the thread of the story goes through techdirt to a tangential paragraph from netzpolitik in an article about germany’s licensing struggle with elsevier. with this heritage, any review of the webinar’s ideas is automatically tainted by the disdain the library community in general has towards elsevier. it is reality—an unfortunate reality, in my opinion—that the traditional scholarly journal model has publishers exerting strong copyright protection on research and ideas behind paywalls. (wouldn’t it be better if we poured the anti-piracy effort into improving scholarly communication tools in an open access world? yes, but that isn’t the world we live in.) almost every library deals with this friction by employing a web proxy server as an agent between the patron and the publisher’s content.
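as an aside, the geographic “impossible travel” signal described above (the us-to-botswana example) reduces to a great-circle distance and a maximum plausible travel speed. here is a minimal sketch; the speed bound, the timestamp format, and the function names are all assumptions for illustration, not anything from the talk or from ezproxy:

```python
from math import radians, sin, cos, asin, sqrt
from datetime import datetime

MAX_SPEED_KMH = 900  # assumed: roughly commercial-flight speed

def km_between(lat1, lon1, lat2, lon2):
    """great-circle distance in km via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean earth radius

def travel_is_plausible(prev_login, next_login):
    """each login is (iso_timestamp, lat, lon); true if the user could have moved that far."""
    (t1, lat1, lon1), (t2, lat2, lon2) = prev_login, next_login
    hours = abs((datetime.fromisoformat(t2) - datetime.fromisoformat(t1)).total_seconds()) / 3600
    return km_between(lat1, lon1, lat2, lon2) <= MAX_SPEED_KMH * hours
```

two logins an hour apart from, say, columbus and gaborone would fail this check, while two logins an hour apart from the same city would pass—and, importantly, the check needs only coarse ip geolocation and timestamps, not the urls a patron requested.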
the netzpolitik article says: …but relies on spyware in the fight against „cybercrime“ of course, sci-hub and other shadow libraries are a thorn in elsevier’s side. since they have existed, libraries at universities and research institutions have been much less susceptible to blackmail. their staff can continue their research even without a contract with elsevier. instead of offering transparent open access contracts with fair conditions, however, elsevier has adopted a different strategy in the fight against shadow libraries. these are to be fought as „cybercrime“, if necessary also with technological means. within the framework of the „scholarly networks security initiative (snsi)“, which was founded together with other large publishers, elsevier is campaigning for libraries to be upgraded with security technology. in a snsi webinar entitled „cybersecurity landscape – protecting the scholarly infrastructure“*, hosted by two high-ranking elsevier managers, one speaker recommended that publishers develop their own proxy or a proxy plug-in for libraries to access more (usage) data („develop or subsidize a low cost proxy or a plug-in to existing proxies“). with the help of an „analysis engine“, not only could the location of access be better narrowed down, but biometric data (e.g. typing speed) or conspicuous usage patterns (e.g. a pharmacy student suddenly interested in astrophysics) could also be recorded. any doubts that this software could also be used—if not primarily—against shadow libraries were dispelled by the next speaker. an ex-fbi analyst and it security consultant spoke about the security risks associated with the use of sci-hub. the other commentary that i saw was along similar lines: [is the snsi the new prism? 
bjoern.brembs.blog](http://bjoern.brembs.net/ / /is-the-snsi-the-new-prism/) [academics band together with publishers because access to research is a cybercrime chorasimilarity](https://chorasimilarity.wordpress.com/ / / /academics-band-together-with-publishers-because-access-to-research-is-a-cybercrime/) [whois behind snsi & getftr? motley marginalia](https://csulb.edu/~ggardner/ / / /snsi-getftr/)

let’s face it: any friction beyond follow-link-to-see-pdf is more friction than a researcher deserves. i doubt we would design a scholarly communication system this way were we to start from scratch. but the system is built on centuries of evolving practice, organizations, and companies. it really would be a better world if we didn’t have to spend time and money on scholarly publisher paywalls. and i’m grateful for the open access efforts that are pivoting scholarly communications into an open-to-all paradigm. that doesn’t negate the need to provide better options for content that must exist behind a paywall.

so what is this snsi thing?

the webinar where cory presented was the first mention i’d seen of a new group called the scholarly networks security initiative (snsi). snsi is the latest in a series of publisher-driven initiatives to reduce the paywall’s friction for paying users or library patrons coming from licensing institutions; earlier entries in that series include getftr (my thoughts) and seamless access (my thoughts). (disclosure: i’m serving on two working groups for seamless access that are focused on making it possible for libraries to sensibly and sanely integrate the goals of seamless access into campus technology and licensing contracts.) interestingly, while the seamless access initiative is driven by a desire to eliminate web proxy servers, this snsi presentation upgrades a library’s web proxy server and makes it a more central tool between the patron and the content.
one might argue that all access on campus should come through the proxy server to benefit from this kind of access control approach. it kinda makes one wonder about the coordination of these efforts. still, snsi is on my radar now, and i think it will be interesting to see what the next events and publications are from this group.

as a cog in the election system: reflections on my role as a precinct election official

i may nod off several times in composing this post the day after election day. hopefully, in reading it, you won’t. it is a story about one corner of democracy. it is a journal entry about how it felt to be a citizen doing what i could do to make other citizens’ voices be heard. it needed to be written down before the memories and emotions are erased by time and naps. yesterday i was a precinct election officer (peo—a poll worker) for franklin county—home of columbus, ohio. it was my third election as a peo. the first was last november, and the second was the election aborted by the onset of the coronavirus in march. (not sure that second one counts.) it was my first as a voting location manager (vlm), so i felt the stakes were high to get it right. would there be protests at the polling location? would i have to deal with people wearing candidate t-shirts and hats or not wearing masks? would there be a crush of election observers, whether official (scrutinizing our every move) or unofficial (that i would have to remove)? it turns out the answer to all three questions was “no”—and it was a fantastic day of civic engagement by peos and voters. there were well-engineered processes and policies, happy and patient enthusiasm, and good fortune along the way. this story is going to turn out okay, but it could have been much worse. because of the complexity of the election day voting process, last year franklin county started allowing peos to do some early setup on monday evenings. the early setup started at o’clock.
i was so anxious to get it right that the day before i took the printout of the polling room dimensions from my vlm packet, scanned it into omnigraffle on my computer, and designed a to-scale diagram of what i thought the best layout would be. the real thing only vaguely looked like this, but it got us started. what i imagined our polling place would look like we could set up tables, unpack equipment, hang signs, and other tasks that don’t involve turning on machines or breaking open packets of ballots. one of the early setup tasks was updating the voters’ roster on the electronic poll pads. as happened around the country, there was a lot of early voting activity in franklin county, so the update file must have been massive. the electronic poll pads couldn’t handle the update; they hung at step -of- for over an hour. i called the board of elections and got ahold of someone in the equipment warehouse. we tried some of the simple troubleshooting steps, and he gave me his cell phone number to call back if it wasn’t resolved. by : , everything was done except for the poll pad updates, and the other peos were wandering around. i think it was o’clock when i said everyone could go home while the two voting location deputies and i tried to get the poll pads working. i called the equipment warehouse and we hung out on the phone for hours…retrying the updates based on the advice of the technicians called in to troubleshoot. i even “went rogue” towards the end. i searched the web for the messages on the screen to see if anyone else had seen the same problem with the poll pads. the electronic poll pad is an ipad with a single, dedicated application, so i even tried some ipad reset options to clear the device cache and perform a hard reboot. nothing worked—still stuck at step -of- . the election office people sent us home at o’clock. 
even on the way out the door, i tried a rogue option: i hooked a portable battery to one of the electronic polling pads to see if the update would complete overnight and be ready for us the next day. it didn’t, and it wasn’t. text from board of elections polling locations in ohio open at : in the morning, and peos must report to their sites by : . so i was up at : for a quick shower and packing up stuff for the day. early in the setup process, the board of elections sent a text that the electronic poll pads were not going to be used and to break out the “bumper packets” to determine a voter’s eligibility to vote. at some point, someone told me what “bumper” stood for. i can’t remember, but i can imagine it is back-up-something-something. “never had to use that,” the trainers told me, but it is there in case something goes wrong. well, it is the year , so was something going to go wrong? fortunately, the roster judges and one of the voting location deputies tore into the bumper packet and got up to speed on how to use it. it is an old fashioned process: the voter states their name and address, the peo compares that with the details on the paper ledger, and then asks the voter to sign beside their name. with an actual pen…old fashioned, right? the roster judges had the process down to a science. they kept the queue of verified voters full waiting to use the ballot marker machines. the roster judges were one of my highlights of the day. and boy did the voters come. by the time our polling location opened at : in the morning, they were wrapped around two sides of the building. we were moving them quickly through the process: three roster tables for checking in, eight ballot-marking machines, and one ballot counter. at our peak capacity, i think we were doing to voters an hour. as good as we were doing, the line never seemed to end. the franklin county board of elections received a grant to cover the costs of two greeters outside that helped keep the line orderly. 
they did their job with a welcoming smile, as did our inside greeter, who offered masks and a squirt of hand sanitizer. still, the voters kept back-filling that line, and we didn’t see a break until : . the peos serving as machine judges were excellent. this was the first time that many voters had seen the new ballot equipment that franklin county put in place last year. i like this new equipment: the ballot marker prints your choices on a card that it spits out. you can see and verify your choices on the card before you slide it into a separate ballot counter. that is reassuring for me, and i think for most voters, too. but it is new, and it takes a few extra moments to explain. the machine judges got the voters comfortable with the new process. and some of the best parts of the day were when they announced to the room that a first-time voter had just put their card into the ballot counter. we would all pause and cheer. the third group of peos at our location were the paper table judges. they handled all of the exceptions. someone wants to vote with a pre-printed paper ballot rather than using a machine? to the paper table! the roster shows that someone requested an absentee ballot? that voter needs to vote a “provisional” ballot that will be counted at the board of elections office if the absentee ballot isn’t received in the mail. the paper table judges explained that with kindness and grace. in the wrong location? the paper table judges would find the correct place. the two paper table peos clearly had experience helping voters with the nuances of election processes. rounding out the team were two voting location deputies (vld). by law, a polling location can’t have a vld and a voting location manager (vlm) of the same political party. that is part of the checks and balances built into the system. one vld had been a vlm at this location, and she had a wealth of history and wisdom about running a smooth polling location.
for the other vld, this was his first experience as a precinct election officer, and he jumped in with both feet to do the visible and not-so-visible things that made for a smooth operation. he reminded me a bit of myself a year ago. my first peo position was as a voting location deputy last november. the pair handled a challenging curbside voter situation where it wasn’t entirely clear if one of the voters in the car was sick. i’d be so lucky to work with them again. the last two hours of the open polls yesterday were dreadfully dull. after the excitement of the morning, we may have averaged a voter every minutes for those last two hours. everyone was ready to pack it in early and go home. (polls in ohio close at : , so counting the hour early for setup and the half hour for teardown, this was going to be a to hour day.) over the last hour, i gave the peos little tasks to do. at one point, i said they could collect the barcode scanners attached to the ballot markers. we weren’t using them anyway because the electronic poll pads were not functional. then, in stages (as it became evident that there was no final rush of voters), they could pack up one or two machines and put away tables. our second-to-last voter was someone in medical scrubs who had just gotten off their shift. i scared our last voter because she walked up to the roster table at : : . thirty seconds later, i called out that the polls were closed (as i think a vlm is required to do), and she looked at me startled. (she got to vote, of course; that’s the rule.) she was our last voter; voters in our precinct that day. then our team packed everything up as efficiently as they had worked all day. we had put away the equipment and signs, done our final counts, closed out the ballot counter, and sealed the ballot bin. at : , we were done and waving goodbye to our host facility’s office manager.
one of the vlds rode along with me to the board of elections to drop off the ballots, and she told me of a shortcut to get there. we were among the first reporting results for franklin county. i was home again by a quarter of —exhausted but proud. i’m so happy that i had something to do yesterday. after weeks of concern and anxiety about how the election was going to turn out, it was a welcome bit of activity to ensure the election was held safely and that voters got to have their say. it was certainly more productive than continually reloading news and election results pages. the anxiety of being put in charge of a polling location was set at ease, too. i’m proud of our polling place team and that the voters in our charge seemed pleased and confident about the process. maybe you will find inspiration here. if you voted, hopefully it felt good (whether or not the result turned out as you wanted). if you voted for the first time, congratulations and welcome to the club (be on the lookout for the next voting opportunity…likely in the spring). if being a poll worker sounded like fun, get in touch with your local board of elections (here is information about being a poll worker in franklin county). democracy is participatory. you’ve got to tune in and show up to make it happen.

certificate of appreciation

running an all-online conference with zoom

[post removed] this is an article draft that was accidentally published. i hope to work on a final version soon. if you really want to see it, i saved a copy on the internet archive wayback machine.

with gratitude for the niso ann marie cunningham service award

during the inaugural niso plus meeting at the end of february, i was surprised and proud to receive the ann marie cunningham service award. todd carpenter, niso’s executive director, let me know by tweet as i was not able to attend the conference.
pictured in that tweet is my co-recipient, christine stohn, who serves niso with me as the co-chair of the information delivery and interchange topic committee. this got me thinking about what niso has meant to me. as i think back on it, my activity in niso spans at least four employers and many hours of standards working group meetings, committee meetings, presentations, and ballot reviews.

niso ann marie cunningham service award

i did not know ms cunningham, the award’s namesake. my first job started when she was the nfais executive director in the early s, and i hadn’t been active in the profession yet. i read her brief biography on the niso website:

the ann marie cunningham service award was established in to honor nfais members who routinely went above and beyond the normal call of duty to serve the organization. it is named after ann marie cunningham who, while working with abstracting and information services such as biological abstracts and the institute for scientific information (both now part of niso-member clarivate analytics), worked tirelessly as a dedicated nfais volunteer. she ultimately served as the nfais executive director from to when she died unexpectedly. niso is pleased to continue to present this award to honor a niso volunteer who has shown the same sort of commitment to serving our organization.

as i searched the internet for her name, i came across the proceedings of the nfais meeting, in which ms cunningham wrote the introduction with wendy wicks. these first sentences from some of the paragraphs of that introduction are as true today as they were then: in an era of rapidly expanding network access, time and distance no longer separate people from information. much has been said about the global promise of the internet and the emerging concept of linking information highways, to some people, “free” ways.
what many in the networking community, however, seem to take for granted is the availability of vital information flowing on these high-speed links. i wonder what ms cunningham of would think of the information landscape today? hypertext linking has certainly taken off, if not taken over, the networked information landscape. how that interconnectedness has improved with the adaptation of print-oriented standards and the creation of new standards that match the native capabilities of the network. in just one corner of that space, we have the adoption of pdf as a faithful print replica and html as a common tool for displaying information. in another corner, marc has morphed into a communication format that far exceeds its original purpose of encoding catalog cards; we have an explosion of purpose-built metadata schemas and always the challenge of finding common ground in tools like dublin core and schema.org. we’ve seen several generations of tools and protocols for encoding, distributing, and combining data in new ways to reach users. and still we strive to make it better…to more easily deliver a paper to its reader—a dataset to its next experimenter—an idea to be built upon by the next generation. it is that communal effort to make a better common space for ideas that drives me forward. to work in a community at the intersection of libraries, publishers, and service providers is an exciting and fulfilling place to be. i’m grateful to my employers, who have given me the ability to participate while bringing the benefits of that connectedness to my organizations. i was not able to be at niso plus to accept the award in person, but i was so happy to be handed it by jason griffey of niso about a week later during the code4lib conference in pittsburgh. what made that even more special was to learn that jason created it on his own 3d printer. thank you to the new nfais-joined-with-niso community for honoring me with this service award.
tethering a ubiquiti network to a mobile hotspot

i saw it happen.

the cable-chewing device

the contractor in the neighbor’s back yard with the ditch witch trencher was burying a cable. i was working outside at the patio table and just about to go into a zoom meeting. then the internet dropped out. suddenly, and with a wrenching feeling in my gut, i remembered where the feed line was buried between the house and the cable company’s pedestal in the right-of-way between the properties. yup, he had just cut it. to be fair, the utility locator service did not mark my cable’s location, and he was working for a different cable provider than the one we use. (there are three providers in our neighborhood.) it did mean, though, that our broadband internet would be out until my provider could come and run another line. it took an hour of moping about the situation to figure out a solution, then another couple of hours to put it in place: an iphone tethered to a raspberry pi that acted as a network bridge to my home network’s unifi security gateway p.

network diagram with tethered iphone

a few years ago i was tired of dealing with spotty consumer internet routers and upgraded the house to unifi gear from ubiquiti. rob pickering, a college comrade, had written about his experience with the gear and i was impressed. it wasn’t a cheap upgrade, but it was well worth it. (especially now with four people in the household working and schooling from home during the covid-19 outbreak.) the unifi security gateway has three network ports, and i was using two: one for the uplink to my cable internet provider (wan) and one for the local area network (lan) in the house. the third port can be configured as another wan uplink or as another lan port. and you can tell the security gateway to use the second wan as a failover for the first wan (or for load balancing with the first wan). so that is straightforward enough, but how do i get the personal hotspot on the iphone to the second wan port?
that is where the raspberry pi comes in. the raspberry pi is a small computer with usb, ethernet, hdmi, and audio ports. the version i had lying around is a raspberry pi —an older model, but plenty powerful enough to be the network bridge between the iphone and the home network. the toughest part was bootstrapping the operating system packages onto the pi with only the iphone personal hotspot as the network. that is what i’m documenting here for future reference.

bootstrapping the raspberry pi

the raspberry pi runs its own operating system called raspbian (a debian/linux derivative), as well as more mainstream operating systems. i chose to use the ubuntu server for raspberry pi instead of raspbian because i’m more familiar with ubuntu. i tethered my macbook pro to the iphone to download the ubuntu . . lts image and followed the instructions for copying that disk image to the pi’s microsd card. that allowed me to boot the pi with ubuntu and a basic set of operating system packages.

the challenge: getting the required networking packages onto the pi

it would have been really nice to plug the iphone into the pi with a usb-lightning cable and have it find the tethered network. that doesn’t work, though. ubuntu needs at least the usbmuxd package in order to see the tethered iphone as a network device. that package isn’t a part of the disk image download. and of course i can’t plug my pi into the home network to download it (see first paragraph of this post). my only choice was to tether the pi to the iphone over wifi with a usb network adapter. and that was a bit of ubuntu voodoo. fortunately, i found instructions on configuring ubuntu to use a wpa-protected wireless network (like the one that the iphone personal hotspot is providing).
in brief:

  sudo -i
  cd /root
  wpa_passphrase my_ssid my_ssid_passphrase > wpa.conf
  screen -q
  wpa_supplicant -Dwext -iwlan0 -c/root/wpa.conf
  <control-a> c
  dhclient -r
  dhclient wlan0

explanation of lines:

1. use sudo to get a root shell.
2. change directory to root's home.
3. use the wpa_passphrase command to create a wpa.conf file. replace my_ssid with the wireless network name provided by the iphone (your iphone’s name) and my_ssid_passphrase with the wireless network passphrase (see the “wi-fi password” field in settings -> personal hotspot).
4. start the screen program (quietly) so we can have multiple pseudo terminals.
5. run the wpa_supplicant command to connect to the iphone wifi hotspot. we run this in the foreground so we can see the status/error messages; this program must continue running to stay connected to the wifi network.
6. use the screen hotkey to create a new pseudo terminal. this is control-a followed by the letter c.
7. use dhclient to clear out any dhcp network parameters.
8. use dhclient to get an ip address from the iphone over the wireless network.

now i was at the point where i could install ubuntu packages. (i ran ping www.google.com to verify network connectivity.) to install the usbmuxd and network bridge packages (and their prerequisites):

  apt-get install usbmuxd bridge-utils

if your experience is like mine, you’ll get an error back:

  couldn't get lock /var/lib/dpkg/lock-frontend

the ubuntu pi machine is now on the network, and the automatic process to install security updates is running. that locks the ubuntu package registry until it finishes. that took about minutes for me. (i imagine this varies based on the capacity of your tethered network and the number of security updates that need to be downloaded.) i monitored the progress of the automated process with the htop command and tried the apt-get command again when it finished. if you are following along, now would be a good time to skip ahead to configuring the unifi security gateway if you haven’t already set that up.
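rather than watching htop by hand until the lock clears, the wait can be scripted. this is a minimal sketch of my own (not from the original post): the function takes the lock file path as a parameter, and the use of fuser plus the ten-second retry interval are my choices.

```shell
#!/bin/sh
# wait_for_lock: poll until no process holds the given lock file,
# then return. fuser exits non-zero once nothing has the file open.
wait_for_lock() {
    while fuser "$1" >/dev/null 2>&1; do
        sleep 10    # unattended-upgrades can hold the lock for a while
    done
}

# on the pi you would run (as root):
# wait_for_lock /var/lib/dpkg/lock-frontend && apt-get install usbmuxd bridge-utils
```

the install line is left commented out so the function can be tried safely first; against a file that nothing holds open, it returns immediately.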
turning the raspberry pi into a network bridge

with all of the software packages installed, i restarted the pi to complete the update:

  shutdown -r now

while it was rebooting, i pulled out the usb wireless adapter from the pi and plugged in the iphone’s usb cable. the pi now saw the iphone as eth1, but the network did not start until i went to the iphone to say that i “trust” the computer that it is plugged into. when i did that, i ran these commands on the ubuntu pi:

  dhclient eth1
  brctl addbr iphonetether
  brctl addif iphonetether eth0 eth1
  brctl stp iphonetether on
  ifconfig iphonetether up

explanation of lines:

1. get an ip address from the iphone over the usb interface.
2. add a network bridge (the iphonetether is an arbitrary string; some instructions simply use br0 for the zeroth bridge).
3. add the two ethernet interfaces to the network bridge.
4. turn on the spanning tree protocol (i don’t think this is actually necessary, but it does no harm).
5. bring up the bridge interface.

the bridge is now live! thanks to amitkumar pal for the hints about using the pi as a network bridge. more details about the bridge networking software is on the debian wiki. note! i'm using a hardwired keyboard/monitor to set up the raspberry pi. i've heard from someone that was using ssh to run these commands, and the ssh connection would break off at brctl addif iphonetether eth0 eth1

configuring the unifi security gateway

i have a unifi cloud key, so i could change the configuration of the unifi network with a browser. (you’ll need to know the ip address of the cloud key; hopefully you have that somewhere.) i connected to my cloud key at https:// . . . : / and clicked through the self-signed certificate warning. first i set up a second wide area network (wan—your uplink to the internet) for the iphone personal hotspot: settings -> internet -> wan networks. select “create a new network”:

  network name: backup wan
  ipv4 connection type: use dhcp
  ipv6 connection types: use dhcpv6
  dns server: 1.1.1.1 and 1.0.0.1
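for future reference, the five bridge commands can be collected into one small re-runnable script. this is a sketch under my own assumptions (the interface names eth0/eth1 may differ on your pi; check ip link first), with a dry-run mode that just prints the commands instead of executing them:

```shell
#!/bin/sh
# bridge_up: run the bridge commands from above, in order.
# $1 = lan interface, $2 = iphone interface.
# set RUN=echo for a dry run (prints each command instead of running it).
bridge_up() {
    lan=$1; phone=$2; run=${RUN:-}
    $run dhclient "$phone"                          # ip address from the iphone
    $run brctl addbr iphonetether                   # create the bridge
    $run brctl addif iphonetether "$lan" "$phone"   # join both interfaces
    $run brctl stp iphonetether on                  # spanning tree (optional)
    $run ifconfig iphonetether up                   # bring the bridge up
}

# dry run first, to eyeball the commands before running them as root:
RUN=echo bridge_up eth0 eth1
```

to run it for real on the pi, call bridge_up eth0 eth1 as root with RUN unset.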
(cloudflare’s dns servers)
  load balancing: failover only

the last selection is key…i wanted the gateway to only use this wan interface as a backup to the main broadband interface. if the broadband comes back up, i want to stop using the tethered iphone! second, assign the backup wan to the lan /wan port on the security gateway (devices -> gateway -> ports -> configure interfaces):

  port wan /lan network: wan
  speed/duplex: autonegotiate

apply the changes to provision the security gateway. after about seconds, the security gateway failed over from “wan iface eth ” (my broadband connection) to “wan iface eth ” (my tethered iphone through the pi bridge). these showed up as alerts in the unifi interface.

performance and results

so i’m pretty happy with this setup. the family has been running simultaneous zoom calls and web browsing on the home network, and the performance has been mostly normal. web pages do take a little longer to load, but whatever zoom is using to dynamically adjust its bandwidth usage is doing quite well. this is chewing through the mobile data quota pretty fast, so it isn’t something i want to do every day. knowing that this is possible, though, is a big relief. as a bonus, the iphone is staying charged via the amp power coming through the pi.

tno october
--------------------------------------------------------------------
t h e   n e t w o r k   o b s e r v e r
volume , number                                              october
--------------------------------------------------------------------
this month: the gender politics of "exploring" the net
            strange ideas about privacy
            pre-employment background checks
--------------------------------------------------------------------

welcome to tno ( ). this month's issue includes two articles by the editor.
the first one explores the metaphor of "exploring" the internet, suggesting that the gross disorganization of the net promotes a social construction of the net as a masculine place and a neglect of the historically feminine activity of librarianship. the second article lists a batch of unfortunate arguments about informational privacy that i have encountered in my reading and travel over the last year, together with my own rebuttals against them. may these rebuttals serve you in your own privacy activism. -------------------------------------------------------------------- is the net a wilderness or a library? at the cpsr annual meeting earlier this month, karen coyle gave a rip-roaring speech that set me thinking about metaphors for using the net. karen is a library automation specialist and a cpsr activist who is active in getting women and girls involved in computing. in her speech she pointed out that, as a technology for making information available to people, compared to any real library, the internet is an amateur job. sure there's a reasonable amount of information, but it has been haphazardly collected, is almost completely disorganized, has no standard cataloguing system, and only the beginnings of a decent, uniform interface. discussing her speech with another cpsr activist, jim davis, later that evening, i suddenly connected several things that had been bothering me about the language and practice of the internet. the result was a partial answer to the difficult question, in what sense is the net "gendered"? the reason this question is difficult is that we don't want to be reductionist about it. it's clearly not true that only men use the net, or that only men find the net worthwhile, or that all women encounter more obstacles to net usage than any men do. our analysis needs to be more subtle than that. i don't claim to have a finished answer, but i do think i have one piece of it. that piece starts with the metaphor of "exploring". 
i've talked with several people who have tried to teach internet usage to people who aren't computer professionals, and there's one thing they all tell me: many students give up in frustration after repeatedly getting lost using "browsing" tools like gopher and mosaic. even a decent "history" menu doesn't seem to suffice. they find themselves "somewhere" in the net, don't know where they are, don't know how to find their way back there, and see no real logical connection between any one place and any other. note the curious collision of metaphors here. the most common use of "browsing" is in regard to libraries: wandering down the aisles in known sections, seeing what books might be on the shelf, just in case something interesting comes up. the word is also often applied to the analogous activity in bookstores. applied to a tool like mosaic or gopher, though, the metaphor is precisely backward: libraries and bookstores have clear ordering systems that are visible in the spatial layout of the building, and "browsing" suggests that you haven't got any very specific goal in your looking-around. but no such visible ordering system is found in gopherspace or the worldwide web, and people often need to use those tools to actually find something that has certain properties. but "browsing" isn't really the generative metaphor that's at work in systems like gopher or the web. the generative metaphor -- the metaphor that generates new meanings and new language for the activity of using the tools -- is "exploring". one uses these tools to "explore" the net. think what *this* metaphor entails. one normally explores alone, or with a small "party" with a definite organization. one has a location at any given time, yet one does not normally know with any precision what that location is. 
one is in strange territory, far from home, and the assumption is that few others like oneself have been there before -- at least, any markings on rocks or trees that are recognizable as being from one's own kind are rare and important signs. one is normally in danger, or at least in grave uncertainty, and one must learn to tolerate continual fear. (see my discussion of the related metaphor of the "electronic frontier", employed by the otherwise laudable electronic frontier foundation, in tno ( ).) all of this does, in fact, describe the experience of many new users of tools like gopher and mosaic -- and many other such tools as well. maybe they get used to it, and maybe they don't. the real question, though, is: should they *have* to get used to it? clearly not. yet for many people, "exploring" is close to defining the experience of the net. it is clearly a gendered metaphor: it has historically been a male activity, and it comes down to us saturated with a long list of meanings related to things like colonial expansion, experiences of otherness, and scientific discovery. explorers often die, and often fail, and the ones that do neither are heroes and role models. this whole complex of meanings and feelings and strivings is going to appeal to those who have been acculturated into a particular male-marked system of meanings, and it is not going to offer a great deal of meaning to anyone who has not. the use of prestigious artifacts like computers is inevitably tied up with the construction of personal identity, and "exploration" tools offer a great deal more traction in this process to historically male cultural norms than to female ones. this sort of thing cuts particularly hard in middle schools, when kids between and establish both their gendered adult social identities and their formative skills and relationships with technology. once the computer room gets defined as a "boys' place", it's just about all over for girls. 
teachers abet this process when they reinforce the pointlessly masculine metaphors of exploration through their lessons in the computer lab. if the net necessarily worked this way, or if it worked this way for a good reason, then we'd have a real problem here. but, as karen coyle points out, that's not the case. the net right now really is an amateur job, and perhaps it's not surprising that the missing element is something that historically has been strongly coded as a female activity, namely librarianship: ordering, marking, and cataloguing information so that people can actually find it and use it, and staffing the desk where people go for help with this process. are the net's dysfunctionalities actually central to the gendered experience of using the net? and what if they are? then maybe those heroic browsing tools should be left for the second course on using the internet, and maybe the far more useful skills of communicating on the net should occupy the first course. i don't just mean technical skills here -- i also mean the skills of composing clear texts, reading with an awareness of different possible interpretations, recognizing and resolving conflicts, asking for help without feeling powerless, organizing people to get things done, and embracing the diversity of the backgrounds and experiences of others. just sending and receiving messages on the computer is of little use in itself if these deeper human lessons are not taught and learned as well. electronic mail interaction is a good place to learn these skills because the e-mail texts can be saved, inspected, discussed, thought over, revised, presented as models, collaborated upon, and so forth. what's more, the motions of typing slow the process down enough that impulsive reaction becomes more difficult and thoughtful reflection becomes more likely. but i think that an ever deeper question lurks behind the issue of network metaphors. where is the reference desk on the net? 
some systems (like the well) have schemes where you can "shout" for help to the other users and get technical assistance, but these schemes are few, not standardized, often unreliable, and usually limited to technical matters. why is the net developing without leaving a place for the important role played by real, live human librarians in libraries? the librarian is the person who knows what information is out there, where and how to find it, which tools work best for searching, which reference works are best for what purposes, how the special collections are organized, who is the expert on what, and so forth. a library isn't just a bunch of books: it's a human system that's set up to help connect people to information. why isn't the net like this? why does the space you "explore" in gopher or mosaic look empty even when it's full of other people? why isn't there a mechanism for asking for help? it wouldn't be hard to organize. just as libraries use networks now to share the costs of cataloguing books through organizations like the cooperative cataloguing council, they could also share the costs of on-line professional librarianship assistance in order to provide -hour coverage to participating institutions. this process would need software support as well: when you need help, you'd type in a text message (or just enter a voice recording) explaining what you're trying to do. you would be automatically connected to a helper, and a snapshot of your "browser" session would appear on their screen. a simple expert system would guess at which helper would be best suited to your question, based on their areas of expertise. it's important to do this on the library model and not just on a commercial model. librarians have no conflicts of interest that might influence them to steer you toward particular databases, and they're not paid by the hour so they have no interest in prolonging the interaction unnecessarily. 
they do, however, have an organizational interest in customers being happy with the library. they also have long experience in balancing the interests of the organization they're paid to serve (for example, a particular university) with the larger public interest. what would the internet's tools be like if their designers routinely thought about the social relationships of their use? it's a hard question, precisely because of the one-user-one-tool model of lonely exploration that still routinely goes into the design of such systems. the net opens up a whole world of possible new ways of connecting people together, but we'll squander its potential until we appreciate the role of the helping professions, and more generally the thoroughly social nature of the activities that are, we are told, rapidly migrating into the chilly nighttime of cyberspace. -------------------------------------------------------------------- some strange ideas about privacy. the emergence of new digital technologies is opening up a new world of privacy issues. along the way, our ideas about what privacy even *is* will presumably be rethought and refought in a variety of ways. tno ( ) has already looked at some new concepts of privacy that might be required to understand the use of computers to track human activities, and tno ( ) has taken a quick look at one attempt to define "privacy" in such a way that companies protect your privacy by accumulating massive amounts of information on you. here i collect some strange ideas about privacy that i have encountered in my reading and traveling over the last year. i have not tried to document any of them in a scholarly way, and the bulleted quotations represent composite or abbreviated versions of the lines i have heard. my purpose is not to make accusations against the people who use such lines. many people honestly believe them, and many others are just passing along half-thought-out ideas that they've heard elsewhere. 
instead, i want to help you to recognize these lines when you encounter them -- and equip you to argue back against them when the situation calls for it.

* "we've lost so much of our privacy anyway." this line plays upon the dire rhetoric of privacy campaigners and somehow turns it on its head: we've already lost our privacy, so further steps to protect it are futile. i hear this a lot from technical people when i recommend that they employ privacy protections in their newly designed systems. it's important to spread the word about the routine invasions of our privacy, but it's also important to remind everyone of how much privacy we have left to lose. you can still drive pretty much anywhere you like without leaving records behind. you can still pay for most things in cash. hardly anyone has to report their sexual activities to anyone else -- or whether you eat fattening foods, or who your friends are, or your religion. you don't need an internal passport to travel in most countries, and so you don't have to register your movements. if you live in the united states then you enjoy a fair amount of protection under legislation such as the fair credit reporting act and the electronic communications privacy act. we can lose these things, and we *will* lose them, unless we ensure that each new generation of technology has the privacy protections it needs.

* "privacy is an obsolete victorian hang-up." the basic idea is that we'll soon lose all control over our personal information, and after some hand-wringing we'll just get used to it. protecting our personal information is equated with prudishness, obsessional modesty, cultural embarrassment, and unliberated secrecy. people who believe such things are, in my experience, invariably either ignorant of or in denial about the realities of social oppression. let's send them to live in a place where everybody knows everything about you for a while.
there's a world of difference between being voluntarily "open", on one's own terms, about one's liberated sexuality and experiencing mandatory invasion and publicity of the less happy details of one's sexual life. the same thing goes for your phone records, where you've been driving, what you ate for dinner, and a great deal else. * "ideas about privacy are culturally specific and it is thus impossible to define privacy in the law without bias." this argument is found often in the american legal literature, principally among people whose political commitments would not otherwise dispose them to heights of cultural sensitivity. it is true that certain ideas about privacy are culturally specific -- oscar gandy, for example, reports that african-americans find unsolicited telemarketing calls to be less invasive than do their fellow citizens of european descent. but this sort of argument quickly turns obnoxious as the issues become more serious. amnesty international is not based on any sort of relativism about torture, and neither should privacy international be overly impressed by governments claiming that their culture is compatible with the universal tracking of citizens, or that objections to such things represent cultural bias. the argument is especially specious in relation to tort law, the area where it is most commonly made, since tort law arises in large part through the rational reconstruction of the decisions of juries in particular cases. if you throw out concepts of privacy on such grounds then you must also throw out concepts like contract as well. * "we have strong security on our data." in my experience, this argument is common even among people who regard themselves as privacy activists. it arises through a widespread confusion between privacy and security. privacy and security are very different things. informational privacy means that i get to control my personal information.
data security means that *someone else* in an organization somewhere gets to control my personal information by, among other things, withholding access from those outside the organization. of course, this organization may have my best interests in mind, and may even seek my approval before doing anything unusual with my information. the problem arises when the organization itself wants to invade my privacy, for example by making secondary uses of information about its transactions with me. those secondary uses of the data can be as secure as you like, but they are still invasions of my privacy. * "national identity cards protect privacy by improving authentication and data security." it might indeed be argued that my privacy is not protected if individuals in a society don't have enough of a standardized institutional identity to authenticate themselves when they make claims on organizations (for example, when buying on credit). but the holes in current mechanisms for officially conferring identity can be patched to a major extent without resorting to universal identification cards. state departments of motor vehicles in the united states, for example, need to institute much better policies at one of the notorious weak points in the system, namely the issuance of replacement drivers' licenses. it would accomplish a lot, i think, simply to mail out a letter about the new license to all known addresses of the legitimate license holder. * "informational privacy can be protected by converting it into a property right." this one has suddenly become extremely common, as articulated for example by anne branscomb in her book "who owns information?". additionally, many people have begun to spin elaborate scenarios about the future market in personal information, in which i can withhold my personal information unless the price is right. these scenarios might hold some value for certain purposes, but they have little to do with protecting informational privacy. 
the crucial issue is bargaining power. the organizations that gobble your personal information today have computer systems that, by their very design, profoundly presuppose that the organization will capture information about you and store it under a unique identifier. they mostly capture this information with impunity because you can do little to stop them. if your personal information were suddenly redefined by the law as personal property tomorrow, assuming that the lawyers figured out what this idea even *means*, then i predict that, the day after tomorrow, every adherence contract (that's legalese for "take it or leave it", the prototype being those preprinted contracts for credit cards and rental cars and mortgages that are covered with fine print that the firm's local representative has no authority to modify or delete) in the affected jurisdiction would suddenly sprout a new clause issuing to the organization an unrestricted license (or some such legal entity) over the use of your personal information. you can refuse, of course, but you'll be in precisely the same position that you are today: take it or leave it. the widespread belief to the contrary reflects a downright magical belief in the efficacy of property rights. establishing property rights in your personal information might actually be a good idea, but it's not nearly sufficient. what's really needed is machinery that establishes parity of bargaining power between individuals and organizations -- the informational equivalent of unions or cooperatives that can bargain as a unit for better terms with large organizations. that machinery most likely doesn't need property rights to be defined over personal information, but maybe it would make things clearer. that's the only real argument i can find for the idea, and it's not a very strong one. * "we have to balance privacy against industry concerns." this is probably the weakest of these arguments. 
it is also probably the most common in administrative hearings at the federal communications commission and the like. it reflects a situation in which a bureaucrat is faced with privacy activists on one side and industry lobbyists on the other side, and so they are forced to construct the notion of a "balance" between the two sides' arguments. the bureaucrats will profess themselves impressed by the economic benefits of the large new industry said to be in the offing. these benefits are often framed in terms of "wealth creation", without much consideration of whether this wealth will be delivered to the people from whom it was extracted. but the arguments just don't compare. privacy is an individual right, not an abstract social good. balancing privacy against profit is like balancing the admitted evils of murder against the creation of wealth through the trade in body parts for transplants. it simply does not work that way. * "privacy paranoids want to turn back the technological clock." beware any attempt to identify privacy invasion with technical progress. it is true and important that routine and rapidly expanding privacy invasion is implicit in traditional methods of computer system design, but plenty of technical design methods exist to protect privacy, especially using cryptography. this kind of argument has been used with particular force in the case of caller number id (aka caller id, or cnid). it is well known by now that cnid promises a thousand applications at the intersection between the world of telephones and the world of computers. privacy advocates are upset about cnid because industry keeps promoting rules that make it difficult for people to "block" their lines -- that is, to prevent their phone number from being sent out digitally except when they explicitly ask for it to be sent.
proponents of industry's view have gone to great lengths, though, to define things in terms of "pro-cnid" versus "anti-cnid" camps, and i have found myself that it takes great determination to stay away from this terminology. as soon as any kind of technological debate gets defined as "pro-" versus "anti-", whole layers of rhetoric start cutting in: they're luddites! but it doesn't work that way. most technologies worth having can be designed to provide inherent privacy protections -- not just data security (see above), but convenient, iron-clad mechanisms for opting out or for participating without having one's information captured and cross-indexed by a universal identifier. i'm not normally inclined to advocate technical fixes, but when it comes to information technology and privacy, i actually do think that they're the only answer that can stick. -------------------------------------------------------------------- this month's recommendations. jack a. gottschalk, crisis response: inside stories on managing image under siege, detroit: visible ink, . a book by and for pr people, a couple dozen case studies of crisis management written by the pr people who were on the front lines. some of them derive from cases, like the tylenol poisoning, where a more or less faultless company did more or less the right thing, and others record the good clean fun of corporations fighting with one another over billion-dollar court cases. but many others are represented as well, all written by people whose profession is the rationalization of egregious conduct. the chapter about the isocyanate leak at union carbide's bhopal plant is in this category -- a cornucopia of special pleading that is worth the price of the book. colin j. bennett, regulating privacy: data protection and public policy in europe and the united states, ithaca: cornell university press, . an intelligent study of the politics of privacy in several countries.
bennett is a political scientist who uses privacy as the occasion for investigating general questions of how issues get defined and negotiated within societies. his book sets standards for intellectual seriousness and scholarly rigor in research on privacy policy. democratic culture is the newsletter of teachers for a democratic culture, po box , evanston il , jkw@midway.uchicago.edu. tdc started out as a liberal academics' answer to the "political correctness" craze started by the likes of dinesh d'souza. its newsletter, though, has grown into an interesting, politically diverse, and unusually high-quality discussion of the complex realities of intellectual freedom. you can sign up for $ if you can afford it, $ if you can't, and $ if you're a student or have a low income. -------------------------------------------------------------------- company of the month. this month's company is: cdb infotek six hutton centre drive santa ana, california ( ) - ( ) - cdb infotek is one of those companies you keep hearing about that will look up all kinds of personal information about you for a fee. one large market for this information is in pre-employment background checks. their price list is fascinating. it may or may not be reassuring that a nationwide felony search is $ . consumer credit reports are $ , faa aircraft ownership searches are $ , and registered voter profiles are $ . motor vehicle ownership searches by name vary between $ and $ by state. real property ownership searches run between $ and $ per state. all manner of superior court records are available for usually $ to $ per court (e.g., san diego county divorce court searches are $ . ). the new subscriber fee is $ plus $ per month. i find it comforting that these prices are all so high. just think what the world will be like when they drop by a factor of twenty or fifty. -------------------------------------------------------------------- follow-up. 
the new south polar times, an amusing diary of life among the scientists at the south pole, is available on the web at http:// . . . /nspt/nspthomepage.html chris mays has issued a new edition of his frequently asked questions on california electronic government information. the url is http://www.cpsr.org/cpsr/states/california/cal_gov_info_faq.html you can also view the ascii text by gopher at cpsr. host=gopher.cpsr.org port= path= /cpsr/states/california/ .cal_gov_info_faq -------------------------------------------------------------------- phil agre, editor pagre@ucla.edu department of information studies university of california, los angeles + ( ) - los angeles, california - fax - usa -------------------------------------------------------------------- copyright by the editor. you may forward this issue of the network observer electronically to anyone for any non-commercial purpose. comments and suggestions are always appreciated. --------------------------------------------------------------------

august , life of a librarian searching project gutenberg at the distant reader the venerable project gutenberg is a collection of about , transcribed editions of classic literature in the public domain, mostly from the western canon. a subset of about , project gutenberg items has been cached locally, indexed, and made available through a website called the distant reader. the index is freely available for anybody, anywhere, to use. this blog posting describes how to query the index. the index is rooted in a technology called solr, a very popular indexing tool. the index supports simple searching, phrase searching, wildcard searches, fielded searching, boolean logic, and nested queries.
each of these techniques is described below:

simple searches – enter any words you desire, and you will most likely get results. in this regard, it is difficult to break the search engine.

phrase searches – enclose query terms in double-quote marks to search the query as a phrase. examples include: "tom sawyer", "little country schoolhouse", and "medieval europe".

wildcard searches – append an asterisk (*) to any non-phrase query to perform a stemming operation on the given query. for example, the query potato* will return results including the words potato and potatoes.

fielded searches – the index has many different fields. the most important include: author, title, subject, and classification. to limit a query to a specific field, prefix the query with the name of the field and a colon (:). examples include: title:mississippi, author:plato, or subject:knowledge.

boolean logic – queries can be combined with three boolean operators: ) and, ) or, or ) not. the use of and creates the intersection of two queries. the use of or creates the union of two queries. the use of not creates the negation of the second query. the boolean operators are case-sensitive. examples include: love and author:plato, love or affection, and love not war.

nested queries – boolean logic queries can be nested to return more sophisticated sets of items; nesting allows you to override the way rudimentary boolean operations get combined. use matching parentheses (()) to create nested queries. an example includes (love not war) and (justice and honor) and (classification:bx or subject:"spiritual life").

of all the different types of queries, nested queries will probably give you the most grief. because this index is a full text index on a wide variety of topics, you will probably need to exploit the query language to create truly meaningful results.
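because the index is plain solr underneath, queries like the ones above can also be composed programmatically as ordinary strings. the sketch below is a hypothetical illustration of my own, not part of the distant reader or solr itself: three tiny helper functions that build query strings following the syntax just described (note that solr's boolean operators are the uppercase forms).

```python
# Hypothetical helpers for composing Solr query strings following the
# syntax described above. They only build strings; you could paste the
# result into the search box or send it to a Solr endpoint.

def phrase(text):
    """Wrap a query in double quotes for phrase searching."""
    return f'"{text}"'

def fielded(field, value):
    """Limit a query to a field, e.g. title:mississippi."""
    return f'{field}:{value}'

def nested(*clauses, op='AND'):
    """Join clauses with a case-sensitive Boolean operator, in parentheses."""
    return '(' + f' {op} '.join(clauses) + ')'

# build: ((love NOT war) AND (classification:bx OR subject:"spiritual life"))
query = nested(
    nested('love', 'war', op='NOT'),
    nested(fielded('classification', 'bx'),
           fielded('subject', phrase('spiritual life')), op='OR'),
    op='AND')

print(query)
# ((love NOT war) AND (classification:bx OR subject:"spiritual life"))
```

the helpers make the nesting explicit, which is exactly where hand-typed queries tend to go wrong.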
by eric lease morgan at august , : pm july , life of a librarian searching cord- at the distant reader this blog posting documents the query syntax for an index of scientific journal articles called cord- . cord- is a data set of scientific journal articles on the topic of covid- . as of this writing, it includes more than , items. this data set has been harvested, pre-processed, indexed, and made available as a part of the distant reader. access to the index is freely available to anybody and everybody. the index is rooted in a technology called solr, a very popular indexing tool. the index supports simple searching, phrase searching, wildcard searches, fielded searching, boolean logic, and nested queries. each of these techniques is described below:

simple searches – enter any words you desire, and you will most likely get results. in this regard, it is difficult to break the search engine.

phrase searches – enclose query terms in double-quote marks to search the query as a phrase. examples include: "waste water", "circulating disease", and "acute respiratory syndrome".

wildcard searches – append an asterisk (*) to any non-phrase query to perform a stemming operation on the given query. for example, the query virus* will return results including the words virus and viruses.

fielded searches – the index has many different fields. the most important include: authors, title, year, journal, abstract, and keywords. to limit a query to a specific field, prefix the query with the name of the field and a colon (:). examples include: title:disease, abstract:"cardiovascular disease", or year: . of special note is the keywords field. keywords are sets of statistically significant and computer-selected terms akin to traditional library subject headings. the use of the keywords field is a very efficient way to create a small set of very relevant articles. examples include: keywords:mrna, keywords:ribosome, or keywords:china.
boolean logic – queries can be combined with three boolean operators: ) and, ) or, or ) not. the use of and creates the intersection of two queries. the use of or creates the union of two queries. the use of not creates the negation of the second query. the boolean operators are case-sensitive. examples include: covid and title:sars, abstract:cat* or abstract:dog*, and abstract:cat* not abstract:dog*.

nested queries – boolean logic queries can be nested to return more sophisticated sets of articles; nesting allows you to override the way rudimentary boolean operations get combined. use matching parentheses (()) to create nested queries. an example includes ((covid and title:sars) or abstract:cat* or abstract:dog*) not year: .

of all the different types of queries, nested queries will probably give you the most grief. by eric lease morgan at july , : pm july , life of a librarian distant reader workshop hands-on activities this is a small set of hands-on activities presented for the keystone digital humanities annual meeting. the intent of the activities is to familiarize participants with the use and creation of distant reader study carrels. this page is also available as a pdf file designed for printing. introduction the distant reader is a tool for reading. given an almost arbitrary amount of unstructured data (text), the reader creates a corpus, applies text mining against the corpus, and returns a structured data set amenable to analysis ("reading") by students, researchers, scholars, and computers. the data sets created by the reader are called "study carrels". they contain a cache of the original input, plain text versions of the same, many different tab-delimited files enumerating textual features, a relational database file, and a number of narrative reports summarizing the whole. given this set of information, it is easy to answer all sorts of questions that would have previously been very time consuming to address.
many of these questions are akin to newspaper reporter questions: who, what, when, where, how, and how many. using more sophisticated techniques, the reader can help you elucidate a corpus's aboutness, plot themes over authors and time, create maps, create timelines, or even answer sublime questions such as, "what are some definitions of love, and how did the writings of st. augustine and jean-jacques rousseau compare to those definitions?" the distant reader and its library of study carrels are located at: https://distantreader.org https://distantreader.org/catalog

activity # : compare & contrast two study carrels these tasks introduce you to the nature of study carrels: from the library, identify two study carrels of interest, and call them carrel a and carrel b. don't think too hard about your selections. read carrel a, and answer the following three questions: ) how many items are in the carrel, ) if you were to describe the content of the carrel in one sentence, then what might that sentence be, and ) what are some of the carrel's bigrams that you find interesting and why. read carrel b, and answer the same three questions. answer the question, "how are carrels a and b similar and different?"

activity # : become familiar with the content of a study carrel these tasks stress the structured and consistent nature of study carrels: download and uncompress both carrel a and carrel b. count the number of items (files and directories) at the root of carrel a. count the number of items (files and directories) at the root of carrel b. answer the question, "what is the difference between the two counts?". what can you infer from the answer? open any of the items in the directory/folder named "cache", and all of the files there ought to be exact duplicates of the original inputs, even if they are html documents. in this way, the reader implements aspects of preservation.
a la lockss, "lots of copies keep stuff safe." from the cache directory, identify an item of interest; pick any document-like file, and don't think too hard about your selection. given the name of the file from the previous step, open the similarly named file located in the folder/directory named "txt", and you ought to see a plain text version of the original file. the reader uses these plain text files as input for its text mining processes. given the name of the file from the previous step, use your favorite spreadsheet program to open the similarly named file but located in the folder/directory named "pos". all files in the pos directory are tab-delimited files, and they can be opened in your spreadsheet program. i promise. once opened, you ought to see a list of each and every token ("word") found in the original document as well as the tokens' lemma and part-of-speech values. given this type of information, what sorts of questions do you think you can answer? open the file named "manifest.htm" found at the root of the study carrel, and once opened you will see an enumeration and description of all the folders/files in any given carrel. what types of files exist in a carrel, and what sorts of questions can you address if given such files?

activity # : create study carrels anybody can create study carrels; there are many ways to do so, and here are two: go to https://distantreader.org/create/url carrel, and you may need to go through orcid authentication along the way. give your carrel a one-word name. enter a url of your choosing. your home page, your institutional home page, or the home page of a wikipedia article are good candidates. click the create button, and the reader will begin to do its work. create an empty folder/directory on your computer. identify three or four pdf files on your computer, and copy them to the newly created directory. compress (zip) the directory.
go to https://distantreader.org/create/zip carrel, and you may need to go through orcid authentication along the way. give your carrel a different one-word name. select the .zip file you just created. click the create button, and the reader will begin to do its work. wait patiently, and along the way the reader will inform you of its progress. depending on many factors, your carrels will be completed in as little as two minutes or as long as an hour. finally, repeat activities # and # with your newly created study carrels.

extra credit activities the following activities outline how to use a number of cross-platform desktop/gui applications to read study carrels: print any document found in the cache directory and use the traditional reading process to… read it. consider using an active reading process by annotating passages with your pen or pencil. download wordle from the wayback machine, a fine visualization tool. open any document found in the txt directory, and copy all of its content to the clipboard. open wordle, paste in the text, and create a tag cloud. download antconc, a cross-platform concordance application. use antconc to open one or more files found in the txt directory, and then use antconc to find snippets of text containing the bigrams identified in activity # . to increase precision, configure antconc to use the stopword list found in any carrel at etc/stopwords.txt. download openrefine, a robust data cleaning and analysis program. use openrefine to open one or more of the files in the folder/directory named "ent". (these files enumerate named-entities found in your carrel.) use openrefine to first clean the entities, and then use it to count & tabulate things like the people, places, and organizations identified in the carrel. repeat this process for any of the files found in the directories named "adr", "pos", "wrd", or "urls".

extra extra credit activities as sets of structured data, the content of study carrels can be computed against.
in other words, programs can be written in python, r, java, bash, etc. that open up study carrel files, manipulate the content in ways of your own design, and output knowledge. for example, you could open up the named entity files, select the entities of type person, look up those people in wikidata, extract their birth dates and death dates, and finally create a timeline illustrating who was mentioned in a carrel and when they lived. the same thing could be done for entities of type gpe (place), and a map could be output. a fledgling set of jupyter notebooks and command-line tools has been created just for these sorts of purposes, and you can find them on github: https://github.com/ericleasemorgan/reader-notebooks https://github.com/ericleasemorgan/reader-toolbox-classic/ https://github.com/ericleasemorgan/reader-toolbox every study carrel includes an sqlite relational database file (etc/reader.db). the database file includes all the information from all tab-delimited files (named-entities, parts-of-speech, keywords, bibliographics, etc.). given this database, a person can either query the database from the command-line, write a program to do so, or use gui tools like db browser for sqlite or datasette. the results of such queries can be elaborate, such as "find all keywords from documents dated less than y" or "find all documents, and output them in a given citation style." take a gander at the sql file named "etc/queries.sql" to learn how the database is structured. it will give you a head start.

summary given an almost arbitrary set of unstructured data (text), the distant reader outputs sets of structured data known as "study carrels". the content of study carrels can be consumed using the traditional reading process, through the use of any number of desktop/gui applications, or programmatically. this document outlined each of these techniques. embrace information overload. use the distant reader.
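to make the programmatic route above concrete, here is a minimal sketch of querying a carrel's database with python's sqlite3 module. the table and column names here (bib, wrd, id, keyword, date) are hypothetical stand-ins of my own; a real carrel's schema is documented in its etc/queries.sql file. the sketch builds a tiny in-memory database so it can run on its own.

```python
# Sketch: "find all keywords from documents dated less than y" against a
# study-carrel-style SQLite database. Table/column names are hypothetical;
# a real carrel would use sqlite3.connect('etc/reader.db').

import sqlite3

con = sqlite3.connect(':memory:')
cur = con.cursor()

# stand-in data: two documents and their computed keywords
cur.executescript("""
    CREATE TABLE bib (id TEXT, date INTEGER);
    CREATE TABLE wrd (id TEXT, keyword TEXT);
    INSERT INTO bib VALUES ('a', 1995), ('b', 2005);
    INSERT INTO wrd VALUES ('a', 'love'), ('b', 'war');
""")

# all keywords from documents dated before 2000
rows = cur.execute("""
    SELECT w.keyword FROM wrd AS w
    JOIN bib AS b ON b.id = w.id
    WHERE b.date < 2000
""").fetchall()

print([r[0] for r in rows])   # ['love']
```

the same query could be run from the sqlite3 command-line shell or from db browser for sqlite; the point is only that a carrel is an ordinary relational database once downloaded.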
by eric lease morgan at july , : pm july , life of a librarian distant reader embrace information overload. use the distant reader. by eric lease morgan at july , : pm may , life of a librarian ptpbio and the reader [the following missive was written as an email message to a former colleague, and it is a gentle introduction to distant reader "study carrels". –elm] on another note, i see you help edit a journal (ptpbio), and i used it as a case-study for a thing i call the distant reader. the distant reader takes an arbitrary amount of text as input, does text mining and natural language processing against it, saves the result as a set of structured data, writes a few reports, and packages the whole thing into a zip file. the zip file is really a data set, and distant reader data sets are affectionately called "study carrels". i took the liberty of applying the reader process to ptpbio, and the result has manifested itself in a number of ways. let me enumerate them. first, there is the cache of the original content: https://library.distantreader.org/backroom/journal-ptpbio/cache/ next, there are plain text versions of the cached items. these files are used for text mining, etc.: https://library.distantreader.org/backroom/journal-ptpbio/txt/ the reader does many different things against the plain text. for example, the reader enumerates and describes each and every token ("word") in each and every document. the descriptions include the word, its lemma, its part-of-speech, and its location in the corpus.
these descriptions are stored in tab-delimited files easily importable into your favorite spreadsheet or database program: https://library.distantreader.org/backroom/journal-ptpbio/pos/ similar sets of files are created for named entities, urls, email addresses, and statistically significant keywords: https://library.distantreader.org/backroom/journal-ptpbio/ent/ https://library.distantreader.org/backroom/journal-ptpbio/adr/ https://library.distantreader.org/backroom/journal-ptpbio/urls/ https://library.distantreader.org/backroom/journal-ptpbio/wrd/ all of this data is distilled into a (sqlite) database file, and various reports are run against the database. for example, a very simple and rudimentary report as well as a more verbose html report: https://library.distantreader.org/backroom/journal-ptpbio/standard-output.txt https://library.distantreader.org/backroom/journal-ptpbio/index.htm all of this data is stored in a single directory: https://library.distantreader.org/backroom/journal-ptpbio/ finally, the whole thing is zipped up and available for downloading. what is cool about the download is that it is every bit as functional on your desktop as it is on the 'net. study carrels do not require the 'net to be operational; they are manifested as plain text files, are stand-alone items, and will endure the test of time: https://library.distantreader.org/backroom/journal-ptpbio/study-carrel.zip "but wait. there's more!" it is not possible for me to create a web-based interface empowering students, researchers, or scholars to answer any given research question. there are too many questions. on the other hand, since the study carrels are "structured", one can write more sophisticated applications against the data. that is what the reader toolbox and reader (jupyter) notebooks are for.
using the toolbox and/or the notebooks the student, researcher, or scholar can do all sorts of things:

download carrels from the reader's library
extract ngrams
do concordancing
do topic modeling
create a full text index
output all sentences containing a given word
find all people, use the 'net to get birth dates and death dates, and create a timeline
find all places, use the 'net to get locations, and plot a map
articulate an "interesting" idea, and illustrate how that idea ebbed & flowed over time
play hangman, do a cross-word puzzle, or play a hidden-word search game

finally, the reader is by no means perfect. "software is never done. if it were, then it would be called 'hardware'." ironically though, the hard part about the reader is not interpreting the result. the hard part is two other things. first, in order to use the reader effectively, a person needs to have a (research) question in mind. the question can be as simple as "what are the characteristics of the given corpus?" or as sublime as "how does st. augustine define love, and how does his definition differ from rousseau's?" just as difficult is the creation of the corpus to begin with. for example, i needed to get just the pdf versions of your journal, but the website (understandably) is covered with about pages, navigation pages, etc. listing the urls of the pdf files was not difficult, but it was a bit tedious. again, that is not your fault. in fact, your site was (relatively) easy. some places seem to make it impossible to get to the content. (sometimes i think the internet is really one huge advertisement.) okay. that was plenty! your journal was a good use-case. thank you for the fodder. oh, by the way, the reader is located at https://distantreader.org, and it is available for use by anybody in the world.
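as a taste of the kind of processing behind the toolbox capabilities listed above, here is a minimal, dependency-free sketch of ngram extraction. it is my own illustration of the general technique, not code from the reader toolbox itself; a real run would read a document from a carrel's txt directory rather than an inline string.

```python
# Sketch of ngram extraction: tokenize a plain text string and count
# its bigrams. In practice the text would come from a carrel's txt/ files.

from collections import Counter

def ngrams(tokens, n=2):
    """Yield successive n-token tuples from a list of tokens."""
    return zip(*(tokens[i:] for i in range(n)))

text = 'the reader reads and the reader computes'
tokens = text.lower().split()
bigrams = Counter(ngrams(tokens))

print(bigrams.most_common(1))   # [(('the', 'reader'), 2)]
```

counting bigrams like this is also exactly what underlies the "interesting bigrams" question in the workshop activities: the most frequent pairs are usually a quick clue to what a carrel is about.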
by eric lease morgan at may , : pm date created: - - date updated: - - url: http://infomotions.com/ equinox open library initiative about our team newsroom events history ethics disclosures products evergreen koha fulfillment coral subjectsplus services consulting workflow and advanced ils consultation data services web design it consultation migration development hosting & support training & education learn equinoxedu tips & tricks conference presentations resource library collaborate communities partnerships grants we provide connect sales support donate contact us equinox provides innovative open source software for libraries of all types. extraordinary service. exceptional value. as a (c)( ) nonprofit corporation, equinox supports library automation by investing in open source software and providing technology services for libraries. products services ask us how » about equinox » news & events press release center for khmer studies goes live on koha ils learn more » press release vermont jazz center goes live on koha ils learn more » press release equinox open library initiative presents “developing open source tools to support libraries during covid- ” at the ala annual conference learn more » products & services koha is the first free and open source library automation package. equinox’s team includes some of koha’s core developers. learn more evergreen is a unique and powerful open source ils designed to support large, dispersed, and multi-tiered library networks. learn more equinox provides ongoing educational opportunities through equinoxedu, including live webinars, workshops, and online resources. learn more fulfillment is an open source interlibrary loan management system. fulfillment can be used alongside or in connection with any integrated library system. learn more coral is an open source electronic resources management system. its interoperable modules allow libraries to streamline their management of electronic resources. learn more customized for your library consulting migration development hosting & support training & education why choose equinox? 
equinox is different from most ils providers. as a non-profit organization, our guiding principle is to provide a transparent, open software development process, and we release all code developed to publicly available repositories. equinox is experienced with serving libraries of all types in the united states and internationally. we’ve supported and migrated libraries of all sizes, from single library sites to full statewide implementations. equinox is technically proficient, with skilled project managers, software developers, and data services staff ready to assist you. we’ve helped libraries automating for the first time and those migrating from legacy ils systems. equinox knows libraries. more than fifty percent of our team are professional librarians with direct experience working in academic, government, public, and special libraries. we understand the context and ecosystem of library software. working with equinox has been like night and day. it's amazing to have a system so accessible to our patrons and easy to use. it has super-charged our library lending power! brooke matson, executive director, spark central equinox open library initiative hosts evergreen for the sclends library consortium. their technical support has been prompt, responsive, and professional in reacting to our support requests during covid- . they have been a valuable consortium partner in meeting the needs of the member libraries and their patrons. chris yates, south carolina state library working with equinox was great! they were able to migrate our entire consortium with no down time during working hours. the equinox team went the extra mile in helping missouri evergreen. 
colleen knight, missouri evergreen events open source twitter chat with guest moderator xiaoyan song #chatopens event august , ( - pm et). join us on twitter with the hashtag #chatopens as we discuss coral erm and electronic resource management with guest moderator xiaoyan song, electronic resources librarian read more equinoxedu: spotlight on coral erm event august , ( - pm et). say goodbye to your spreadsheets and hello to an open source solution for managing electronic resources! 
join us to learn how coral erm can help read more open source twitter chat with andrea buntz neiman #chatopens event july , ( - pm et). join us on twitter with the hashtag #chatopens as we discuss fulfillment with development project manager, andrea buntz neiman. read more equinox open library initiative equinox open library initiative inc. is a (c) corporation devoted to the support of open source software for public libraries, academic libraries, school libraries, and special libraries. as the successor to equinox software, inc., equinox provides exceptional service and technical expertise delivered by experienced librarians and technical staff. equinox offers affordable, customized consulting services, software development, hosting, training, and technology support for libraries of all sizes and types. contact us info@equinoxoli.org po box norcross, ga copyright © – equinox open library initiative. all rights reserved. privacy policy | terms of use | equinox library services canada | site map ted lawless i'm ted lawless, an application developer based in ann arbor, mi working in higher education. i post brief articles or technical notes from time to time about working with metadata, web apis and data management tools. see the list below. i've also compiled a list of presentations and projects that i've been involved with. if any of this is of interest to you, please feel free to contact me via email (lawlesst at gmail), github, linkedin, or twitter. 
posts automatically extracting keyphrases from text - - datasette hosting costs - - connecting python's rdflib to aws neptune - - usable sample researcher profile data - - exploring years of the new yorker fiction podcast with wikidata - - now publishing complete lahman baseball database with datasette - - publishing the lahman baseball database with datasette - - sparql to pandas dataframes - - querying wikidata to identify globally famous baseball players - - python etl and json-ld - - see a full list of posts or the rss feed. ted lawless, lawlesst at gmail github linkedin twitter ptsefton.com toggle navigation ptsefton.com home cv archives - - : fiir data management; findable inaccessible interoperable and reusable? - - : arkisto: a repository based platform for managing all kinds of research data - - : research object crate (ro-crate) update - - : infrastructure and what do we really want for dmps - - : what did you do in the lockdowns pt? part - music videos - - : fair data management; it's a lifestyle not a lifecycle - - : research data management looking outward from it - - : redundant. - - : an open, composable standards–based research eresearch platform: arkisto - - : you won't believe this shocking semantic web trick i use to avoid publishing my own ontologies! will i end up going to hell for this? - - : eresearch australasia trip report - - : fair simple scalable static research data repository looking for more? see the archive. categories arkisto platform data packaging standards datacrate datacrate, repositories, eresearch eresearch file data capture housekeeping how to jiscpub misc music repositories research data management scholarlyhtml word processing links work play twitter: @ptsefton photos © peter (petie) sefton · powered by pelican-bootstrap , pelican, bootstrap core news find community. share expertise. 
enhance library careers. home about become a blog contributor core calendar get involved posts new core jobs: august , posted on august , august , leave a comment on new core jobs: august , browse new job openings on the core jobs site outreach & education librarian, university of maryland, baltimore – health sciences & human services library, baltimore, md data management librarian, university of maryland, baltimore – health sciences & human services library, baltimore, md technical services coordinator (pdf), northeastern illinois university, chicago, il associate university librarian for digital strategies, university at buffalo, buffalo, ny associate university librarian for research, collections & outreach, university at buffalo, buffalo, ny cataloger & metadata librarian, charleston county public library, charleston, sc visit the core jobs site for additional job openings and information on submitting your own job posting. register for a forum preconference! posted on august , august , leave a comment on register for a forum preconference! preconference registration is open for core forum! register for an all-day library buildings tour or an afternoon preconference offered on thursday, october , in baltimore, maryland. we create inclusive experiences! library buildings tour ( am– pm) choose your own adventure scholarly communication assessment rubrics ( – pm) presented by suzanna conrad, emily chan, daina dickman and nicole lawson crisis communication/message dissemination for libraries ( – pm) presented by gregg dodd, christine feldmann and meghan mccorkell evidence based practice in libraries ( – pm) presented by amanda click, claire walker wiley and meggan houlihan new connections in the rda toolkit ( pm– pm) presented by stephen hearn, robert maxwell and kathy glennan if you’ve already registered for forum and would like to add a preconference to your existing registration, please contact us. 
the forum is our inaugural conference for core and is ala’s first return to in-person events. it will bring together decision-makers and practitioners from the ala division that focuses on: access & equity assessment…continue reading new core jobs: august , posted on august , august , leave a comment on new core jobs: august , browse new job openings on the core jobs site community engagement and economic development services manager, seattle public library, wa electronic resources librarian, miami university libraries, oxford, oh systems coordinator, prairiecat (library consortium), coal valley or bolingbrook, il digital archivist, rice university, fondren library, houston, tx instruction librarian, radford university, mcconnell library, radford, va librarian i/ii – technology education librarian, virginia beach public library, va it programmer analyst, sonoma county library, rohnert park, ca visit the core jobs site for additional job openings and information on submitting your own job posting. interest group week recordings available posted on august , august , leave a comment on interest group week recordings available we had another successful interest group week last month with more than , registrations. a big thank you 💖 to all the chairs, moderators, and speakers, who put together great presentations and discussions! we’ve added links to the recordings on the ig week page, and you can find links to the slides in the interest groups that participated. summary, june core e-forum, “does better training lead to greater job satisfaction?” posted on july , july , in the june core e-forum, participants were asked to discuss the relationship between job training and job satisfaction. the discussion also addressed ways to organize a successful training program and using instructional design methods to improve technical services training. 
the first day’s discussion revolved around more general questions about the nature of on-the-job training and participants’ own roles in it. the purpose of the second day was to address more specific topics of training structure and organization, and to examine the link between successful training and job satisfaction. it was obvious from the lively discussion on day that we all are almost always engaged in on-the-job training and feel strongly about its nature and organization. themes that consistently emerged were cross-training, documentation, continuity, and understanding the reasons for changes while also preserving institutional memory. the following questions were asked and answered many times…continue reading new core jobs: july , posted on july , july , browse new job openings on the core jobs site cataloging/metadata librarian, lehigh university, bethlehem, pa acquisitions librarian, marquette university libraries, milwaukee, wi cataloging and metadata librarian, marquette university libraries, milwaukee, wi university librarian, capilano university, north vancouver, bc librarian, slac national accelerator laboratory, menlo park, ca user experience and digital projects librarian, texas a&m university libraries, college station, tx branch manager ii, mid-columbia libraries, kennewick, wa head of library systems, virginia tech university libraries, blacksburg, va head of support desk operations, virginia tech university libraries, blacksburg, va cloud infrastructure engineer, massachusetts institute of technology libraries, cambridge, ma senior electronic resources librarian, rice university, fondren library, houston, tx visit the core jobs site for additional job openings and information on submitting your own job posting. new version of cataloging correctly for kids posted on july , july , cataloging library materials for children in the internet age has never been as challenging or as important. 
rda: resource description and access is now the descriptive standard, there are new ways to find materials using classifications, and subject heading access has been greatly enhanced by the keyword capabilities of today’s online catalogs. it’s the perfect moment to present a completely overhauled edition of this acclaimed bestseller. this new sixth edition guides catalogers, children’s librarians, and lis students in taking an effective approach towards materials intended for children and young adults. informed by recent studies of how children search, this handbook’s top-to-bottom revisions address areas such as: how rda applies to a variety of children’s materials, with examples provided;   authority control, bibliographic description, subject access, and linked data; electronic resources and other non-book materials; and cataloging for non-english-speaking and preliterate children. with advice contributed by experienced, practicing librarians, this one-stop…continue reading summary, may core e-forum, “advocacy for implementing faceted vocabularies” posted on july , july , during the may , core eforum, ‘we faceted our seatbelts, now what? advocacy for implementing faceted vocabularies in public facing interfaces,’ we hoped to open a wide discussion on implementing faceted vocabs, perceptions of how those headings should be used or displayed, and what making use of these vocabularies can accomplish for institutions and collections metadata. there were some, albeit unintentional, thematic developments over the course of the days, and we hope this report offers some glimpses of that development. for instance, we posed the question of whether inclusion of faceted terminology was perceived as duplication of lcsh terms present in the display for a resource’s metadata. many feel there is duplication, and many are simply open to considering the topic. 
one interesting point was that even though a term may look the same to the user, a term coming from a faceted vocabulary is actually different. but…continue reading new core jobs: july , posted on july , july , browse new job openings on the core jobs site acquisitions and electronic resources librarian, colgate university, hamilton, ny director of public library, town of needham, ma librarian ii ~ eresources & discovery librarian, university of maryland, baltimore county, baltimore, md head, scholarly communication & data services, unlv university libraries, university of nevada, las vegas, nv it library portfolio manager, multnomah county, portland, or ils manager, charleston county public library, charleston, sc manuscripts archivist, ohio university libraries, athens, oh development officer, charleston county public library, charleston, sc visit the core jobs site for additional job openings and information on submitting your own job posting. john cotton dana awards announced posted on july , august , ~ eight libraries awarded $ , grants from h.w. wilson foundation ~ the john cotton dana (jcd) award winners, recognized for their strategic communications efforts, have been selected. the john cotton dana awards provide up to eight grants for libraries that demonstrate outstanding library public relations. the award is managed by the american library association’s core division and consists of $ , grants from the h.w. wilson foundation. the highlighted campaigns feature a wide variety of strategies, including: civic engagement programming, a virtual story time with more than million views, a virtual open house to celebrate a three-year renovation, and an awareness media campaign to highlight pandemic services. 
other winning campaigns include the launch of a local artist music streaming site that had to retool mid-campaign as libraries closed, a park & connect internet access campaign, a census campaign that increased the county’s self-response rate, and a library…continue reading the official blog for core: leadership, infrastructure, futures, a division of ala. copyright © american library association. 
all rights reserved. d-lib magazine issn: - | https://doi.org/ . /dlib.magazine d-lib magazine suspended publication of new issues in july . corporation for national research initiatives will continue to maintain the d-lib magazine archive, however, suggestions for long-term archiving are welcome, as are thoughts from the community on any alternate usage of the d-lib brand that would benefit the research community that has been served by d-lib's virtual pages over the last two decades. 
send suggestions to dlib@dlib.org. d-lib magazine was produced by corporation for national research initiatives. prior to april , the magazine was sponsored by the defense advanced research projects agency (darpa) on behalf of the digital libraries initiative, and by the national science foundation (nsf). contributions by subscribers to the d-lib alliance provided financial support for the continued open access publication of d-lib magazine; in particular, d-lib thanks crossref and the hesburgh libraries at the university of notre dame for their long-time membership in the d-lib alliance. futurearch, or the future of archives... a place for thoughts on hybrid archives and manuscripts at the bodleian library. this blog is no longer being updated. coyle's information comments on the digital age, which, as we all know, is .
thursday, august , phil agre and the gendered internet there is an article today in the washington post about the odd disappearance of a computer science professor named phil agre. the article, entitled "he predicted the dark side of the internet years ago. why did no one listen?", reminded me of a post that agre wrote after a meeting of computer professionals for social responsibility. although it annoyed me at the time, a talk that i gave there triggered in him thoughts of gender issues; as a woman i was very much in the minority at the meeting, but that was not the topic of my talk. my talk also gave agre thoughts about the missing humanity on the web. i had a couple of primary concerns, perhaps not perfectly laid out, in my talk, "access, not just wires." i was concerned about what was driving the development of the internet and the lack of a service ethos regarding society. access at the time was talked about in terms of routers, modems, and t-1 lines. there was no thought to organizing or preserving online information. there was no concept of "equal access". there was no thought to how we would democratize the web so that you didn't need a degree in computer science to find what you needed. i was also very concerned about the commercialization of information. i was frustrated watching the hype as information was touted as the product of the information age. (this was before we learned that in this environment "you are the product, not the user".) seen from the tattered clothes and barefoot world of libraries, the money thrown at the jumble of un-curated and unorganized "information" on the web was heartbreaking. i said: "it's clear to me that the information highway isn't much about information. it's about trying to find a new basis for our economy. i'm pretty sure i'm not going to like the way information is treated in that economy. we know what kind of information sells, and what doesn't.
so i see our future as being a mix of highly expensive economic reports and cheap online versions of the national inquirer. not a pretty picture." - kcoyle in access, not just wires  little did i know how bad it would get. like many or most people, agre heard "libraries" and thought "female." but at least this caused him to think, earlier than many, about how our metaphors for the internet were inherently gendered. "discussing her speech with another cpsr activist ... later that evening, i suddenly connected several things that had been bothering me about the language and practice of the internet. the result was a partial answer to the difficult question, in what sense is the net "gendered"?" -  agre, tno, october this led agre to think about how we spoke then about the internet, which was mainly as an activity of "exploring." that metaphor is still alive with microsoft's internet explorer, but was also the message behind the main web browser software of the time, netscape navigator. he suddenly saw how "explore" was a highly gendered activity: "yet for many people, "exploring" is close to defining the experience of the net. it is clearly a gendered metaphor: it has historically been a male activity, and it comes down to us saturated with a long list of meanings related to things like colonial expansion, experiences of otherness, and scientific discovery. explorers often die, and often fail, and the ones that do neither are heroes and role models. this whole complex of meanings and feelings and strivings is going to appeal to those who have been acculturated into a particular male-marked system of meanings, and it is not going to offer a great deal of meaning to anyone who has not. the use of prestigious artifacts like computers is inevitably tied up with the construction of personal identity, and "exploration" tools offer a great deal more traction in this process to historically male cultural norms than to female ones." 
- agre, tno, october he decried the lack of social relationships on the internet, saying that although you know that other people are there, you cannot see them. "why does the space you "explore" in gopher or mosaic look empty even when it's full of other people?" - agre, tno, october none of us knew at the time that in the future some people would experience the internet entirely and exclusively as full of other people, in the form of facebook, twitter, and all of the other sites that grew out of the embryos of bulletin board systems, the well, and aol. we feared that the future internet would not have the even-handedness of libraries, but never anticipated that russian bots and qanon promoters would reign over what had once been a network for the exchange of scientific information. it hurts now to read through agre's post arguing for a more library-like online information system, because it is pretty clear that we blew through that possibility even before the meeting and were already taking the first steps toward where we are today. agre walked away from his position at ucla and has not resurfaced, although there have been reports at times (albeit not recently) that he is okay. looking back, it should not surprise us that someone with so much hope for an online civil society should have become discouraged enough to leave it behind. agre was hoping for reference services and an internet populated with users who had "...the skills of composing clear texts, reading with an awareness of different possible interpretations, recognizing and resolving conflicts, asking for help without feeling powerless, organizing people to get things done, and embracing the diversity of the backgrounds and experiences of others." - agre, tno, october oh, what a world that would be! posted by karen coyle at : pm comments: labels: internet, women and technology monday, march , digitization wars, redux (nb: ianal) because this is long, you can download it as a pdf here.
from to the book world (authors, publishers, libraries, and booksellers) was involved in the complex and legally fraught activities around google's book digitization project. under the name "google book search," the company claimed that it was digitizing books to be able to provide search services across the print corpus, much as it provides search capabilities over texts and other media that are hosted throughout the internet. both the us authors guild and the association of american publishers sued google (both separately and together) for violation of copyright. these suits took a number of turns, including proposals for settlements that were arcane in their complexity and that ultimately failed. finally, in the legal question was decided: digitizing to create an index is fair use as long as only minor portions of the original text are shown to users in the form of context-specific snippets. we now have another question about book digitization: can books be digitized for the purpose of substituting remote lending in the place of the lending of a physical copy? this has been referred to as "controlled digital lending (cdl)," a term developed by the internet archive for its online book lending services. the archive has considerable experience with both digitization and providing online access to materials in various formats, and its open library site has been providing digital downloads of out-of-copyright books for more than a decade. controlled digital lending applies solely to works that are presumed to be in copyright. controlled digital lending works like this: the archive obtains and retains a physical copy of a book. the book is digitized and added to the open library catalog of works. users can borrow the book for a limited time ( weeks) after which the book "returns" to the open library.
while the book is checked out to a user no other user can borrow that "copy." the digital copy is linked one-to-one with a physical copy, so if more than one copy of the physical book is owned then there is one digital loan available for each physical copy. the archive is not alone in experimenting with lending of digitized copies: some libraries have partnered with the archive's digitization and lending service to provide digital lending for library-owned materials. in the case of the archive the physical books are not available for lending. physical libraries that are experimenting with cdl face the added step of making sure that the physical book is removed from circulation while the digitized book is on loan, and reversing that on return of the digital book. although cdl has an air of legality due to limiting lending to one user at a time, authors' and publishers' associations had raised objections to the practice. [nwu] however, in march of the archive took a daring step that pushed its version of cdl into litigation: using the closing of many physical libraries due to the covid pandemic as its rationale, the archive renamed its lending service the national emergency library [nel] and eliminated the one-to-one link between physical and digital copies. ironically this meant that the archive was then actually doing what the book industry had accused it of (either out of misunderstanding or as an exaggeration of the threat posed): it was making and lending digital copies beyond its physical holdings. the archive stated that the national emergency library would last only until june of , presumably because by then the covid danger would have passed and libraries would have re-opened. in june the archive's book lending service returned to the one-to-one model. also in june a suit was filed by four publishers (hachette, harpercollins, penguin random house, and wiley) in the us district court of the southern district of new york.
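the one-to-one model described above can be sketched in a few lines of code. this is a minimal illustration of the invariant (at most one concurrent digital loan per owned physical copy), not the archive's actual software; the two-week loan period and the in-memory bookkeeping are assumptions made for the example.

```python
# a minimal sketch of the "controlled" part of controlled digital lending:
# at most one concurrent digital loan per owned physical copy, with loans
# expiring after a fixed period. the two-week period and in-memory state
# are illustrative assumptions, not the archive's actual implementation.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

LOAN_PERIOD = timedelta(weeks=2)  # assumed loan length for this sketch

@dataclass
class Title:
    physical_copies: int                        # copies held out of circulation
    loans: dict = field(default_factory=dict)   # borrower -> due date

    def _expire(self, now: datetime) -> None:
        # loans past their due date automatically "return"
        self.loans = {u: due for u, due in self.loans.items() if due > now}

    def borrow(self, user: str, now: datetime) -> bool:
        self._expire(now)
        # refuse when every digital "copy" is already checked out
        if user in self.loans or len(self.loans) >= self.physical_copies:
            return False
        self.loans[user] = now + LOAN_PERIOD
        return True

book = Title(physical_copies=1)
t0 = datetime(2020, 3, 1)
assert book.borrow("alice", t0)               # first loan succeeds
assert not book.borrow("bob", t0)             # concurrent second loan refused
assert book.borrow("bob", t0 + LOAN_PERIOD)   # after expiry the copy frees up
```

in terms of this sketch, the national emergency library's change amounted to dropping the check against `physical_copies`, which is exactly the one-to-one link the archive eliminated and later restored.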
[suit]  controlled digital lending, like the google books project, raises many interesting questions about the nature of "digital vs physical," not only in a legal sense but in the sense of what it means to read and to be a reader today. the lawsuit does nothing to further our understanding of this fascinating question; it sinks immediately into hyperbole, fear-mongering, and either mis-information or mis-direction. that is, admittedly, the nature of a lawsuit. what follows here is not a full analysis, but a few of the questions that are foremost in my mind.  apples and oranges   each of the players in this drama has admirable reasons for their actions. the publishers explain in their suit that they are acting in support of authors, in particular to protect the income of authors so that they may continue to write. the authors guild provides some data on author income, and by its estimate the average full-time author earns less than $ , per year, putting them at poverty level. [aghard] (if that average includes the earnings of highly paid best-selling authors, then the actual earnings of many authors are quite a bit less than that.)  the internet archive is motivated to provide democratic access to the content of books to anyone who needs or wants it. even before the pandemic caused many libraries to close, the collection housed at the archive contained some works that are available only in a few research libraries. this is because many of the books were digitized during the google books project, which digitized books from a small number of very large research libraries whose collections differ significantly from those of the public libraries available to most citizens.  where the pronouncements of both parties fail is in making a false equivalence between some authors and all authors, and between some books and all books; the result is a lawsuit pitting apples against oranges.
we saw in the lawsuits against google that some academic authors, who may gain status based on their publications but very little if any income, did not see themselves as among those harmed by the book digitization project. notably the authors in this current suit, as listed in the bibliography of pirated books in the appendix to the lawsuit, are ones whose works would be characterized best as “popular” and “commercial,” not academic: james patterson, j. d. salinger, malcolm gladwell, toni morrison, laura ingalls wilder, and others. not only do the living authors here earn above the poverty level, all of them provide significant revenue for the publishers themselves. and all of the books listed are in print and available in the marketplace. no mention is made of out-of-print books, no academic publishers seem to be involved.  on the part of the archive, they state that their digitized books fill an educational purpose, and that their collection includes books that are not available in digital format from publishers: “ while overdrive, hoopla, and other streaming services provide patrons access to latest best sellers and popular titles,  the long tail of reading and research materials available deep within a library’s print collection are often not available through these large commercial services.  what this means is that when libraries face closures in times of crisis, patrons are left with access to only a fraction of the materials that the library holds in its collection.”[cdl-blog] this is undoubtedly true for some of the digitized books, but the main thesis of the lawsuit points out that the archive has digitized and is also lending current popular titles. the list of books included in the appendix of the lawsuit shows that there are in-copyright and most likely in-print books of a popular reading nature that have been part of the cdl. these titles are available in print and may also be available as ebooks from the publishers. 
thus while the publishers are arguing that current, popular books should not be digitized and loaned (apples), the archive is arguing that it is providing access to items not available elsewhere, and for educational purposes (oranges).  the law  the suit states that publishers are not questioning copyright law, only violations of the law. "for the avoidance of doubt, this lawsuit is not about the occasional transmission of a title under appropriately limited circumstances, nor about anything permissioned or in the public domain. on the contrary, it is about ia's purposeful collection of truckloads of in-copyright books to scan, reproduce, and then distribute digital bootleg versions online." ([suit] page ). this brings up a whole range of legal issues in regard to distributing digital copies of copyrighted works. there have been lengthy arguments about whether copyright law could permit first sale rights for digital items, and the answer has generally been no; some copyright holders have made the argument that since the transfer of a digital file is necessarily the making of a copy, there can be no first sale rights for those files. [1stsale] [ag1] some ebook systems, such as the kindle, have allowed time-limited person-to-person lending for some ebooks. this is governed by license terms between amazon and the publishers, not by the first sale rights of the analog world.  section 108 of the copyright law does allow libraries and archives to make a limited number of copies. the first point of section 108 states that libraries can make a single copy of a work as long as 1) it is not for commercial advantage, 2) the collection is open to the public, and 3) the reproduction includes the copyright notice from the original. this sounds like what the archive is doing.
however, the next two subsections (b and c) provide limitations on that first section that appear to put the archive in legal jeopardy: subsection "b" clarifies that copies may be made for preservation or security; subsection "c" states that copies can be made if the original item is deteriorating and a replacement can no longer be purchased. neither of these applies to the archive's lending.   in addition to its lending program, the archive provides downloads of scanned books in daisy format for those who are certified as visually impaired by the national library service for the blind and physically handicapped in the us. this is covered by the section of the copyright law (title 17) that allows the distribution of copyrighted works in accessible formats. this service could possibly be cited as a justification for the scanning of in-copyright works at the archive, although without mitigating the complaints about lending those copies to others. this is a laudable service of the archive if the scans are usable by the visually impaired, but the daisy-compatible files are based on the ocr'd text, which can be quite dirty. without data on downloads under this program it is hard to know the extent to which it benefits visually impaired readers.   lending  most likely as part of the strategy of the lawsuit, very little mention is made of "lending." instead the suit uses terms like "download" and "distribution," which imply that the user of the archive's service is given a permanent copy of the book: "with just a few clicks, any internet-connected user can download complete digital copies of in-copyright books from defendant." ([suit] page ). "... distributing the resulting illegal bootleg copies for free over the internet to individuals worldwide." ([suit] page ). publishers were reluctant to allow the creation of ebooks for many years, until they saw that drm would protect the digital copies.
it then was another couple of years before they could feel confident about lending - and by lending i mean lending by libraries. it appears that overdrive, the main library lending platform for ebooks, worked closely with publishers to gain their trust. the lawsuit questions whether the lending technology created by the archive can be trusted: "...plaintiffs have legitimate fears regarding the security of their works both as stored by ia on its servers..." ([suit] page ). in essence, the suit accuses ia of a lack of transparency about its lending operation. of course, any collaboration between ia and publishers around the technology is not possible because the two are entirely at odds, and the publishers would reasonably not cooperate with folks they see as engaged in piracy of their property.  even if the archive's lending technology were proven to be secure, lending alone is not the issue: the archive copied the publishers' books without permission prior to lending. in other words, it was lending content that it neither owned (in digital form) nor had licensed for digital distribution. libraries pay, and pay dearly, for the ebook lending service that they provide to their users. the restrictions on ebooks may seem to be a money-grab on the part of publishers, but from their point of view it is a revenue stream that cdl threatens.  is it about the money? "... ia rakes in money from its infringing services…" ([suit] page ). (note: publishers earn, ia "rakes in".) "moreover, while defendant promotes its non-profit status, it is in fact a highly commercial enterprise with millions of dollars of annual revenues, including financial schemes that provide funding for ia's infringing activities." ([suit] page ). these arguments directly address title 17, section 108(a)(1): "(1) the reproduction or distribution is made without any purpose of direct or indirect commercial advantage".
at various points in the suit there are references to the archive's income, both from its scanning services and from donations, as well as an unveiled show of envy at the over $ million that brewster kahle and his wife have in their jointly owned foundation. this is an attempt to show that the archive derives "direct or indirect commercial advantage" from cdl. non-profit organizations do indeed have income, otherwise they could not function; "non-profit" does not mean a lack of a revenue stream, it means returning revenue to the organization instead of taking it as profit. the argument relating to income is weakened by the fact that the archive is not charging for the books it lends. however, much depends on how the courts will interpret "indirect commercial advantage." the suit argues that the archive benefits generally from the scanned books because they enhance the archive's reputation, which possibly results in more donations. there is a section in the suit relating to the "sponsor a book" program, where someone can donate a specific amount to the archive to digitize a book. how many of us have not gotten a solicitation from a non-profit that makes statements like: "$ will feed a child for a day; $ will buy seed for a farmer, etc."? the attempt to correlate free use of materials with income may be hard to prove.  reading  decades ago, when the service questia was just being launched (questia ceased operation december , ), questia salespeople assured a group of us that their books were for "research, not reading." google used a similar argument to support its scanning operation, something like "search, not reading." the court decision in google's case decided that google's scanning was fair use (and transformative) because the books were not available for reading, as google was not presenting the full text of the book to its audience. [suit-g]  the archive has taken the opposite approach, a "books are for reading" view.
beginning with public domain books, many from the google books project, and then with in-copyright books, the archive has promoted reading. it developed its own in-browser reading software to facilitate reading of the books online. [reader] (*see note below) although the publishers sued google for its scanning, they lost due to the "search, not reading" aspect of that project. the archive has been very clear about its support of reading, which takes the google justification off the table.  "moreover, ia's massive book digitization business has no new purpose that is fundamentally different than that of the publishers: both distribute entire books for reading." ([suit] page ).   however, the archive's statistics on loaned books show that a large proportion of the books are used for minutes or less.  "patrons may be using the checked-out book for fact checking or research, but we suspect a large number of people are browsing the book in a way similar to browsing library shelves." [ia ]   in its article on cdl, the center for democracy and technology notes that "the majority of books borrowed through nel were used for less than minutes, suggesting that cdl's primary use is for fact-checking and research, a purpose that courts deem favorable in a finding of fair use." [cdt] the complication is that the same service seems to be used both for reading entire books and as a place to browse or to check individual facts (the facts themselves cannot be copyrighted). these may involve different sets of books, once again making it difficult to characterize the entire set of digitized books under a single legal claim.  the publishers claim that the archive is competing with them using pirated versions of their own products. that leads us to the question of whether the archive's books, presented for reading, are effectively substitutes for those of the publishers. although the archive offers actual copies, those copies are significantly inferior to the original.
however, the question of quality did not change the judgment in the lawsuit against copying of texts by kinko's [kinkos], which produced mediocre photocopies from printed and bound publications. it seems unlikely that the quality differential will serve to absolve the archive from copyright infringement, even though the poor quality of some of the books interferes with their readability.  digital is different  publishers have found a way to monetize digital versions, in spite of some risks, by taking advantage of the ability to control digital files with technology and by licensing, not selling, those files to individuals and to libraries. it's a "new product" that gets around first sale because, as it is argued, every transfer of a digital file makes a copy, and it is the making of copies that is covered by copyright law. [1stsale]  the upshot of this is that because a digital resource is licensed, not sold, the right to pass along, lend, or re-sell a copy (as per title 17, section 109) does not apply, even though technology solutions that would delete the sender's copy as the file safely reaches the recipient are not only plausible but have been developed. [resale]  "like other copyright sectors that license education technology or entertainment software, publishers either license ebooks to consumers or sell them pursuant to special agreements or terms." ([suit] page ) "when an ebook customer obtains access to the title in a digital format, there are set terms that determine what the user can or cannot do with the underlying file." ([suit] page ) this control goes beyond the copyright holder's rights in law: drm can exercise controls over the actual use of a file, limiting it to specific formats or devices, allowing or not allowing text-to-speech capabilities, even limiting copying to the clipboard.  publishers and libraries  the suit claims that publishers and libraries have reached an agreement, an equilibrium.
"to plaintiffs, libraries are not just customers but allies in a shared mission to make books available to those who have a desire to read, including, especially, those who lack the financial means to purchase their own copies." ([suit] page ). in the suit, publishers contrast the archive's operation with the relationship that publishers have with libraries. in contrast with the archive's lending program, libraries are the "good guys." "... the publishers have established independent and distinct distribution models for ebooks, including a market for lending ebooks through libraries, which are governed by different terms and expectations than print books." ([suit] page ). these "different terms" include charging much higher prices to libraries for ebooks and limiting the number of times an ebook can be loaned. [pricing1] [pricing2] "legitimate libraries, on the other hand, license ebooks from publishers for limited periods of time or a limited number of loans; or at much higher prices than the ebooks available for individual purchase." [agol] the equilibrium of which publishers speak looks less equal from the library side of the equation: library literature is replete with stories about the avarice of publishers in relation to library lending of ebooks. some authors and publishers even speak out against library lending of ebooks, claiming that it cuts into sales. (this same argument has been made for physical books.) "if, as macmillan has determined, % of ebook reads are occurring through libraries and that percentage is only growing, it means that we are training readers to read ebooks for free through libraries instead of buying them. with author earnings down to new lows, we cannot tolerate ever-decreasing book sales that result in even lower author earnings." [aglibend] [ag2] the ease of access to digital books has become a boon for book sales, and ebook sales are now rising while hard copy sales fall.
this economic factor is a motivator for any of those engaged with the book market. the archive's cdl is a direct affront to the revenue stream that publishers have carved out for specific digital products. there are indications that the ease of borrowing ebooks - not even needing to go to the physical library to borrow a book - is seen as a threat by publishers. this has already played out in other media, from music to movies.  it would be hard to argue that access to the archive's digitized books is merely a substitute for library access. many people do not have actual physical library access to the books that the archive lends, especially those digitized from the collections of academic libraries. this is particularly true when you consider that the archive's materials are available to anyone in the world with access to the internet. if you don't have an economic interest in book sales, and especially if you are an educator or researcher, this expanded access could feel long overdue.  we need numbers  we really do not know much about the uses of the archive's book collection. the lawsuit cites some statistics of "views" to show that the infringement has taken place, but the page in question does not explain what is meant by a "view". archive pages for downloadable files of metadata records also report "views", which most likely reflect views of that web page, since there is nothing viewable other than the page itself. open library book pages give "currently reading" and "have read" stats, but these are tags that users can manually add to the page for the work. to compound things, the books cited in the suit have been removed from the lending service (and are identified in the archive as being in the collection "litigation works").  although numbers may not affect the legality of controlled digital lending, the social impact of the archive's contribution to reading and research would be clearer if we had this information.
although the archive has provided a small number of testimonials, proof of use in educational settings would bolster the claims of social benefit, which in turn could strengthen a fair use defense.  notes  (*) the nwu has a slide show [nwu2] that explains what it calls controlled digital lending at the archive. unfortunately this document conflates the archive's book reader with cdl and therefore muddies the water. it muddies it because it does not distinguish between sending files to dedicated devices (which is what the kindle is) or to dedicated software like the libby app that libraries use, and the archive's use of a web-based reader. it is not beyond reason to suppose that the archive's reader software does not fully secure loaned items. the nwu claims that files representing all book pages viewed are left in the browser cache: "there's no attempt whatsoever to restrict how long any user retains these images". (i cannot reproduce this. in my minor experiments those files disappear at the end of the lending period, but this requires more concerted study.) however, this is not a fault of cdl but a fault of the reader software. the reader is software that works within a browser window. in general, electronic files that require secure and limited use are not used within browsers, which are general purpose programs. conflating the archive's reader software with controlled digital lending will only hinder understanding. cdl already has multiple components: 1) digitization of in-copyright materials; 2) lending of digital copies of in-copyright materials that are owned by the library in a 1-to-1 relation to physical copies. we can add 3) the leakage of page copies via the browser cache, but i maintain that poorly functioning software does not automatically moot points 1 and 2. i would prefer that we take each point on its own in order to get a clear idea of the issues. the nwu slides also refer to the archive's api, which allows linking to individual pages within books.
This is an interesting legal area, because such linking may be determined to be fair use regardless of the legality of the underlying copy. This becomes yet another issue to be discussed by the legal teams, but it is separate from the question of controlled digital lending. Let's stay focused. The International Federation of Library Associations has issued its own statement on controlled digital lending at https://www.ifla.org/publications/node/

Citations

[1stsale] https://abovethelaw.com/ / /a-digital-take-on-the-first-sale-doctrine/
[ag1] https://www.authorsguild.org/industry-advocacy/reselling-a-digital-file-infringes-copyright/
[ag2] https://www.authorsguild.org/industry-advocacy/authors-guild-survey-shows-drastic- -percent-decline-in-authors-earnings-in-last-decade/
[aghard] https://www.authorsguild.org/the-writing-life/why-is-it-so-goddamned-hard-to-make-a-living-as-a-writer-today/
[aglibend] https://www.authorsguild.org/industry-advocacy/macmillan-announces-new-library-lending-terms-for-ebooks/
[agol] https://www.authorsguild.org/industry-advocacy/update-open-library/
[cdl-blog] https://blog.archive.org/ / / /controlled-digital-lending-and-open-libraries-helping-libraries-and-readers-in-times-of-crisis/
[cdt] https://cdt.org/insights/up-next-controlled-digital-lendings-first-legal-battle-as-publishers-take-on-the-internet-archive/
[kinkos] https://law.justia.com/cases/federal/district-courts/fsupp/ / /
[nel] http://blog.archive.org/national-emergency-library/
[nwu] "Appeal from the victims of controlled digital lending (CDL)". (retrieved - - )
[nwu2] "What is the Internet Archive doing with our books?"
https://nwu.org/wp-content/uploads/ / /nwu-internet-archive-webinar- apr .pdf
[pricing1] https://www.authorsguild.org/industry-advocacy/e-book-library-pricing-the-game-changes-again/
[pricing2] https://americanlibrariesmagazine.org/blogs/e-content/ebook-pricing-wars-publishers-perspective/
[reader] BookReader
[resale] https://www.hollywoodreporter.com/thr-esq/appeals-court-weighs-resale-digital-files-
[suit] https://www.courtlistener.com/recap/gov.uscourts.nysd. /gov.uscourts.nysd. . . .pdf
[suit-g] https://cases.justia.com/federal/appellate-courts/ca / - / - - - - .pdf?ts=

Posted by Karen Coyle. No comments. Labels: controlled digital lending, copyright, ebooks, internet archive

Thursday, June

Women designing

Those of us in the library community are generally aware of our premier "designing woman," the so-called "mother of MARC," Henriette Avram. Avram designed the machine readable cataloging (MARC) record in the mid-1960s, a record format that is still being used today. MARC was way ahead of its time, using variable length data fields and a unique character set that was sufficient for most European languages, all thanks to Avram's vision and skill. I'd like to introduce you here to some of the designing women of the University of California library automation project, the project that created one of the first online catalogs at the beginning of the 1980s: Melvyl. Briefly, Melvyl was a union catalog that combined data from the libraries of the nine (at that time) University of California campuses. It was first brought up as a test system and then went "live" to the campuses; work on the catalog began some years earlier, and various designs were put forward and tested. Key designers were Linda Gallaher-Brown, who had one of the first masters degrees in computer science from UCLA, and Kathy Klemperer, who like many of us was a librarian turned systems designer.
We were struggling with how to create a functional relational database of bibliographic data (as defined by the MARC record) with computing resources that today would seem laughable but were "cutting edge" for that time. I remember Linda remarking that during one of her school terms she returned to her studies to learn that the newer generation of computers would have this thing called an "operating system," and she thought "why would you need one?" By the time of this photo she had come to appreciate what an operating system could do for you; the one we used at the time was an IBM OS. Kathy Klemperer was the creator of the database design diagrams that were so distinctive we called them "Klemperer-grams." Here's one:

[Melvyl database design Klemperer-gram, drawn and lettered by hand]

Not only did these describe a workable database design, they were impressively beautiful. Note that this not only predates the proposed RDA "database scenario" for a relational bibliographic design by many years, it provides a more detailed and most likely more accurate such design.

[RDA database scenario data design]

In the early days of the catalog we had a separate file and interface for the cataloged serials, based on a statewide project (including the California State Universities). Although it was possible to catalog serials in the MARC format, the detailed information about which issues the libraries held was stored in serials control databases that were separate from the library catalog, and many serials were represented by crusty cards that had been created decades before library automation. The group below developed and managed the CALLS (California Academic Library List of Serials). Four of those pictured were programmers, two were serials data specialists, and four had library degrees; obviously, these are overlapping sets. The project heads were Barbara Radke (right) and Theresa Montgomery (front, second from right).
At one point while I was still working on the Melvyl project, probably around the very late 1980s or early 1990s, I gathered up some organization charts that had been issued over the years and quickly calculated that during its history the technical staff that had created this early marvel had ranged from roughly half to a majority female. I did some talks at various conferences in which I called Melvyl a system "created by women." At my retirement I said the same thing in front of the entire current staff, and it was not well received by all. In that audience was one well-known member of the profession who later declared that he felt women needed more mentoring in technology because he had always worked primarily with men, even though he had in fact worked in an organization with a predominantly female technical staff; and another colleague who was incredulous when I stated once that women are not a minority, but over 50% of the world's population. He just couldn't believe it. While outright discrimination and harassment of women are issues that need to be addressed, the invisibility of women in the eyes of their colleagues and institutions is horribly damaging. There are many interesting projects, not least the Wikipedia Women in Red, that aim to show that there is no lack of accomplished women in the world; it's the acknowledgment of their accomplishments that falls short. In the library profession we have many women whose stories are worth telling. Please, let's make sure that future generations know that they have foremothers to look to for inspiration.

Posted by Karen Coyle. Labels: library catalogs, library history, open data, women and technology

Monday, May

I've been trying to capture what I remember about the early days of library automation. Mostly my memory is about fun discoveries in my particular area (processing MARC records into the online catalog).
I did run into an offprint of some articles in ITAL (*) which provide very specific information about the technical environment, and I thought some folks might find that interesting. This refers to the University of California Melvyl union catalog, which at the time had some hundreds of thousands of records.

- Operating system: IBM
- Programming language: PL/I
- CPU: megabytes of memory
- Storage: disk drives, a few gigabytes in all
- DBMS: ADABAS

The disk drives were each about the size of an industrial washing machine; in fact, we referred to the room that held them as "the laundromat." Telecommunications was a big deal, because there was no telecommunications network linking the libraries of the University of California. There wasn't even one connecting the campuses at all. The article talks about the various possibilities, from an X.25 network to the new TCP/IP protocol that allows "internetwork communication." The first network was a set of dedicated lines leased from the phone company that could transmit a limited number of characters per second (one character = one byte) to a small number of ASCII terminals at each campus over a low-baud line. There was a hope to be able to double the number of terminals. In the speculation about the future, there was doubt that it would be possible to open up the library system to folks outside of the UC campuses, much less internationally. (Melvyl was in fact one of the early library catalogs to be open access worldwide over the Internet, just a few years later.) It was also thought that libraries would charge other libraries to view their catalogs, kind of like an inter-library loan. And for anyone who has an interest in Z39.50, one section of the article, by David Shaughnessy and Clifford Lynch on telecommunications, outlines a need for catalog-to-catalog communication which sounds very much like the first glimmer of that protocol.

-----

(*) Various authors in a special edition: In-depth: University of California MELVYL.
Information Technology and Libraries. I wish I could give a better citation, but my offprint does not have page numbers and I can't find this indexed anywhere. (Cue here the usual irony that libraries are terrible at preserving their own story.)

Posted by Karen Coyle. No comments. Labels: library catalogs, library history

Monday, April

Ceci n'est pas une bibliothèque

In March 2020, the Internet Archive announced that it would "suspend waitlists for the 1.4 million (and growing) books in our lending library," a service it then named the National Emergency Library. These books were previously available for lending on a one-to-one basis with the physical book owned by the Archive, and as with physical books, users would have to wait for the book to be returned before they could borrow it. Worded as a suspension of waitlists due to the closure of schools and libraries caused by the coronavirus pandemic, this announcement essentially eliminated the one-to-one nature of the Archive's controlled digital lending program. Publishers were already making threatening noises about the digital lending when it adhered to lending limitations, and surely will be even more incensed about this unrestricted lending. I am not going to comment on the legality of the Internet Archive's lending practices; legal minds, perhaps motivated by future lawsuits, will weigh in on that. I do, however, have much to say on the use of the term "library" for this set of books. It's a topic worthy of a lengthy treatment, but I'll give only a brief account here.

Library … bibliothèque … Bibliothek

The roots "libr…" and "biblio…" both come down to us from ancient words for trees and tree bark. It is presumed that said bark was the surface for early writings.
"Libr…", from the Latin word liber meaning "book," in many languages forms the term for a bookseller's shop, while in English it has come to mean a collection of books and, from that, also the room or building where books are kept. "Biblio…" derives instead from the Greek biblion (one book) and biblia (books, plural). We get the word Bible through the Greek root, which leaked into old Latin and meant the book. Therefore it is no wonder that in the minds of many people, books = library. In fact, most libraries are large collections of books, but that does not mean that every large collection of books is a library. Amazon has a large number of books, but it is not a library; it is a store where books are sold. Google has quite a few books in its "book search" and even allows you to view portions of the books without payment, but it is also not a library; it's a search engine. The Internet Archive, Amazon, and Google all have catalogs of metadata for the books they are offering, some of it taken from actual library catalogs, but a catalog does not make a quantity of books into a library. After all, Home Depot has a catalog, Walmart has a catalog; in essence, any business with an inventory has a catalog.

"...most libraries are large collections of books, but that does not mean that every large collection of books is a library."

The library test

First, I want to note that the Internet Archive has met the State of California test to be defined as a library, and this has made it possible for the Archive to apply for library-related grants for some of its projects. That is a good thing, because it has surely strengthened the Archive and its activities. However, it must be said that the State of California requirements are pretty minimal, and seem to be limited to a non-profit organization making materials available to the general public without discrimination.
There doesn't seem to be a distinction between "library" and "archive" in the state legal code, although librarians and archivists would not generally consider them easily lumped together as equivalent services.

The collection

The Archive's blog post says that "the internet archive currently lends about as many as a us library that serves a population of about ,". As a comparison, I found in the statistics gathered by the California State Library those of the Benicia Public Library in Benicia, California. Well, you might say, that's not as good as over one million books at the Internet Archive. But here's the thing: those are not random books; they are books chosen to be, as far as the librarians could know, the best books for that small city. If Benicia residents were, for example, primarily Chinese-speaking, the library would surely have many books in Chinese. If the city had a large number of young families, then the children's section would get particular attention. The users of the Internet Archive's books are a self-selected (and currently undefined) set of Internet users. Equally difficult to define is the collection that is available to them:

"This library brings together all the books from Phillips Academy Andover and Marygrove College, and much of Trent University's collections, along with over a million other books donated from other libraries to readers worldwide that are locked out of their libraries."

Each of these is (or was, in the case of Marygrove, which has closed) a collection tailored to the didactic needs of that institution. How one translates that, if one can, to the larger Internet population is unknown. That a collection has served a specific set of users does not mean that it can serve all users equally well. Then there is that other million books, which are a complete black box.
Library science

I've argued before against dumping a large and undistinguished set of books on a populace, regardless of the good intentions of those doing so. Why not give the library users of a small city these one million books? The main reason is the ability of the library to fulfill the laws of library science:

1. Books are for use.
2. Every reader his or her book.
3. Every book its reader.
4. Save the time of the reader.
5. The library is a growing organism. [1]

The online collection of the Internet Archive nicely fulfills laws 1 and 5: the digital books are designed for use, and the library can grow somewhat indefinitely. The other three laws are unfortunately hindered by the somewhat haphazard nature of the set of books, combined with the lack of user services. Of the goals of librarianship, matching readers to books is the most difficult. Let's start with law 3, "every book its reader." When you follow the URL to the National Emergency Library, you see something like this:

[screenshot of the National Emergency Library display]

The lack of cover art is not the problem here. Look at what books you find: two meeting reports, one journal publication, and a book about hand surgery, all quite obscure. Scroll down for a bit and you will find it hard to locate items that are less obscure than this, although undoubtedly there are some good reads in this collection. These are not the books whose readers will likely be found in our hypothetical small city. These are books that even some higher education institutions would probably choose not to have in their collections. While these make the total number of available books large, they may not make the total number of useful books large. Winnowing this set into one or more (probably more) wheat-filled collections could greatly increase the usability of this set of books.

"While these make the total number of available books large, they may not make the total number of useful books large."
a large "anything goes" set of documents is a real challenge for laws and : every reader his or her book, and save the time of the reader. the more chaff you have the harder it is for a library user to find the wheat they are seeking. the larger the collection the more of the burden is placed on the user to formulate a targeted search query and to have the background to know which items to skip over. the larger the retrieved set, the less likely that any user will scroll through the entire display to find the best book for their purposes. this is the case for any large library catalog, but these libraries have built their collection around a particular set of goals. those goals matter. goals are developed to address a number of factors, like: what are the topics of interest to my readers and my institution? how representative must my collection be in each topic area? what are the essential works in each topic area? what depth of coverage is needed for each topic? [ ] if we assume (and we absolutely must assume this) that the user entering the library is seeking information that he or she lacks, then we cannot expect users to approach the library as an expert in the topic being researched. although anyone can type in a simple query, fewer can assess the validity and the scope of the results. a search on "california history" in the national emergency library yields some interesting-looking books, but are these the best books on the topic? are any key titles missing? these are the questions that librarians answer when developing collections. the creation of a well-rounded collection is a difficult task. there are actual measurements that can be run against library collections to determine if they have the coverage that can be expected compared to similar libraries. i don't know if any such statistical packages can look beyond quantitative measures to judge the quality of the collection; the ones i'm aware of look at call number ranges, not individual titles.  
Library service

The Archive's own documentation states that "the internet archive focuses on preservation and providing access to digital cultural artifacts. for assistance with research or appraisal, you are bound to find the information you seek elsewhere on the internet." After which it advises people to get help through their local public library. Helping users find materials suited to their need is a key service provided by libraries. When I began working in libraries, back in the dark ages, users generally entered the library and went directly to the reference desk to state the question that brought them to the institution. This changed when catalogs went online and were searchable by keyword, but prior to then the catalog in a public library was primarily a tool for librarians to use when helping patrons. Still, libraries have real or virtual reference desks because users are not expected to have the knowledge of libraries or of topics that would allow them to function entirely on their own. And while this is true for libraries, it is also true, perhaps even more so, for archives, whose collections can be difficult to navigate without specialized information. Admitting that you give no help to users seeking materials makes the use of the term "library" ... unfortunate.

What is to be done?

There are undoubtedly a lot of useful materials among the digital books at the Internet Archive. However, someone needing materials has no idea whether they can expect to find what they need in this amalgamation. The burden of determining whether the Archive's collection might suit their needs is left entirely up to the members of this very fuzzy set called "internet users." That the collection lends at the rate of a public library serving a small city shows that it is most likely under-utilized. Because the nature of the collection is unknown, one can't approach, say, a teacher of middle-school biology and say: "They've got what you need."
Yet the Archive cannot implement a policy to complete areas of the collection unless it knows what it has as compared to known needs.

"...these warehouses of potentially readable text will remain under-utilized until we can discover a way to make them useful in the ways that libraries have proved to be useful."

I wish I could say that a solution would be simple, but it would not be. For example, it would be great to extract from this collection works that are commonly held in specific topic areas in small, medium, and large libraries. The statistical packages that analyze library holdings are all, AFAIK, proprietary. (If anyone knows of an open source package that does this, please shout it out!) It would also be great to be able to connect library collections of analog books to their digital equivalents. That too is more complex than one would expect, and would have to be much simpler to be offered openly. [3] While some organizations move forward with digitizing books and other hard copy materials, these warehouses of potentially readable text will remain under-utilized until we can discover a way to make them useful in the ways that libraries have proved to be useful. This will mean taking seriously what modern librarianship has developed over its roughly two centuries, and in particular those laws that give us a philosophy to guide our vision of service to the users of libraries.

-----

[1] Even if you are familiar with the laws, you may not know that Ranganathan was not as succinct as this short list may imply. The book in which he introduces these concepts is hundreds of pages long, with extended definitions and many homey anecdotes and stories.

[2] A search on "collection development policy" will yield many pages of policies that you can peruse.
to make this a "one click" here are a few *non-representative* policies that you can take a peek at: hennepin county (public) lansing community college (community college) stanford university, science library (research library) [ ] dan scott and i did a project of this nature with a bay area public library and it took a huge amount of human intervention to determine whether the items matched were really "equivalent". that's a discussion for another time, but, man, books are more complicated than they appear. posted by karen coyle at : am no comments: labels: books, digital libraries, openlibrary monday, february , use the leader, luke! if you learned the marc format "on the job" or in some other library context you may have learned that the record is structured as fields with -digit tags, each with two numeric indicators, and that subfields have a subfield indicator (often shown as "$" because it is a non-printable character) and a single character subfield code (a-z, - ). that is all true for the marc records that libraries create and process, but the machine readable cataloging standard (z . or iso ) has other possibilities that we are not using. our "marc" (currently marc ) is a single selection from among those possibilities, in essence an application profile of the marc standard. the key to the possibilities afforded by marc is in the marc leader, and in particular in two positions that our systems generally ignore because they always contain the same values in our data: leader byte -- indicator count leader byte -- subfield code length in marc records, leader byte is always " " meaning that fields have -byte indicators, and leader byte is always because the subfield code is always two characters in length. that was a decision made early on in the life of marc records in libraries, and it's easy to forget that there were other options that were not taken. let's take a short look at the possibilities the record format affords beyond our choice. 
Both of these leader positions are single bytes that can take values from 0 to 9. An application could use the MARC record format and have zero indicators. It isn't hard to imagine an application that has no need of indicators, or that has determined to make use of subfields in their stead. As an example, the provenance of vocabulary data for thesauri like LCSH or the Art and Architecture Thesaurus could always be coded in a subfield rather than in an indicator:

$a religion and science $2 lcsh

Another common use of indicators in MARC is to give a byte count for the non-filing initial articles on title strings. Instead of using an indicator value for this, some libraries outside of the US developed a non-printing code to mark the beginning and end of the non-filing portion. I'll use backslashes to represent these codes in this example:

$a \the \birds of north america

I am not saying that all indicators in MARC should or even could be eliminated, but that we shouldn't assume that our current practice is the only way to code data. In the other direction, what if you could have more than two indicators? The MARC record would allow you to have as many as nine. In addition, there is nothing to say that each byte in the indicator has to be a separate data element; you could have nine indicator positions defined as two data elements (say, 4 + 5), or some other combination (3 + 3 + 3, perhaps). Expanding the number of indicators, or beginning with a larger number, could have prevented the split in provenance codes for subject vocabularies between one indicator value and the overflow subfield, $2, when the number exceeded the capability of a single numerical byte. Having three or four bytes for those codes in the indicator, and expanding the values to include a-z, would have been enough to include the full list of authorities for the data in the indicators. (Although I would still prefer putting them all in $2, using the mnemonic codes for ease of input.)
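The begin/end marker idea above is easy to demonstrate. In this sketch, backslashes again stand in for the non-printing codes; stripping the marked span yields the filing (sorting) form of the title, with no indicator byte count needed.

```python
import re

# Backslashes stand in for the non-printing codes that bracket the
# non-filing portion of the title, as in the example above.
title = r"\the \birds of north america"

# Remove the bracketed non-filing span to get the form used for sorting.
filing_form = re.sub(r"\\[^\\]*\\", "", title)
print(filing_form)  # birds of north america
```

Unlike a byte count in an indicator, the markers survive edits to the title string without needing to be recalculated.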
In the first University of California union catalog in the early 1980s, we expanded the MARC indicators to hold an additional two bytes (or was it four?) so that we could record, for each MARC field, which library had contributed it. Our union catalog record was a composite MARC record with fields from any and all of the libraries across the University of California system that contributed to the union catalog via a dozen or so separate record feeds from OCLC and RLIN. We treated the added indicator bytes as sets of bits, turning on bits to represent the catalog feeds from the libraries. If two or more libraries submitted exactly the same MARC field, we stored the field once and turned on a bit for each separate library feed. If a library submitted a field that was new to the record, we added the field and turned on the appropriate bit. When we created a user display, we selected fields from only one of the libraries. (The rules for that selection process were something of a secret, so as not to hurt anyone's feelings, but there was a "best" record for display.) It was a multi-library MARC record, made possible by the ability to use more than two indicators.

Now on to the subfield code. The rule for MARC is that there is a single subfield code, and that it is a lower case a-z or 0-9. The numeric codes have special meanings and do not vary by field; the alphabetic codes are a bit more flexible. That gives us 26 possible alphabetic subfields per tag, plus the pre-defined numeric ones. The MARC standard has chosen to limit the alphabetic subfield codes to lower case characters. As fields reached the limits of the available subfield codes (and many did over time), you might think that the easiest solution would be to allow upper case letters as subfield codes.
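The bit-per-feed technique described above can be sketched in a few lines. The feed names and field strings here are invented for illustration; the point is that an identical field submitted by several feeds is stored once, with one bit turned on per contributing feed.

```python
# Each catalog feed gets one bit, as in the extended indicator bytes
# described above. Feed names are hypothetical.
FEED_BIT = {"feed_a": 1 << 0, "feed_b": 1 << 1, "feed_c": 1 << 2}

field_sources = {}  # field content -> bitmask of feeds that supplied it

def add_field(field, feed):
    """Store the field once; OR in the bit for this feed."""
    field_sources[field] = field_sources.get(field, 0) | FEED_BIT[feed]

add_field("245 10 $a birds of north america", "feed_a")
add_field("245 10 $a birds of north america", "feed_b")  # duplicate field

mask = field_sources["245 10 $a birds of north america"]
print(len(field_sources))               # 1  (the field is stored once)
print(bool(mask & FEED_BIT["feed_b"]))  # True  (feed_b contributed it)
print(bool(mask & FEED_BIT["feed_c"]))  # False
```

Checking a bit with a mask is also how a display program could pick out only the fields belonging to one chosen "best" feed.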
Although the subfield code limitation was reached decades ago for some fields, I can personally attest to the fact that suggesting the expansion of subfield codes to upper case letters was met with horrified glares at the MARC standards meeting. While at the outset the range a-z seemed ample, that has not been the case for nearly half of the life-span of MARC. The MARC leader allows one to define up to 9 characters total for subfield codes. The value in this leader position includes the subfield delimiter, so this means that you can have a subfield delimiter and up to 8 characters to encode a subfield. Even expanding from a-z to aa-zz provides vastly more possibilities, and allowing upper case as well gives you a dizzying array of choices. The other thing to mention is that there is no prescription that field tags must be numeric. They are limited to three characters in the MARC standard, but those characters could be alphabetic (a-z) as well as numeric (0-9), not just 0-9, greatly expanding the possibilities for adding new tags. In fact, if you have been in the position to view internal system records in your vendor system, you may have seen that non-numeric tags have been used for internal system purposes, like noting who made each edit, or whether functions like automated authority control have been performed on the record. Many of the "violations" of the MARC rules listed here have been exploited internally, and since the early days of library systems. There are other modifiable leader values, in particular the one that determines the maximum length of a field, leader byte 20. MARC 21 has leader 20 set at "4," meaning that field lengths are expressed in four digits and so fields cannot be longer than 9999 bytes. That could be longer, although the record length itself is expressed in only five digits, so a record cannot be longer than 99999 bytes. However, one could limit fields to 999 bytes (leader value set at "3") for an application that does less pre-composing of data compared to MARC and therefore comfortably fits within a shorter field length.
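The expansion arithmetic above is worth spelling out. This little computation compares the current one-character, lower-case code space with the two-character alternatives, and the numeric tag space with an alphanumeric one:

```python
# Code-space arithmetic for the expansions discussed above.
single_lower = 26        # one-character subfield codes, a-z
double_lower = 26 ** 2   # two-character codes, aa-zz
double_mixed = 52 ** 2   # two characters, allowing upper case too

numeric_tags = 10 ** 3   # current practice: three digits, 000-999
alnum_tags = 36 ** 3     # three characters drawn from a-z plus 0-9

print(single_lower, double_lower, double_mixed)  # 26 676 2704
print(numeric_tags, alnum_tags)                  # 1000 46656
```

Even the most conservative expansion (aa-zz) multiplies the alphabetic code space by a factor of 26, which is why running out of codes was an avoidable problem.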
The reason that has been given, over time, why none of these changes were made was always: it's too late, we can't change our systems now. This is, as Caesar might have said, cacas tauri. Systems have been able to absorb some pretty intense changes to the record format and its contents, and a change like adding more subfield codes would not be impossible. The problem is not really with the MARC record but with our inability (or refusal) to plan and execute the changes needed to evolve our systems. We could sit down today and develop a plan and a timeline. If you are skeptical, here's an example of how one could manage a change in the length of the subfield codes:

A MARC record is retrieved for editing:
- Read the leader of the MARC record.
- If the value is "2" and you need to add a new subfield that uses the subfield delimiter plus two characters, convert all of the subfield codes in the record: $a becomes $aa, $b becomes $ba, etc., with the numeric codes converted in the same way.
- The leader value is changed to "3".
- (Alternatively, convert all records opened for editing.)

A MARC record is retrieved for display:
- Read the leader of the MARC record.
- If the value is "2", use the internal table of subfield codes for records with the value "2".
- If the value is "3", use the internal table of subfield codes for records with the value "3".

Sounds impossible? We moved from AACR to AACR2, and now from AACR2 to RDA, without going back and converting all of our records to the new content. We have added new fields to our records, such as the 336, 337, and 338 for RDA values, without converting all of the earlier records in our files to have these fields. The same with new subfields added only in recent years. Our files have been using mixed record types for at least a couple of generations -- generations of systems and generations of catalogers. Alas, the time to make these kinds of changes was many years ago. Would it be worth doing today?
That depends on whether we anticipate a change to BIBFRAME (or some other data format) in the near future. Changes do continue to be made to the MARC record; perhaps it would have a longer future if we could broach the subject of fixing some of the errors that were introduced in the past, in particular those that arose because of the limitations of MARC and that could be rectified with an expansion of the record standard. That may also help us avoid carrying over to a new record format, which does not need to be limited in these ways, some of the problems in MARC that are caused by these limitations.

Epilogue

Although the MARC record was incredibly advanced compared to other data formats of its time (the mid-1960s), it has some limitations that cannot be overcome within the standard itself. One obvious one is the limitation of the record length to 99999 bytes. Another is the fact that there are only two levels of nesting of data: the field and the subfield. There are times when a sub-subfield would be useful, such as when adding information that relates to only one subfield, not the entire field (provenance, an external URL link). I can't advocate for continuing the data format that is often called "binary MARC" simply because it has limitations that require work-arounds. MARCXML, as defined as a standard, gets around the field and record length limitations, but it is not allowed to vary from the MARC limitations on field and subfield coding. It would be incredibly logical to move to a "non-binary" record format (XML, JSON, etc.) beginning with the existing MARC data and allowing expansions where needed. It is the stubborn adherence to the ISO 2709 format that has really limited library data, and it is all the more puzzling because other solutions that can keep the data itself intact have been available for many decades.
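A non-binary serialization along these lines can be sketched in a few lines of Ruby. The JSON shape below (field, subfields, per-subfield notes acting as "sub-subfields") is purely illustrative -- an assumption for the sake of the example, not an existing standard:

```ruby
require "json"

# Illustrative sketch only: one way a JSON serialization could lift
# MARC's two-level field/subfield limit and its length limits. The
# structure shown here is invented for illustration.
field = {
  "tag" => "856",
  "indicators" => ["4", "0"],
  "subfields" => [
    { "code" => "u",
      "value" => "http://example.org/resource",
      # a "sub-subfield" attaching provenance to just this one
      # subfield -- something binary MARC cannot express
      "notes" => [{ "code" => "p", "value" => "checked 2013-03-01" }] }
  ]
}

json = JSON.generate(field)
parsed = JSON.parse(json)
```

Nothing in such a structure limits record length, field length, or nesting depth; those ceilings exist only because ISO 2709 encodes lengths in fixed-width directory entries.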
Posted by Karen Coyle. No comments. Labels: MARC

Tuesday, January

Pamflets

I was always a bit confused about the inclusion of "pamflets" in the subtitle of the Decimal System, such as this title page from an early edition: Did libraries at the time collect numerous pamphlets? For them to be the second-named type of material after books was especially puzzling. I may have discovered an answer to my puzzlement, if not the answer, in Andrea Costadoro's work: a "pamphlet" then was not (necessarily) what I had in mind, which was a flimsy publication of the type given out by businesses, tourist destinations, or public health offices. In Costadoro's day it appears that a pamphlet was a literary type, not a physical format. Costadoro says:

"It has been a matter of discussion what books should be considered pamphlets and what not. If this appellation is intended merely to refer to the size of the book, the question can be scarcely worth considering; but if it is meant to refer to the nature of a work, it may be considered to be of the same class and to stand in the same connexion with the word treatise as the words tract; hints; remarks; &c., when these terms are descriptive of the nature of the books to which they are affixed." (p. )

To be on the shelves of libraries, and cataloged, it is possible that these pamphlets were indeed bound, perhaps by the library itself. The Library of Congress genre list today has a cross-reference from "Pamphlet" to "Tract (Ephemera)". While Costadoro's definition doesn't assign any particular subject content to the type of work, LC's definition says that these are often issued by religious or political groups for proselytizing. So these are pamphlets in the sense of the political pamphlets of our Revolutionary War. Today they would be blog posts, or articles in BuzzFeed or Slate or any one of hundreds of online sites that post such content.
Churches I have visited often have short publications available near the entrance, and there is always The Watchtower, distributed by Jehovah's Witnesses at key locations throughout the world, which is something between a pamphlet (in the modern sense) and a journal issue. These are probably not gathered in most libraries today.

In Dewey's time the printing (and collecting by libraries) of sermons was quite common. In a world where many people either were not literate or did not have access to much reading material, the Sunday sermon was a "long form" work, read by a pastor who was probably not as eloquent as the published "stars" of the Sunday gatherings. Some sermons were brought together into collections and published; others were published (and seemingly bound) on their own. Dewey is often criticized for the bias in his classification, but what you find in the early editions serves as a brief overview of the printed materials that the US (and mostly East Coast) culture of that time valued.

What now puzzles me is what took the place of these tracts between the time of Dewey and the web. I can find archives of political and cultural pamphlets in various countries, and they all seem to end around the middle of the twentieth century, although some specific collections, such as the samizdat publications in the Soviet Union, exist in other time periods. Of course the other question now is: how many of today's tracts and treatises will survive if they are not published in book form?

Posted by Karen Coyle. No comments. Labels: classification, library history

Coyle's Information by Karen Coyle is licensed under a Creative Commons Attribution-NonCommercial United States License.
Automatically Extracting Keyphrases from Text

Ted Lawless

I've posted an explainer/guide to how we are automatically extracting keyphrases for Constellate, a new text analytics service from JSTOR and Portico. We are defining keyphrases as up-to-three-word phrases that are key, or important, to the overall subject matter of the document. "Keyphrase" is often used interchangeably with "keywords," but we are opting to use the former since it's more descriptive. We did a fair amount of reading to grasp prior art in this area (extracting keyphrases is a long-standing research topic in information retrieval and natural language processing) and ended up developing a custom solution based on term frequency in the Constellate corpus. If you are interested in this work generally, and not just the Constellate implementation, Burton DeWilde has published an excellent primer on automated keyphrase extraction. More information about Constellate can be found here. Disclaimer: this is a work-related post.
I don't intend to speak for my employer, ITHAKA. Any opinions expressed are my own.

Ted Lawless (lawlesst at gmail)

blog.cbeer.info
Chris Beer (chris@cbeer.info)

May ,

Autoscaling AWS Elastic Beanstalk worker tier based on SQS queue length

We are deploying a Rails application (for the Hydra-in-a-Box project) to AWS Elastic Beanstalk. Elastic Beanstalk offers us easy deployment, monitoring, and simple auto-scaling with a built-in dashboard and management interface. Our application uses several potentially long-running background jobs to characterize, checksum, and create derivatives for uploaded content. Since we're deploying this application within AWS, we're also taking advantage of the Simple Queue Service (SQS), using the active-elastic-job gem to queue and run ActiveJob tasks.

Elastic Beanstalk provides settings for "web server" and "worker" tiers. Web servers are provisioned behind a load balancer and handle end-user requests, while workers automatically handle background tasks (via SQS + active-elastic-job). Elastic Beanstalk provides basic autoscaling based on a variety of metrics collected from the underlying instances (CPU, network, I/O, etc.). While sufficient for our "web server" tier, we'd like to scale our "worker" tier based on the number of tasks waiting to be run. Currently, though, the ability to auto-scale the worker tier based on the underlying queue depth isn't enabled through the Elastic Beanstalk interface. However, as Beanstalk merely manages and aggregates other AWS resources, we have access to the underlying resources, including the autoscaling group for our environment. We should be able to attach a custom auto-scaling policy to that auto-scaling group to scale based on additional alarms. For example, let's say we want to add additional worker nodes if tasks have been waiting for more than a few minutes (and, to save money and resources, also remove worker nodes when there are no tasks available).
To create the new policy, we'll need to:

- find the appropriate auto-scaling group, by finding the auto-scaling group with the elasticbeanstalk:environment-id that matches the worker tier environment id;
- find the appropriate SQS queue for the worker tier;
- add auto-scaling policies that add (and remove) instances in the autoscaling group;
- create a new CloudWatch alarm that fires when the SQS queue exceeds our configured depth, triggering the auto-scaling policy that adds worker instances; and, conversely,
- create a new CloudWatch alarm that fires when the SQS queue is empty, triggering the auto-scaling policy that removes worker instances.

Even though there are several manual steps, they aren't too difficult (other than discovering the various resources we're trying to orchestrate), and using Elastic Beanstalk is still valuable for the rest of its functionality. But we're in the cloud, and really want to automate everything. With a little CloudFormation trickery, we can even automate creating the worker tier with the appropriate autoscaling policies.
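For reference, the scale-out alarm from the list above boils down to a handful of parameters. Here is a sketch of them as a Ruby hash; the helper name, queue name, threshold of 10, and period of 300 seconds are illustrative placeholders, not values from our stack:

```ruby
# Sketch only: build the parameter hash for a scale-out alarm on SQS
# queue depth. Threshold and period values are illustrative placeholders.
def scale_out_alarm_params(queue_name, scaling_policy_arn, threshold: 10)
  {
    alarm_name: "#{queue_name}-scale-out",
    namespace: "AWS/SQS",
    metric_name: "ApproximateNumberOfMessagesVisible",
    statistic: "Average",
    period: 300,               # seconds per evaluation period
    evaluation_periods: 1,
    threshold: threshold,
    comparison_operator: "GreaterThanOrEqualToThreshold",
    dimensions: [{ name: "QueueName", value: queue_name }],
    # fire the scaling policy when the alarm trips
    alarm_actions: [scaling_policy_arn]
  }
end
```

These are the same metric (ApproximateNumberOfMessagesVisible in the AWS/SQS namespace) and comparison operator that the CloudFormation template declares; a hash like this is the shape one would hand to CloudWatch's put-metric-alarm API if creating the alarm manually.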
First, knowing that the CloudFormation API allows us to pass in an existing SQS queue for the worker tier, let's create an explicit SQS queue resource for the workers:

"DefaultQueue" : {
  "Type" : "AWS::SQS::Queue"
}

and wire it up to the Beanstalk application by setting the aws:elasticbeanstalk:sqsd WorkerQueueURL (not shown: sending the worker queue to the web server tier):

"WorkersConfigurationTemplate" : {
  "Type" : "AWS::ElasticBeanstalk::ConfigurationTemplate",
  "Properties" : {
    "ApplicationName" : { "Ref" : "AWS::StackName" },
    "OptionSettings" : [
      ...,
      {
        "Namespace": "aws:elasticbeanstalk:sqsd",
        "OptionName": "WorkerQueueURL",
        "Value": { "Ref" : "DefaultQueue" }
      }
    ]
  }
},
"WorkerEnvironment": {
  "Type": "AWS::ElasticBeanstalk::Environment",
  "Properties": {
    "ApplicationName": { "Ref" : "AWS::StackName" },
    "Description": "Worker environment",
    "EnvironmentName": { "Fn::Join": ["-", [{ "Ref" : "AWS::StackName" }, "workers"]] },
    "TemplateName": { "Ref": "WorkersConfigurationTemplate" },
    "Tier": { "Name": "Worker", "Type": "SQS/HTTP" },
    "SolutionStackName" : "64bit Amazon Linux ... running Ruby ... (Puma)"
    ...
  }
}

Using our queue we can describe one of the CloudWatch::Alarm resources and start describing a scaling policy:

"ScaleOutAlarm" : {
  "Type": "AWS::CloudWatch::Alarm",
  "Properties": {
    "MetricName": "ApproximateNumberOfMessagesVisible",
    "Namespace": "AWS/SQS",
    "Statistic": "Average",
    "Period": " ",
    "Threshold": " ",
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "Dimensions": [
      {
        "Name": "QueueName",
        "Value": { "Fn::GetAtt" : ["DefaultQueue", "QueueName"] }
      }
    ],
    "EvaluationPeriods": " ",
    "AlarmActions": [{ "Ref" : "ScaleOutPolicy" }]
  }
},
"ScaleOutPolicy" : {
  "Type": "AWS::AutoScaling::ScalingPolicy",
  "Properties": {
    "AdjustmentType": "ChangeInCapacity",
    "AutoScalingGroupName": ????,
    "ScalingAdjustment": " ",
    "Cooldown": " "
  }
},

However, to connect the policy to the auto-scaling group, we need to know the name of the autoscaling group.
Unfortunately, the autoscaling group is abstracted behind the Beanstalk environment. To gain access to it, we'll need to create a custom resource backed by a Lambda function to extract the information from the AWS APIs:

"BeanstalkStack": {
  "Type": "Custom::BeanstalkStack",
  "Properties": {
    "ServiceToken": { "Fn::GetAtt" : ["BeanstalkStackOutputs", "Arn"] },
    "EnvironmentName": { "Ref": "WorkerEnvironment" }
  }
},
"BeanstalkStackOutputs": {
  "Type": "AWS::Lambda::Function",
  "Properties": {
    "Code": {
      "ZipFile": { "Fn::Join": ["\n", [
        "var response = require('cfn-response');",
        "exports.handler = function(event, context) {",
        "  console.log('Request received:\\n', JSON.stringify(event));",
        "  if (event.RequestType == 'Delete') {",
        "    response.send(event, context, response.SUCCESS);",
        "    return;",
        "  }",
        "  var environmentName = event.ResourceProperties.EnvironmentName;",
        "  var responseData = {};",
        "  if (environmentName) {",
        "    var aws = require('aws-sdk');",
        "    var eb = new aws.ElasticBeanstalk();",
        "    eb.describeEnvironmentResources({EnvironmentName: environmentName}, function(err, data) {",
        "      if (err) {",
        "        responseData = { Error: 'describeEnvironmentResources call failed' };",
        "        console.log(responseData.Error + ':\\n', err);",
        "        response.send(event, context, response.FAILED, responseData);",
        "      } else {",
        "        responseData = { AutoScalingGroupName: data.EnvironmentResources.AutoScalingGroups[0].Name };",
        "        response.send(event, context, response.SUCCESS, responseData);",
        "      }",
        "    });",
        "  } else {",
        "    responseData = { Error: 'Environment name not specified' };",
        "    console.log(responseData.Error);",
        "    response.send(event, context, response.FAILED, responseData);",
        "  }",
        "};"
      ]]}
    },
    "Handler": "index.handler",
    "Runtime": "nodejs",
    "Timeout": " ",
    "Role": { "Fn::GetAtt" : ["LambdaExecutionRole", "Arn"] }
  }
}

With the custom resource, we can finally get access to the autoscaling group name and complete the scaling policy:

"ScaleOutPolicy" : {
  "Type": "AWS::AutoScaling::ScalingPolicy",
"properties": { "adjustmenttype": "changeincapacity", "autoscalinggroupname": { "fn::getatt": [ "beanstalkstack", "autoscalinggroupname" ] }, "scalingadjustment": " ", "cooldown": " " } }, the complete worker tier is part of our cloudformation stack: https://github.com/hybox/aws/blob/master/templates/worker.json mar , ldpath in examples at code lib , i gave a quick lightning talk on ldpath, a declarative domain-specific language for flatting linked data resources to a hash (e.g. for indexing to solr). ldpath can traverse the linked data cloud as easily as working with local resources and can cache remote resources for future access. the ldpath language is also (generally) implementation independent (java, ruby) and relatively easy to implement. the language also lends itself to integration within development environments (e.g. ldpath-angular-demo-app, with context-aware autocompletion and real-time responses). for me, working with the ldpath language and implementation was the first time that linked data moved from being a good idea to being a practical solution to some problems. here is a selection from the viaf record [ ]: <> void:indataset <../data> ; a genont:informationresource, foaf:document ; foaf:primarytopic <../ > . <../ > schema:alternatename "bittman, mark" ; schema:birthdate " - - " ; schema:familyname "bittman" ; schema:givenname "mark" ; schema:name "bittman, mark" ; schema:sameas , ; a schema:person ; rdfs:seealso <../ >, <../ >, <../ >, <../ >, <../ >, <../ > ; foaf:isprimarytopicof . we can use ldpath to extract the person’s name: so far, this is not so different from traditional approaches. but, if we look deeper in the response, we can see other resources, including books by the author. <../ > schema:creator <../ > ; schema:name "how to cook everything : simple recipes for great food" ; a schema:creativework . 
We can traverse the links to include the titles in our record. LDPath also gives us the ability to write this query using a reverse property selector, e.g.:

books = foaf:primaryTopic / ^schema:creator[rdf:type is schema:CreativeWork] / schema:name :: xsd:string ;

The resource links out to some external resources, including a link to DBpedia. Here is a selection from the record in DBpedia:

dbpedia-owl:abstract "Mark Bittman (born c. ) is an American food journalist, author, and columnist for the New York Times."@en,
  "Mark Bittman est un auteur et chroniqueur culinaire américain. Il a tenu une chronique hebdomadaire pour le The New York Times, appelée The Minimalist (« le minimaliste »). Bittman continue d'écrire pour le New York Times Magazine, et participe à la section opinion du journal. Il tient également un blog."@fr ;
dbpedia-owl:birthDate " + : "^^ ;
dbpprop:name "Bittman, Mark"@en ;
dbpprop:shortDescription "American journalist, food writer"@en ;
dc:description "American journalist, food writer", "American journalist, food writer"@en ;
dcterms:subject , , , , , , ;

LDPath allows us to transparently traverse that link, allowing us to extract the subjects for the VIAF record.

[1] If you're playing along at home, note that, as of this writing, viaf.org fails to correctly implement content negotiation and returns HTML if it appears anywhere in the Accept header, e.g.:

curl -H "Accept: application/rdf+xml, text/html; q= . " -v http://viaf.org/viaf/ /

will return a text/html response. This may cause trouble for your linked data clients.

Mar ,

Building a Pivotal Tracker IRC bot with Sinatra and Cinch

We're using Pivotal Tracker on the Fedora Futures project. We also have an IRC channel where the tech team hangs out most of the day, and we let each other know what we're working on, which tickets we're taking, and give each other feedback on those tickets.
In order to document this, we try to put most of the discussion in the tickets for future reference (although we are logging the IRC channel, it's not nearly as easy to look up decisions there). Because we're (lazy) developers, we wanted updates in Pivotal to get surfaced in the IRC channel. There was a (neglected) IRC bot, pivotal-tracker-irc-bot, but it was designed to push and pull data from Pivotal based on commands in IRC (and seems fairly abandoned). So, naturally, we built our own integration: pivotal-irc.

This was my first time using Cinch to build a bot, and it was a surprisingly pleasant and straightforward experience:

bot = Cinch::Bot.new do
  configure do |c|
    c.nick = $nick
    c.server = $irc_server
    c.channels = [$channel]
  end
end

# Launch the bot in a separate thread, because we're using this one for the webapp.
Thread.new { bot.start }

And we have a really tiny Sinatra app that can parse the Pivotal webhooks payload and funnel it into the channel:

post '/' do
  message = Pivotal::WebhookMessage.new request.body.read
  bot.channel_list.first.msg("#{message.description} #{message.story_url}")
end

It turns out we also send links to Pivotal tickets not infrequently, and building two-way communication (using the Pivotal REST API, and the handy pivotal-tracker gem) was also easy. Cinch exposes a handy DSL that parses messages using regular expressions and capturing groups:

bot.on :message, /story\/show\/([0-9]+)/ do |m, ticket_id|
  story = project.stories.find(ticket_id)
  m.reply "#{story.story_type}: #{story.name} (#{story.current_state}) / owner: #{story.owned_by}"
end
Mar ,

Real-time statistics with Graphite, StatsD, and GDash

We have a Graphite-based stack of real-time visualization tools, including the data aggregator StatsD. These tools let us easily record real-time data from arbitrary services with minimal fuss. We present some curated graphs through GDash, a simple Sinatra front-end.

For example, we record the time it takes for Solr to respond to queries from our SearchWorks catalog, using this simple bash script:

tail -f /var/log/tomcat/catalina.out | ruby solr_stats.rb

(We rotate these logs through truncation; you can also use `tail -f --retry` for logs that are moved away when rotated.)

And the Ruby script that does the actual parsing:

require 'statsd.rb'
statsd = Statsd.new(...)

# listen to stdin
while str = gets
  if str =~ /QTime=([^ ]+)/
    # extract the QTime, in ms
    ms = $1.to_i
    # record it, based on our hostname
    statsd.timing("#{ENV['HOSTNAME'].gsub('.', '-')}.solr.qtime", ms)
  end
end

From this data, we can start asking questions like: Is our load-balancer configured optimally? (Hint: not quite; for a variety of reasons, we've sacrificed some marginal performance benefit for this non-invasive, simpler load-balance configuration.) Why are our upper-percentile query times creeping up? (Answers to these questions and more in a future post, I'm sure.)

We also use this setup to monitor other services, e.g.: What's happening in our Fedora instance (and which services are using the repository)? Note the red line in the top graph. It marks the point where our (asynchronous) indexing system is unable to keep up with demand, and updates may appear at a delay.

Given time (and sufficient data, of course), this also gives us the ability to forecast and plan for issues: Is our Solr query time getting worse? (Graphite can perform some basic manipulation, including taking integrals and derivatives.) What is the rate of growth of our indexing backlog, and can we process it in a reasonable timeframe, or should we scale the indexer service? Given our rate of disk usage, are we on track to run out of disk space this month? This week?

If we build graphs to monitor those conditions, we can add Nagios alerts to trigger service alerts. GDash helpfully exposes a REST endpoint that lets us know if a service has hit those warn or critical thresholds.
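As an aside, the upper-percentile lines on these graphs come from percentile aggregation over the recorded timings. The idea can be sketched with a simple nearest-rank computation (illustrative only: the qtime values below are made up, and this is not necessarily StatsD's exact algorithm):

```ruby
# Nearest-rank percentile over a set of recorded timings (in ms).
# Illustration only: sample data is made up, and this is not
# necessarily the exact aggregation StatsD performs.
def percentile(values, pct)
  sorted = values.sort
  rank = (pct / 100.0 * sorted.length).ceil - 1   # nearest-rank index
  sorted[[rank, 0].max]
end

qtimes = [12, 5, 7, 120, 9, 15, 8, 6, 250, 10]
percentile(qtimes, 50)   # => 9
percentile(qtimes, 90)   # => 120
```

The gap between the median (9 ms) and the 90th percentile (120 ms) is exactly why the upper percentiles are worth graphing: a handful of slow outliers is invisible in an average.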
We currently have a home-grown system monitoring system that we're tempted to fold into here as well. I've been evaluating Diamond, which seems to do a pretty good job of collecting granular system statistics (CPU, RAM, IO, disk space, etc.).

Mar ,

Icemelt: a stand-in for integration tests against AWS Glacier

One of the threads we've been pursuing as part of the Fedora Futures project is integration with asynchronous and/or very slow storage. We've taken on AWS Glacier as a prime, generally accessible example. Uploading content is slow, but can be done synchronously in one API request:

POST /:account_id/vaults/:vault_id/archives
x-amz-archive-description: description

...request body (aka your content)...

Where things get radically different is when requesting content back. First, you let Glacier know you'd like to retrieve your content:

POST /:account_id/vaults/:vault_id/jobs HTTP/1.1

{
  "Type": "archive-retrieval",
  "ArchiveId": String,
  [...]
}

Then, you wait. And wait. And wait some more; from the documentation:

Most Amazon Glacier jobs take about four hours to complete. You must wait until the job output is ready for you to download. If you have either set a notification configuration on the vault identifying an Amazon Simple Notification Service (Amazon SNS) topic or specified an Amazon SNS topic when you initiated a job, Amazon Glacier sends a message to that topic after it completes the job. [emphasis added]

Icemelt

If you're iterating on some code, waiting hours to get your content back isn't realistic. So we wrote a quick Sinatra app called Icemelt in order to mock the Glacier REST API (perhaps taking less time to code than retrieving content from Glacier). We've tested it using the Ruby Fog client, as well as the official AWS Java SDK, and it actually works! Your content gets stored locally, and the delay for retrieving content is configurable (default: a few seconds).
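The heart of such a mock is just a retrieval job that refuses to be ready until a configurable delay has passed. A minimal sketch of the idea (not Icemelt's actual implementation; the class name is invented):

```ruby
# Minimal sketch of a delayed-retrieval job, simulating Glacier's
# hours-long turnaround in (configurable) seconds. Not Icemelt's
# actual implementation.
class MockRetrievalJob
  def initialize(delay_seconds)
    @ready_at = Time.now + delay_seconds
  end

  # Clients poll until the "archive" is ready, just as they would poll
  # Glacier (or wait for its SNS notification).
  def ready?
    Time.now >= @ready_at
  end
end

job = MockRetrievalJob.new(1)   # content "retrievable" after 1 second
job.ready?                      # => false right after the job is initiated
sleep 1.1
job.ready?                      # => true once the configured delay elapses
```

Dropping the delay to a second or two keeps an integration test fast while still exercising the initiate-then-poll flow that real Glacier clients must implement.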
Configuring the official SDK looks something like this:

PropertiesCredentials credentials = new PropertiesCredentials(
    TestIcemeltGlacierMock.class
        .getResourceAsStream("AwsCredentials.properties"));

AmazonGlacierClient client = new AmazonGlacierClient(credentials);
client.setEndpoint("http://localhost: /");

And for Fog, something like:

Fog::AWS::Glacier.new :aws_access_key_id => '',
  :aws_secret_access_key => '',
  :scheme => 'http',
  :host => 'localhost',
  :port => ' '

Right now, Icemelt skips a lot of unnecessary work (e.g. checking HMAC digests for authentication, validating hashes, etc.), but, as always, patches are very welcome.

Nations Buying as Hackers Sell Flaws in Computer Code - The New York Times
https://www.nytimes.com/ / / /world/europe/nations-buying-as-hackers-sell-computer-flaws.html

Luigi Auriemma looks for flaws in computer code that his customers can exploit. Credit: Gianni Cipriano for The New York Times

By Nicole Perlroth and David E. Sanger
July ,

On the tiny Mediterranean island of Malta, two Italian hackers have been searching for bugs — not the island's many beetle varieties, but secret flaws in computer code that governments pay hundreds of thousands of dollars to learn about and exploit. The hackers, Luigi Auriemma and Donato Ferrante, sell technical details of such vulnerabilities to countries that want to break into the computer systems of foreign adversaries. The two will not reveal the clients of their company, ReVuln, but big buyers of services like theirs include the National Security Agency — which seeks the flaws for America's growing arsenal of cyberweapons — and American adversaries like the Revolutionary Guards of Iran.
All over the world, from South Africa to South Korea, business is booming in what hackers call "zero days," the coding flaws in software like Microsoft Windows that can give a buyer unfettered access to a computer and any business, agency or individual dependent on one. Just a few years ago, hackers like Mr. Auriemma and Mr. Ferrante would have sold the knowledge of coding flaws to companies like Microsoft and Apple, which would fix them. Last month, Microsoft sharply increased the amount it was willing to pay for such flaws, raising its top offer to $ , .

But increasingly the businesses are being outbid by countries with the goal of exploiting the flaws in pursuit of the kind of success, albeit temporary, that the United States and Israel achieved three summers ago when they attacked Iran's nuclear enrichment program with a computer worm that became known as "Stuxnet."

The flaws get their name from the fact that once discovered, "zero days" exist for the user of the computer system to fix them before hackers can take advantage of the vulnerability. A "zero-day exploit" occurs when hackers or governments strike by using the flaw before anyone else knows it exists, like a burglar who finds, after months of probing, that there is a previously undiscovered way to break into a house without sounding an alarm.

"Governments are starting to say, 'In order to best protect my country, I need to find vulnerabilities in other countries,'" said Howard Schmidt, a former White House cybersecurity coordinator. "The problem is that we all fundamentally become less secure."

A zero-day bug could be as simple as a hacker's discovering an online account that asks for a password but does not actually require typing one to get in. Bypassing the system by hitting the "Enter" key becomes a zero-day exploit. The average attack persists for almost a year before it is detected, according to Symantec, the maker of antivirus software.
Until then it can be exploited or "weaponized" by both criminals and governments to spy on, steal from or attack their target. Ten years ago, hackers would hand knowledge of such flaws to Microsoft and Google free, in exchange for a T-shirt or perhaps for an honorable mention on a company's web site. Even today, so-called patriotic hackers in China regularly hand over the information to the government.

Now, the market for information about computer vulnerabilities has turned into a gold rush. Disclosures by Edward J. Snowden, the former N.S.A. consultant who leaked classified documents, made it clear that the United States is among the buyers of programming flaws. But it is hardly alone. Israel, Britain, Russia, India and Brazil are some of the biggest spenders. North Korea is in the market, as are some Middle Eastern intelligence services. Countries in the Asian Pacific, including Malaysia and Singapore, are buying, too, according to the Center for Strategic and International Studies in Washington.

To connect sellers and buyers, dozens of well-connected brokers now market information on the flaws in exchange for a percentage cut. Some hackers get a deal collecting royalty fees for every month their flaw is not discovered, according to several people involved in the market. Some individual brokers, like one in Bangkok who goes by "the Grugq" on Twitter, are well known. But after the Grugq spoke to Forbes last year, his business took a hit from the publicity, according to a person familiar with the impact, primarily because buyers demand confidentiality.

A broker's approach need not be subtle. "Need code execution exploit urgent," read the subject line of an e-mail sent from one contractor's intermediary last year to Billy Rios, a former security engineer at Microsoft and Google who is now a director at Cylance, a security start-up. "Dear friend," the e-mail began.
"Do you have any code execution exploit for Windows, Mac, for applications like browser, Office, Adobe, SWF, any?" "If yes," the e-mail continued, "payment is not an issue." For start-ups eager to displace more established military contractors, selling vulnerabilities, and expertise about how to use them, has become a lucrative opportunity. Firms like Vupen in Montpellier, France; Netragard in Acton, Mass.; Exodus Intelligence in Austin, Tex.; and ReVuln, Mr. Auriemma's and Mr. Ferrante's Maltese firm, freely advertise that they sell knowledge of the flaws for cyberespionage and in some cases for cyberweapons. Outside Washington, a Virginia start-up named Endgame, in which a former director of the N.S.A. is playing a major role, is more elusive about its abilities. But it has developed a number of tools that it sells primarily to the United States government to discover vulnerabilities, which can be used for fighting cyberespionage and for offensive purposes. Like ReVuln, none of the companies will disclose the names of their customers. But Adriel Desautels, the founder of Netragard, said that his clients were "strictly U.S. based" and that Netragard's "exploit acquisition program" had doubled in size in the past three years. The average flaw now sells from around $ , to $ , . [Photo: Donato Ferrante, a partner in the business; such ventures are booming worldwide. Credit: Ryan Enn Hughes for The New York Times] Chaouki Bekrar, the founder of Vupen, said his company did not sell to countries that are "subject to European Union, United States or United Nations restrictions or embargoes." He also said revenue was doubling every year as demand surged. Vupen charges customers an annual $ , subscription fee to shop through its catalog, and then charges per sale. Costs depend on the sophistication of the vulnerability and the pervasiveness of the operating system.
ReVuln specializes in finding remote vulnerabilities in industrial control systems that can be used to access, or disrupt, water treatment facilities, oil and gas pipelines and power plants. "They are engaging in willful blindness," said Christopher Soghoian, a senior policy analyst at the American Civil Liberties Union. Many technology companies have started "bug bounty" programs in which they pay hackers to tell them about bugs in their systems rather than have the hackers keep the flaws to themselves, or worse, sell them on the black market. Nearly a decade ago the Mozilla Foundation started one of the first bounty programs to pay for bugs in its Firefox browser. Since then, Google, Facebook and PayPal have all followed suit. In recent months, bounties have soared. Google started paying hackers up to $ , . (the number is hacker code for "elite") for bugs in its web browser Chrome. Last month, Google increased its cash prize to $ , for flaws found in some of its widely used products. Facebook began a similar program and has since paid out $ million. (One payout included $ , to a -year-old. The most it has paid for a single bug is $ , .) "The program undermines the incentive to hold on to a bug that might be worth nothing in a day," said Joe Sullivan, Facebook's chief security officer. It has also had the unintended effect of encouraging ethical hackers to turn in others who planned to use its bugs for malicious purposes. "We've seen people back-stab other hackers by ratting out a bug that another person planned to use maliciously," he said. Microsoft, which had long resisted such a program, did an about-face last month when it announced that it would pay hackers as much as $ , for information about a single flaw, if they also provided a way to defend against it. Apple still has no such program, but its vulnerabilities are some of the most coveted.
In one case, a zero-day exploit in Apple's iOS operating system sold for $ , , according to two people briefed on the sale. Still, said Mr. Soghoian of the A.C.L.U., "the bounties pale in comparison to what the government pays." The military establishment, he said, "created Frankenstein by feeding the market." In many ways, the United States government created the market. When the United States and Israel used a series of flaws, including one in a Windows font program, to unleash what became known as the Stuxnet worm, a sophisticated cyberweapon used to temporarily cripple Iran's ability to enrich uranium, it showed the world what was possible. It also became a catalyst for a cyberarms race. When the Stuxnet code leaked out of the Natanz nuclear enrichment plant in Iran, the flaws suddenly took on new value. Subsequent discoveries of sophisticated state-sponsored computer viruses named Flame and Duqu that used flaws to spy on computers in Iran have only fueled interest. "I think it is fair to say that no one anticipated where this was going," said one person who was involved in the early American and Israeli strategy. "And today, no one is sure where it is going to end up." In a prescient paper, Charlie Miller, a former N.S.A. employee, described the profitable alternatives for hackers who might otherwise have turned their information about flaws over to the vendor free, or sold it for a few thousand dollars to programs like Tipping Point's Zero Day Initiative, now run by Hewlett-Packard, which used them to enhance their security research. He described how one American government agency offered him $ , for a Linux bug. He asked another for $ , , which agreed "too quickly," Mr. Miller wrote. "I had probably not asked for enough." Because the bug did not work with a particular flavor of Linux, Mr. Miller eventually sold it for $ , . But the takeaway for him and his fellow hackers was clear: there was serious money to be made selling the flaws.
At their conventions, hackers started flashing signs that read, "No More Free Bugs." Hackers like Mr. Auriemma, who once gave away their bugs to software vendors and antivirus makers, now sound like union organizers declaring their rights. "Providing professional work for free to a vendor is unethical," Mr. Auriemma said. "Providing professional work almost for free to security companies that make their business with your research is even more unethical." Experts say there is limited incentive to regulate a market in which government agencies are some of the biggest participants. "If you try to limit who you do business with, there's the possibility you will get shut out," Mr. Schmidt said. "If someone comes to you with a bug that could affect millions of devices and says, 'You would be the only one to have this if you pay my fee,' there will always be someone inclined to pay it." "Unfortunately," he said, "dancing with the devil in cyberspace has been pretty common."

Jakoblog — the weblog of Jakob Voß

First explicit design of a digital library

Posted in March. Comments.

I am researching digital libraries (once again) and have been wondering when the term was first used.
According to Google Books, after sorting out false positives, "digital library" first appears in a report for the US State Department. I have entered the bibliographic data at Wikidata. The report, "The Need for Fundamental Research in Seismology," was produced to investigate how seismic waves could be used to detect nuclear weapons tests. In an appendix, John Gerrard, one of fourteen scientists involved in the study, laid out on two pages the need for a computing center with an IBM computer. Since the US government document is in the public domain, here are the relevant pages. The planned digital library is a collection of research data together with scientific software for deriving new insights from that data:

The following facilities should be available: a computer equivalent to the IBM series, plus necessary peripheral equipment; facilities for converting standard seismograms into digital form; a library of records of earthquakes and explosions in form suitable for machine analysis; a (growing) library of basic programs which have proven useful in investigations of seismic disturbances and related phenomena. …

Sounds rather current, doesn't it? I also liked the description of the computing center as an "open shop" and the remark that "nothing can dampen enthusiasm for new ideas quite as effectively as long periods of waiting time". In the text, the term "digital library" refers primarily to the collection of digitized seismograms. At the end of the recommendation the term "digitized library" is used instead, which suggests that the two terms were used synonymously. Interestingly, "library" also refers to the collection of computer programs. Unfortunately I could not find out whether the recommended computing center with its digital library was ever realized (probably not). About the author:
I know little more about Dr. John Gerrard than that he worked as Director of Data Systems and Earth Science Research at Texas Instruments (TI). TI was founded as "Geophysical Service Incorporated" for seismic exploration of oil deposits and received the government contract to monitor nuclear weapons tests (Project Vela Uniform). A former colleague remembers Gerrard in this interview: "John Gerrard: into digital seismology, and he could see a little bit of the future of digital processing and he talked about how that could be effective in seismology, he was right that this would be important in seismology." There is a geologist of the same name in Birmingham, but he was born later. I suspect that Gerrard was involved at TI in the development of the Texas Instruments Automatic Computer (TIAC), which was designed specifically for digital processing of seismic data. Computers arrived in traditional libraries only with the next generation of machines: the MARC format was developed in the s on the IBM System/ (by Henriette Avram, who had previously worked with IBM at the NSA). Before that there was the fictional library computer EMMARAC (a nod to ENIAC and UNIVAC) in "Desk Set" (German title: "Eine Frau, die alles weiß"), with Katharine Hepburn as the librarian and Spencer Tracy as the computer salesman. Until the end of the s, incidentally, the term "digital library" appears only sporadically in Google Books.

Tags: digital library, history. Comments.

Data models age like parents

Posted in March. No comments.

Denny Vrandečić, employed as an ontologist at Google, noticed that all six of the six linked data applications linked to years ago (IWB, Tabulator, Disko, Marbles, rdfbrowser, and Zitgist) have disappeared or changed their calling syntax. This reminded me of a proverb about software and data: software ages like fish, data ages like wine.
The original form of this saying seems to come from James Governor (@monkchips), who derived it from an earlier phrase: hardware is like fish, operating systems are like wine. The analogy of fishy applications and delightful data has been repeated and explained and criticized several times. I fully agree with the part about software rot, but I doubt that data actually ages like wine (I'd prefer whisky anyway). A more accurate simile may be "data ages like things you put into your crowded cellar and then forget about". Thinking a lot about data, I have found that data is less interesting than the structures and rules that shape and restrict it: data models, ontologies, schemas, forms, etc. How do they age compared with software and data? I soon realized: data models age like parents. First they guide you, give good advice, and support you as best they can. But at some point data begin to rebel against their models. Sooner or later parents become uncool, disconnected from current trends, outdated or even embarrassing. Eventually you have to accept their quaint peculiarities and live your own life. That's how standards proliferate. Both ontologies and parents ultimately become weaker and need support. And in the end you have to let them go, sadly looking back. (The analogy could be extended further; for instance, data models might be frustrated by how actual data compares to their ideals, but that's another story.)

Tags: data modeling. No comments.

In memoriam Ingetraut Dahlberg

Posted in October. Comments.

The information scientist Ingetraut Dahlberg, known among other things as the founder of the International Society for Knowledge Organization (ISKO), died last week. My first reaction, after an appropriate moment of regret, was to enter her date of death in Wikipedia and Wikidata, but others had already taken care of that.
So I browsed her biography a bit and instead created Wikidata items for the McLuhan Institute for Digital Culture and Knowledge Organization, to which Dahlberg bequeathed her library during her lifetime, but which has already been closed again. The former director Kim Veltman still runs a website about the institute and, in his memoirs, mentions Ingetraut Dahlberg, Douglas Engelbart, Ted Nelson and Tim Berners-Lee in the same breath. That alone should be reason enough to engage with her work. To be honest, though, my relationship with Ingetraut Dahlberg was rather distant and ignorant. I knew of her importance in the "knowledge organization scene," to which I inevitably also belong, but I only met her once or twice at ISKO conferences and never had any interest in engaging with her more closely. As a "young wild one" she always seemed to me like a person whose time had long passed and whose contributions were hopelessly outdated. My engagement with Ted Nelson and Paul Otlet should have made clear to me that old ideas are by no means uninteresting or irrelevant to knowledge organization; somehow, though, I never found a point of connection to Dahlberg's work. Looking back, the trigger for my ignorance must lie in my first encounter with representatives of knowledge organization at an ISKO conference in the early s: I was then still a fresh student of library and information science with a computer science background, and everywhere I found exciting topics like Wikipedia, social tagging and ontologies, all of which in principle had something to do with knowledge organization. At ISKO, by contrast, I found none of that. The internet, in any case, still seemed very far away.
What I found alarming was not so much the lack of substantive engagement with the newest developments on the net but the formal strangeness: as I remember it, several of the scientists involved did not even have an e-mail address. People who occupied themselves with information and knowledge in the early s without e-mail were people I simply could not take seriously. So in my ignorance ISKO long remained a relic that, much like the International Federation for Information and Documentation (FID; why did the two never merge, actually?), had been tragically overtaken by technical development. And Ingetraut Dahlberg stood for me as an emblem of this whole failure of a discipline. By now I see it in a more differentiated way and am glad to be part of this small but fine community (and when ISKO finally switches to open access, I will also give up my publication boycott). In any case I did Ingetraut Dahlberg an injustice, and I hope for more nuanced engagements with her work.

Tags: obituary. Comments.

Wikidata documentation at the hackathon in Vienna

Posted in May. Comments.

At the Wikimedia hackathon, a couple of volunteers sat together to work on the help pages of Wikidata. As part of that Wikidata documentation sprint, Ziko and I took a look at the Wikidata glossary. We identified several shortcomings and made a list of rules for how the glossary should look. The result is the glossary guidelines. Where the old glossary partly replicated Wikidata:Introduction, the new version aims to allow quick lookup of concepts. We have already rewritten some entries of the glossary according to these guidelines, but several entries are outdated and still need to be improved. We changed the structure of the glossary into a sortable table so that it can be displayed as an alphabetical list in all languages.
The entries can still be translated with the translation system (it took some time to get familiar with this feature). We also created some missing help pages such as Help:Wikimedia and Help:Wikibase to explain general concepts with regard to Wikidata. Some of these concepts are already explained elsewhere, but Wikidata needs at least short introductions written especially for Wikidata users. Image taken by Andrew Lih (CC-BY-SA).

Tags: Wikidata, wmhack. Comments.

Introduction to Phabricator at the Wikimedia hackathon

Posted in May. Comment.

This weekend I am participating in the Wikimedia hackathon in Vienna. I mostly contribute to Wikidata-related events and practice the phrase "long time no see", but I also look into some introductory talks. In the late afternoon of day one I attended an introduction to the Phabricator project management tool, given by André Klapper. Phabricator was introduced at the Wikimedia Foundation about three years ago to replace and unify Bugzilla and several other management tools. Phabricator is much more than an issue tracker for software projects (although that is what Wikimedia developers mainly use it for). In summary there are tasks, projects, and teams. Tasks can be tagged, assigned, followed, discussed, and organized with milestones and workboards. The latter are kanban boards like those I know from Trello, Waffle, and GitHub project boards. Phabricator is open source, so you can self-host it and add your own user management without having to pay for each new user and feature (I am looking at you, Jira). Internally I would like to use Phabricator, but for fully open projects I don't see enough benefit compared to using GitHub. P.S.: The Wikimedia hackathon is also organized with Phabricator; there is even a task for blogging about the event.

Tags: Wikimedia, wmhack. Comment.

Some thoughts on IIIF and metadata
Posted in May. No comments.

Yesterday at the DINI AG KIM workshop, Martin Baumgartner and Stefanie Rühle gave an introduction to the International Image Interoperability Framework (IIIF), with a focus on metadata. I already knew that IIIF is a great technology for providing access to (especially large) images, but I had not had a detailed look yet. The main part of IIIF is its Image API, and I hope that all major media repositories (I am looking at you, Wikimedia Commons) will implement it. In addition the IIIF community has defined a "Presentation API", a "Search API", and an "Authentication API". I understand the need for such additional APIs within the IIIF community, but I doubt that solving the underlying problems with their own standards (instead of reusing existing standards) is the right way to go. Standards should "do one thing and do it well" (Unix philosophy). If images are the "one thing" of IIIF, then search and authentication are different matters. In the workshop we only looked at parts of the Presentation API to see where metadata (creator, dates, places, provenance, etc., and structural metadata such as lists and hierarchies) could be integrated into IIIF. Such metadata is already expressed in many other formats such as METS/MODS and TEI, so the question is not whether to use IIIF or other metadata standards, but how to connect IIIF with existing metadata standards. A quick look at the Presentation API surprised me: the metadata element is explicitly not intended for additional metadata but only "to be displayed to the user". The element contains an ordered list of key-value pairs that "might be used to convey the author of the work, information about its creation, a brief physical description, or ownership information, amongst other use cases". At the same time the standard emphasizes that "there are no semantics conveyed by this information". Hello, McFly? Without semantics conveyed it isn't information!
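The complaint about label/value pairs without semantics can be made concrete. In this sketch (the manifest data is invented), two metadata lists state the same fact under different display labels, and a consumer keying on labels alone cannot see that:

```python
# Two IIIF-style metadata lists (invented examples). Both state the
# same authorship fact, but under different display labels, and the
# spec assigns the labels no semantics.
m1 = {"metadata": [{"label": "author", "value": "allen smithee"}]}
m2 = {"metadata": [{"label": "creator", "value": "allen smithee"}]}

def values_for(manifest: dict, label: str) -> list:
    """Collect display values for a given label, the only key available."""
    return [e["value"] for e in manifest["metadata"] if e["label"] == label]
```

Asking `values_for(m2, "author")` comes back empty even though the authorship information is present, which is why connecting each field to a machine-readable property URI, as discussed below, matters.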
In particular, there is no such thing as structured data (e.g. a list of key-value pairs) without semantics. I think the design of the metadata field in IIIF is based on a common misconception about the nature of (meta)data, which I have already written about elsewhere (sorry, German article; some background is in my PhD and in work by Ballsun-Stanton). In a short discussion on Twitter, Rob Sanderson (Getty) pointed out that the data format of the IIIF Presentation API for describing intellectual works (called a manifest) is expressed in JSON-LD, so it can be extended with other RDF statements. For instance, the field "license" is already defined with dcterms:rights. Adding a field "author" for dcterms:creator only requires defining this field in the JSON-LD @context of a manifest. After some experimenting I found a possible way to connect the "meaningless" metadata field with JSON-LD fields:

    {
      "@context": [
        "http://iiif.io/api/presentation/ /context.json",
        {
          "author": "http://purl.org/dc/terms/creator",
          "bibo": "http://purl.org/ontology/bibo/"
        }
      ],
      "@id": "http://example.org/iiif/book /manifest",
      "@type": ["sc:Manifest", "bibo:Book"],
      "metadata": [
        {
          "label": "author",
          "property": "http://purl.org/dc/terms/creator",
          "value": "allen smithee"
        },
        {
          "label": "license",
          "property": "http://purl.org/dc/terms/license",
          "value": "cc-by . "
        }
      ],
      "license": "http://creativecommons.org/licenses/by/ . /",
      "author": {
        "@id": "http://www.wikidata.org/entity/q ",
        "label": "allen smithee"
      }
    }

This solution requires an additional element "property" in the IIIF specification to connect a metadata field with its meaning. IIIF applications could then enrich the display of metadata fields, for instance with links or additional translations. In JSON-LD some names such as "cc-by . " and "allen smithee" need to be given twice, but this is OK because plain names (in contrast to field names such as "author" and "license") don't have semantics.

Tags: IIIF, metadata. No comments.

Spare parts from the 3D printer
Posted in December. Comments.

Crash, bang, boom! The blinds are down. A small plastic part broke off; wouldn't that be a perfect use case for a 3D printer? For quite a while I have been toying with the idea of getting a 3D printer, but cannot really say what for. Producing spare parts with a 3D printer seems to me a promise rather like the intelligent refrigerator: great in theory but not really practical. It would probably cost me hours to find the right part on platforms like Thingiverse or to construct it myself with CAD. Without reliable 3D models, even the best 3D printer is useless, so the devices are only one part of the solution for producing spare parts. I very much doubt that manufacturers will offer 3D models of their products for download in the near future, unless they are open hardware. Apart from electronic hobby projects, though, the range of open hardware products for household use is still very limited. Nevertheless I think that open hardware, i.e. products whose blueprints are freely licensed and available at no cost, together with standardized components, is the only sensible basis for using 3D printers at home. For now I will tackle the problem of the broken blinds with analog technology and see what suitable materials and tools I have lying around. Maybe gaffer tape will help?

Tags: 3D printer, maker, open hardware. Comments.

Simplest project homepage on GitHub

Posted in September. Comment.

The simplest form of a project homepage on GitHub Pages consists of a start page that merely points to the repository. Locally, such a page can be created as follows:

1. Create the new, empty branch gh-pages:

    git checkout --orphan gh-pages
    git rm -rf .
2. Create the file index.md with the following content:

    ---
    ---
    # {{site.github.project_title}}
    [{{site.github.repository_url}}]({{site.github.repository_url}}#readme)

3. Add the file and push to GitHub:

    git add index.md
    git commit -m "homepage"
    git push origin gh-pages

Tags: github. Comment.

Abbreviated URIs with rdfns

Posted in September. Comments.

Working with RDF and URIs can be annoying because URIs such as "http://purl.org/dc/elements/ . /title" are long and difficult to remember and type. Most RDF serializations make use of namespace prefixes to abbreviate URIs. For instance, "dc" is frequently used to abbreviate "http://purl.org/dc/elements/ . /", so "http://purl.org/dc/elements/ . /title" can be written as the qualified name "dc:title". This simplifies working with URIs, but someone still has to remember the mappings between prefixes and namespaces. Luckily there is a registry of common mappings at prefix.cc. A few years ago I created the simple command line tool rdfns and a Perl library to look up URI namespace/prefix mappings. Meanwhile the program is also available as the Debian and Ubuntu package librdf-ns-perl. The newest version (not included in Debian yet) also supports reverse lookup to abbreviate a URI to a qualified name. Features of rdfns include:

Look up namespaces (as RDF/Turtle, RDF/XML, SPARQL…):

    $ rdfns foaf.ttl foaf.xmlns dbpedia.sparql foaf.json
    @prefix foaf: <http://xmlns.com/foaf/ . /> .
    xmlns:foaf="http://xmlns.com/foaf/ . /"
    PREFIX dbpedia: <…>
    "foaf": "http://xmlns.com/foaf/ . /"

Expand a qualified name:

    $ rdfns dc:title
    http://purl.org/dc/elements/ . /title

Look up a preferred prefix:

    $ rdfns http://www.w .org/ / /geo/wgs _pos#
    geo

Create a short qualified name for a URL:

    $ rdfns http://purl.org/dc/elements/ . /title
    dc:title

I use RDF-NS for all RDF processing to improve readability and to avoid typing long URIs.
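The core of what rdfns does, mapping between prefixes and namespace URIs, can be sketched in a few lines of Python. The two-entry prefix map below is hand-written for illustration, whereas rdfns pulls its mappings from prefix.cc:

```python
# Minimal sketch of prefix/namespace lookup in the style of rdfns.
# The prefix map is a tiny hand-written stand-in for prefix.cc.
PREFIXES = {
    "dc": "http://purl.org/dc/elements/1.1/",
    "foaf": "http://xmlns.com/foaf/0.1/",
}

def expand(qname: str) -> str:
    """Turn a qualified name like 'dc:title' into a full URI."""
    prefix, local = qname.split(":", 1)
    return PREFIXES[prefix] + local

def abbreviate(uri: str) -> str:
    """Reverse lookup: abbreviate a URI to a qualified name if possible."""
    # Try longer namespaces first so the most specific match wins.
    for prefix, ns in sorted(PREFIXES.items(), key=lambda kv: -len(kv[1])):
        if uri.startswith(ns):
            return prefix + ":" + uri[len(ns):]
    return uri  # no known namespace: return the URI unchanged
```

The longest-namespace-first ordering matters when one registered namespace is a prefix of another, which happens regularly among real vocabularies.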
For instance, Catmandu::RDF can be used to parse RDF into a very concise data structure:

    $ catmandu convert RDF --file rdfdata.ttl to YAML

Tags: perl, rdf. Comments.

The knowledge of the world

Posted in August. Comments.

Denny Vrandečić, one of the minds behind Semantic MediaWiki and Wikidata, has proposed a clever metric for measuring the success of the Wikimedia projects. The activity, and thus the goal, of the Wikimedia Foundation was expressed by Jimbo Wales like this: "Imagine a world in which every single person on the planet is given free access to the sum of all human knowledge. That's what we're doing." Wikiquote currently renders this well-known saying in German as: „Stell dir eine Welt vor, in der jeder Mensch auf der Erde freien Zugang zum gesamten menschlichen Wissen hat. Das ist, was wir machen." But how can one quantify the degree to which this goal has been reached? As I understand it (translated into my own words), Denny proposes the following: for every person in the world there is, in theory, a number between zero and one that indicates how much of the world's total knowledge ("the sum of all human knowledge") is accessible to that person through Wikimedia content. The value can be interpreted as the percentage of world knowledge accessible to that person; but since knowledge can hardly be measured and compared that simply, this interpretation is problematic. A value of one is utopian, since Wikipedia & co. do not contain all the knowledge of the world. For people without internet access, however, the value can be zero. Even with access to Wikipedia the number is different for every person, since not all content is available in all languages and much content is incomprehensible without prior knowledge, and thus practically inaccessible. These numbers for the individual accessibility of world knowledge can then be sorted and plotted in a diagram that lists all people from left (maximum knowledge) to right (no knowledge accessible).
As Denny illustrates with the following figure, the Wikimedia community can approach its goal in different ways: (1) expanding many articles in a complex specialty or a small language benefits only a few people; (2) instead, the most important articles and topics could be improved and extended in languages understood by many people; (3) finally, Wikimedia can also see to it that more people get access to Wikimedia content in the first place, for example through initiatives like Wikipedia Zero. I find Denny's proposed representation helpful for getting beyond the simple counting of Wikipedia articles. As he himself admits, however, there are numerous open questions, since the actual figures for the availability of knowledge cannot easily be determined. In my view a fundamental problem is that knowledge, and above all the total knowledge of humanity, cannot be quantified. It is also misleading to assume that the Wikimedia products collect or contain knowledge. Perhaps this error does not matter for the metric, but it does matter for what is actually supposed to be measured (the accessibility of the world's knowledge). If Wikimedia is interested in an unobstructed view of the question of how much of humanity's knowledge its offerings make accessible to people, it might help to ask some philosophers. Seriously. It may be (and with my abandoned philosophy degree I suspect as much) that in the end it merely becomes clear why the whole Wikimedia project cannot be realized; but even insights into the possible reasons for this failure would be helpful. Presumably, though, it is too frowned upon to seriously ask philosophers for advice, or the remaining philosophers prefer to occupy themselves with other questions.
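The sorted-curve idea can be sketched numerically. The access values below are invented; in reality, as noted above, they are precisely what cannot be measured easily:

```python
# Toy sketch of the proposed metric: one value in [0, 1] per person,
# giving the share of "the sum of all human knowledge" accessible to
# them via Wikimedia content. All numbers are invented.
access = [0.9, 0.7, 0.65, 0.3, 0.0, 0.0]

# Sorting in descending order gives the curve described above:
# most access on the left, people without access (value 0) on the right.
curve = sorted(access, reverse=True)

# One possible aggregate score is the mean, i.e. the normalized area
# under the curve.
mean_access = sum(curve) / len(curve)
```

Strategies (1) to (3) then correspond to different ways of raising this area: lifting a few already-high values slightly, lifting many middle values, or moving people off the zero line entirely.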
P.S.: Another discipline relevant to the question of how much of the world's knowledge Wikipedia & co. make accessible to humanity is pedagogy, but there I know my way around even less than in philosophy.

Tags: free content, wikipedia, knowledge organization. Comments.

In the Library with the Lead Pipe

An open access, peer reviewed journal.

Dismantling the Evaluation Framework

(Atharva Tulsi, Unsplash, https://unsplash.com/photos/rvpcatjhyua) By Alaina C. Bull, Margy MacMillan, and Alison J.
Head. In Brief: For almost years, instruction librarians have relied on variations of two models, the CRAAP test and SIFT, to teach students how to evaluate printed and web-based materials. Dramatic changes to the information ecosystem, however, present new challenges amid...

Service Ceiling: The High Cost of Professional Development for Academic Librarians, by Bridgette Comanda, Jaci Wilkinson, Faith Bradham, Amanda Koziura, and Maura Seale. In Brief: Academic librarian salaries are shrinking, but conferences and professional membership fees are increasing. How is this impacting our field and our colleagues? During early , we fielded a national survey of academic librarians about their professional development and service costs that...

Equitable but Not Diverse: Universal Design for Learning Is Not Enough, by Amanda Roth, Gayatri Singh (posthumous), and Dominique Turnbow. In Brief: Information literacy instruction is increasingly being delivered online, particularly through the use of learning objects. The development practice for creating learning objects often uses the Universal Design for Learning (UDL) framework to meet needs for inclusivity. However, missing from this framework is the lens...

Ethical Financial Stewardship: One Library's Examination of Vendors' Business Practices, by Katy DiVittorio and Lorelle Gianelli. In Brief: The evaluation of library collections rarely digs into the practices or other business ventures of the companies that create or sell library resources. As financial stewards, academic acquisition librarians are in a unique position to consider the business philosophy and practices of our vendors as they align...
We Need to Talk about How We Talk about Disability: A Critical Quasi-Systematic Review, by Amelia Gibson, Kristen Bowen, and Dana Hanson. In Brief: This quasi-systematic review uses a critical disability framework to assess definitions of disability, use of critical disability approaches, and hierarchies of credibility in LIS research between and . We present quantitative and qualitative findings about trends and gaps in the research, and discuss the...

Culturally Responsive Community Engagement Programming and the University Library: Lessons Learned from Half a Decade of VTDITC, by Craig E. Arthur, Dr. Freddy Paige, La' Portia Perkins, Jasmine Weiss, and Dr. Michael Williams. (Good Homie Signs' "Hip Hop @ VT" mural) In Brief: VTDITC: Hip Hop Studies at Virginia Tech is an award-winning series of experiential learning-focused, culturally responsive community engagement programs. It is deeply rooted in hip hop culture and...

Creating a Student-Centered Alternative to Research Guides: Developing the Infrastructure to Support Novice Learners. In Brief: Research and course guides typically feature long lists of resources without the contextual or instructional framework to direct novice researchers through the research process. An investigation of guide usage and user interactions at a large university in the southwestern U.S. revealed a need to reexamine the way research guides can be developed and...

Power and Status (and Lack Thereof) in Academe: Academic Freedom and Academic Librarians. In Brief: Academic librarians do not experience full academic freedom protections, despite the fact that they are expected to exercise independent judgment, be civically engaged, and practice applied scholarship. Academic freedom for academic librarians is not widely studied or well understood. To learn more, we conducted a survey which received over responses from academic...
The Library Commons: An Imagination and an Invocation, by Jennie Rose Halperin. In Brief: Commons theory can provide important interventions within neoliberal managerial information capitalism when applied to the library as an institution. The commons and its associated practices provide a model of abundance, sharing, and cooperation. Libraries can and should participate in alternative economic and management models to create an inclusive vision...

"Information Has Value": The Political Economy of Information Capitalism. In Brief: Information capitalism dominates the production and flow of information across the globe. It produces massive information institutions that are as harmful to everyday people as they are powerful. To this point, information literacy (IL) educators do not have a theory and pedagogy of information capitalism. This article appraises the current state of political...

On the Instability of Bitcoin Without the Block Reward

Miles Carlsten (carlsten@cs.princeton.edu), Harry Kalodner (kalodner@cs.princeton.edu), S. Matthew Weinberg (smweinberg@princeton.edu), Arvind Narayanan (arvindn@cs.princeton.edu)

Abstract. Bitcoin provides two incentives for miners: block rewards and transaction fees. The former accounts for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the block chain. We show that this is not the case. Our key insight is that with only transaction fees, the variance of the block reward is very high due to the exponentially distributed block arrival time, and it becomes attractive to fork a "wealthy" block to "steal" the rewards therein.
We show that this results in an equilibrium with undesirable properties for Bitcoin's security and performance, and even non-equilibria in some circumstances. We also revisit selfish mining and show that it can be made profitable for a miner with an arbitrarily low hash power share, and who is arbitrarily poorly connected within the network. Our results are derived from theoretical analysis and confirmed by a new Bitcoin mining simulator that may be of independent interest. We discuss the troubling implications of our results for Bitcoin's future security and draw lessons for the design of new cryptocurrencies.

Introduction

The security of Bitcoin's consensus protocol relies on miners behaving correctly. They are incentivized to do so via mining revenues under the assumption that they are rational entities. Any deviant miner behavior that outperforms the default is thus a serious threat to the security of Bitcoin. Miners receive two types of revenue: block rewards and transaction fees. The former account for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle (specifically, they halve every four years). There has been an unexamined belief that in terms of the security of the block chain (including incentives of the mining game), it is immaterial whether miners receive (say) some number of bitcoins in each block as a block reward or the same number in expectation as transaction fees.

[Footnote: This is an extended version of our paper that appeared at ACM CCS 2016. Some of the figures have been updated with more accurate versions due to improvements to our simulator.]

[Figure: One possible state of the block chain and two possible actions a miner could take.]

Illustrative example (see the figure above). Imagine a population of rational, self-interested miners. Consider a block chain with blocks of exponentially distributed rewards, as we expect when the fixed block reward runs out.
A miner has numerous options to consider when mining, but let's focus on just two possibilities. She could extend the longest chain (option one), claiming the currently available fees and leaving little for the next miner (at least until more transactions arrive). Alternatively, she could fork it (option two), re-claiming a larger reward while leaving some fees unclaimed. The Bitcoin protocol dictates option one, but a quick reasoning suggests that option two is better. To reason about this correctly, we must consider which strategies the other miners are using. For instance, if all other miners follow the heuristic of mining on the block they heard about first in the case of a 1-block fork (and if there is no latency in the network), then forking is ineffective, and option one is clearly superior. On the other hand, since other miners are rational, perhaps they will choose to build on the fork instead of the older block, in which case option two would yield more rewards. Examples like these reveal novel incentive issues that simply don't arise when block rewards are fixed. The goal of this paper is to understand the potential impact on Bitcoin's stability by investigating the mining game in the regime where the block reward has dwindled to a negligible amount, and transaction fees dominate mining rewards. We find new and surprising incentive issues in a transaction-fee regime, even assuming that transactions (and associated fees) arrive at a steady rate. To be clear: the incentive issues we uncover arise not because transaction fees may arrive erratically, but because the time-varying nature of transaction fees allows for a richer set of strategic deviations that don't arise in the block-reward model. At a high level, there is an analogy with pool hopping [ ]. With certain mining pool reward schemes, the miner's expected reward for participation varies over time, depending on how many shares have been contributed since the pool found its last block.
The concern is that miners would respond by "hopping" in real time to the pool that maximizes their expected rewards. For another illustration of this theme, consider a future where there are multiple cryptocurrencies with time-varying rewards which can be mined by the same hardware. Perhaps this will give rise to coin-hopping, i.e., miners hopping to the cryptocurrency with the largest transaction fee pool.

Contribution 1: a mining strategy simulator. While we establish a number of theoretical results in later sections, the variety of possible parameters and assumptions makes it completely infeasible to pose a perfectly accurate game-theoretic model of Bitcoin that is also tractable. To fill the gaps and to confirm our theoretical results, we've built a mining strategy simulator. Theoretical results in simple yet principled models provide good intuition to guide practice, and simulations of more complex scenarios confirm that these results have applicability to more realistic models where mathematical proofs are intractable. Miners in our simulation learn over time which strategies are successful, using no-regret learning algorithms that iteratively update a probability distribution over strategies. Our simulator is versatile and allows modeling different numbers of miners, hash power distributions, network latencies, and reward schemes. We show how it allows researchers to quickly prototype and study new settings within this parameter space. The simulator does have limitations: it cannot model mining pools or a non-constant arrival rate of transactions. We have made the simulator open source. In addition to the versatility of settings, our simulator allows exploring a large space of mining strategies, defined by the miner's responses to three questions: which block to extend, how much of the outstanding transactions to include in the block, and when to publish found blocks.
We define a formal language to compactly express any strategy in this space.

Contribution 2: undercutting attacks. The focus of this paper is on analyzing deviant mining strategies in the transaction-fee regime that can harm Bitcoin's security. We begin with the observation that if there is a 1-block fork, it is more profitable for the next miner to break the tie by extending the block that leaves the most available transaction fees rather than the oldest-seen block. We call this strategy PettyCompliant. Once any non-zero fraction of miners is PettyCompliant, it enables various strategies that are more aggressive and harmful to Bitcoin consensus. We call this the undercutting attack, where miners will actively fork the head of the chain and leave transactions unclaimed in the hope of incentivizing PettyCompliant miners to build on their block. In some scenarios, our simulation reveals a non-equilibrium with increasingly aggressive undercutting. But with an expanded strategy space, and suitable assumptions, we are able to prove that an equilibrium exists. However, it is one where miners include only a fraction of available transactions into their blocks. [Footnote: https://github.com/citp/mining_simulator] This results in a backlog of transactions whose size grows indefinitely with time. We confirm this result using simulation. Accurately predicting the steady-state mining behavior requires modeling a vast number of variables such as miners' cost structure, and is not the goal of our work. Instead, our results can be seen as an informal "lower bound" on the departures from compliant behavior that are likely in a transaction-fee regime. We can realistically predict that PettyCompliant miners will arise, and that the existence of such miners opens the field for various more aggressive strategies.

Contribution 3: revisiting selfish mining.
We revisit the selfish mining strategy of Eyal and Sirer [ ] and show that, contrary to intuition, it performs even better in the transaction-fee regime than in the block-reward regime. Next, we propose a more sophisticated selfish mining strategy that accounts for the non-uniformity of rewards and outperforms both default mining and "classic" selfish mining. Worse, unlike classic selfish mining, this strategy works for miners with arbitrarily low hash power and regardless of their connectedness in the Bitcoin network. Moreover, the attack is profitable as soon as it is deployed, whereas classic selfish mining only becomes profitable after a two-week difficulty adjustment period, arguably giving the community a crucial window of time to detect and respond to such an attack [ ]. We validate these results via both theory and simulation.

Impact on Bitcoin security. If any of the deviant mining strategies we explore were to be deployed, the impact on Bitcoin's security would be serious. At best, the block chain will have a significant fraction of stale or orphaned blocks due to constant forks, making 51% attacks much easier and increasing the transaction confirmation time. At worst, consensus will break down due to block withholding or increasingly aggressive undercutting. This suggests a fundamental rethinking of the role of block rewards in cryptocurrency design. Nakamoto appears to have viewed the block reward as a necessary but temporary evil to achieve an initial allocation of bitcoins in the absence of a central authority, with the transaction fee regime being the ideal, inflation-free steady state of the system. But our work shows that incentivizing compliant miner behavior in the transaction fee regime is a significantly more daunting task than in the block reward regime. Perhaps instead, designers of new cryptocurrencies must resign themselves to the inevitability of monetary inflation and make the block reward permanent.
Transaction fees would still exist, but merely as an incentive for miners to include transactions in their blocks.

Related work

Several recent works analyze incentives in Bitcoin mining. Some examples include [ ] and [ ], which analyze how strategic mining pools may attack competing pools in various ways, and [ ], which analyzes how strategic Ethereum miners can trick others into wasting their computational power verifying the validity of complex scripts. Understanding miner incentives in the Bitcoin system is important: there is empirical evidence that miners and mining pools are willing to attack others in order to maximize their own profits (e.g., launching DDoS attacks against other pools) [ ]. Eyal and Sirer develop the selfish mining attack [ ], a deviant mining strategy that enables miners to get more than their fair share of rewards. We build on their results below. Other works, notably Sapirshtein et al. [ ], have analyzed selfish mining in more detail using Markov decision processes (MDPs). In an MDP, a player moves through a discrete state space and tries to maximize reward (the state-transition function and reward function are probabilistic). This makes it a good fit for modeling Bitcoin mining. In the fixed-reward model, states are discrete. In the transaction-fees model, states are continuous, so we cannot apply MDP machinery directly. Still, our analysis takes an MDP-like approach. In more recent work, Kiayias et al. [ ] perform a theoretical analysis of various selfish mining strategies in the fixed-reward model, and prove that when miners are sufficiently small, the default mining behavior is an equilibrium. There is some work on understanding the market for transaction fees and its relation to the block size (i.e., what fees will users have to pay in order for transactions to be included in a block?) [ , , , ].
Our work avoids this discussion; we show that undesirable behavior emerges even if the market reaches an equilibrium where transaction fees are non-negligible, and arrive steadily and reliably. Interestingly, Möser and Böhme reach the same conclusion as us (that monetary inflation is a preferable mechanism to transaction fees) through very different methods [ ]. On the simulation side, numerous prior works have developed simulators for some aspect of Bitcoin. Some simulators are aimed at aspects of Bitcoin aside from strategic mining, such as privacy [ ] or the peer-to-peer network [ ]. Those developed in [ ] and [ ] also focus on simulating deviant mining strategies, but our understanding is that these simulators are tailor-made for the specific deviant strategies they wish to test. In comparison, our simulator allows for easy implementation of a broad range of strategies in various environments. Indeed, the versatility of our simulator is crucial for getting intuition for every result in this paper. We have made it open-source and hope it will be a useful tool for future research on strategic miner behavior.

Model and strategies

In this section, we cover the model of Bitcoin that we investigate. We will use this model to quickly illustrate how the switch to transaction-fee dominated rewards may lead to interesting and potentially harmful effects for Bitcoin. We also introduce a formal language for describing Bitcoin strategies that we will use throughout the paper.

Model of the system

Briefly, let us describe the theme of our model before getting into specific details. The goal of this work is not to accurately predict exactly what mining behavior will arise in practice, but instead to uncover incentive issues that arise solely due to the time-varying nature of transaction fees versus block rewards. To this end, our model is intentionally simple, because we want to isolate the effects of time-varying versus fixed rewards.
As an example, we will assume that transactions (and their associated fees) arrive at a constant and continuous rate. We make this assumption not because we necessarily predict it will hold in practice, but because without it we can't guarantee that we've isolated time-varying transaction fees as the cause for any incentive issues we uncover. Put another way, our results are only made stronger by simplifying assumptions, because we are claiming that weird and undesirable consequences arise even if one is willing to grant simplifying assumptions. Getting to details, the model of Bitcoin that we analyze is after the block reward has dropped to zero. That is, transaction fees are the only source of revenue for miners, and we model available transaction fees as arriving to the Bitcoin system at a constant rate. Specifically, we assume that for any time interval I of length t, the total sum of transaction fees for transactions announced during I is t (the choice of t instead of ct for some constant c is just a normalization). This is different from Bitcoin as it is today, with a large block reward compared to the small transaction fees, but this scenario is consistent with the vision of the long-term steady-state behaviour of Bitcoin after all bitcoins have eventually been minted. We also assume that the difficulty is set so that a hash puzzle is solved by someone in the network every one time unit in expectation (this is again just a normalization). Additionally, for simplicity, in our theoretical results and reported simulations we model the network as having no latency (unless otherwise stated). Once a miner publishes a block, all other miners immediately gain knowledge of it. Similarly, once a transaction is announced, all miners immediately learn of its existence. However, our simulator is capable of simulating latency of both types, and we do not see any substantive change in our results as latency changes.
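Under these normalizations, the fee total available to each block is just the exponentially distributed inter-block time, which is the source of the high reward variance noted in the abstract. A minimal sketch of this (our own illustration, not the paper's simulator):

```python
import random

def simulate_block_fees(n_blocks, seed=0):
    """Per-block fee totals under the paper's normalization: fees accrue
    at unit rate and the network finds a block every 1 time unit in
    expectation, so the fees available to each block equal the
    exponentially distributed inter-block arrival time. This sketch
    ignores strategic behavior; every block claims everything."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0) for _ in range(n_blocks)]

fees = simulate_block_fees(100_000)
mean = sum(fees) / len(fees)
std = (sum((f - mean) ** 2 for f in fees) / len(fees)) ** 0.5
# For an exponential distribution the standard deviation equals the
# mean: some blocks are far "wealthier" than others, which is exactly
# what makes forking a wealthy block attractive.
```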
Finally, we assume that when there are r transaction fees available, the miner can choose to include any real-valued amount of transaction fees between 0 and r in their block. That is, transactions are fine-grained enough that a miner can selectively choose a set of transactions whose fees are very close to whatever real-valued target they have in mind. We believe this is a reasonable approximation due to the large number of transactions per block. We also assume that miners always have space to include all available transactions. If the block size is not large enough to meet demand for transactions, we believe the qualitative content of all our results continues to hold, but the quantitative impact is mitigated. This belief is supported by the following data, taken from the most recent blocks (roughly one week's worth) as of July 2016: of these blocks, are full. Of the full blocks, the total sum of transaction fees ranges from . BTC to . BTC; the mean is . BTC and the standard deviation is . BTC, more than half the mean. It's unclear how to extrapolate these data to the future, but it is clear that there will indeed be fluctuation in the available fees that fit in a block. So if the block size is not large enough to meet demand for transactions, even though the available fees immediately after a block is found will not be zero (as in our analysis), they may be significantly lower than (say) ten minutes later. So even though our exact analysis will not apply in this setting, the intuition does carry over.

What could go wrong? The mining gap

Without a block reward, immediately after a block is found there is zero expected reward for mining but nonzero electricity cost, making it unprofitable for any miner to mine.

[Figure: Illustration of mining gaps. Miners will only mine when the instantaneous expected reward exceeds the instantaneous cost.]

In order to provide insight as to how time-varying rewards could be harmful for Bitcoin, let's walk through an example.
Imagine that we are in the model previously described, that all miners are using the default compliant strategy (mine on top of the longest chain, include all available transactions, publish immediately), but also that miners have some cost in electricity to run their mining rigs (i.e., running one rig for t units of time costs pt bitcoin worth of electricity). Now, immediately after a block is found, there will be no more transactions in the network to be claimed by a miner making the next block. This means that for the instant following the discovery of a new block, there is actually zero expected reward for mining, but a non-zero electricity cost for doing so! The figure above shows how to extend this reasoning to the time period beyond. Essentially, every instant your rig is running, you claim some expected reward, which increases depending on the available transaction fees. But every instant your rig is running, you also have to pay a constant amount for electricity. So the expected reward for running your rig won't exceed the cost of electricity until some minimum amount of transaction fees is available to include. If a is the fraction of the total (effective) hash power that a single rig generates, then a miner must wait t = p/a time steps after a block is found before mining becomes profitable again. In Appendix A, we discuss in more detail the effects of such a mining gap, and find that it leads to miners mining for a smaller and smaller fraction of the time between the arrival of blocks (with the difficulty dropping to compensate). Clearly, this would have a negative impact on Bitcoin security, as the effective hash power in the network would drop, and it would become easier for a malicious miner to fork. Of course, turning a rig on and off every ten minutes may be practically infeasible.
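The t = p/a threshold can be captured in a couple of lines. This is our own illustration; `fraction_profitable` is a deliberately naive quantity that ignores the difficulty adjustment analyzed in Appendix A:

```python
def mining_gap(p, a):
    """Time after a block is found before mining becomes profitable
    again. Fees accrue at unit rate, so a rig with hash-power share a
    earns expected reward a*t per instant at time t after the block,
    while electricity costs p per instant; the curves cross at t = p/a."""
    return p / a

def fraction_profitable(p, a, block_interval=1.0):
    """Naive fraction of an average block interval during which mining
    is profitable for this rig (no difficulty adjustment, deterministic
    interval). Zero means the gap swallows the whole interval."""
    return max(0.0, 1.0 - mining_gap(p, a) / block_interval)
```

For example, a rig with a 25% hash share and per-instant cost 0.05 has a gap of 0.2 time units, so it profitably mines only about 80% of an average block interval.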
Nevertheless, this analysis illustrates that strategic miners might look for ways to deviate when the default protocol would have them wasting electricity to mine a near-valueless block.

Formal language for mining strategies

In the rest of this paper, we focus on mining strategies that always mine within the same cryptocurrency, but may deviate from the default protocol in choosing how to build blocks and what to do with them once they're found. We consider a variety of known and novel Bitcoin mining strategies. All of these can be formalized into the same general structure. At each instant, every miner makes several distinct decisions:

• Which block to extend.
• How much of the available transactions (and associated fees) to include in the block they are solving.
• For each unpublished block, whether or not to publish.

The first decision is which block to extend. As an example, the default compliant miner chooses to mine on the longest chain that they are aware of, and in the case of multiple blocks that are tied for the longest chain, they will favor mining on the first of these blocks that they became aware of. This decision forms the basis for how a mining strategy will determine which side of a fork it wants to support, or, alternatively, if the miner wants to create a new fork. The next decision is how much of the available transaction fees to claim. Again, as an example, the default compliant miner will include all of the unclaimed transaction fees they are aware of in their block. The final decision is when to publish blocks. When a miner mines a block, only they are aware of its existence. At each moment, miners can choose whether or not to alert the other miners of the block that they have found. This allows for mining strategies where miners intentionally choose not to reveal their blocks (such as selfish mining [ ]).
We define the following concepts in order to more rigorously describe the mining strategies. First, for a set of transactions T, we will abuse notation and use T to also denote the total transaction fees included for transactions in T. For a block B, we will use tx(B) to denote the set of transactions included in block B, and rem(B) to denote the remaining transactions after block B. That is, rem(B) contains all announced transactions that are not included in B or any of its predecessors (thus, this is a set that varies over time). We will also use height(B) to denote the height of a block (i.e., the height of a chain that ends at block B), denoting by h the height of the current longest chain that has been announced, and owner(B) to denote the miner that produced block B. When a miner M is deciding which block of height i to extend in the case of a tie, all strategies considered in this paper first select a block that they themselves mined (owner(B) = M). Also, all strategies in this paper avoid mining multiple blocks at the same height, so if a block with owner(B) = M at height i exists, it is unique. If M did not produce any blocks at height i, the default client would then select the first block that M became aware of. So we define oldest_i^M to be the unique block of height i produced by miner M if it exists, or the first block of height i that M became aware of. Note that if i = h, then this is the block M would extend using the default strategy. We also define most_i to be the block of height i that maximizes the remaining transaction fees (formally: argmax_{B : height(B) = i} rem(B)). Note that while rem(B) changes over time, the block most_i can only change if a new block of height i is published. Finally, we denote by most_i^M the block of height i produced by M (if it exists), or the block of height i that maximizes the remaining transaction fees otherwise.
So for instance, if a chain of some height has been announced, but some miner is privately storing a longer chain, we would still define h to be the height of the announced chain. We can now formally define the mining strategies we consider. We model strategies as time-driven (rather than event-driven): in every infinitesimally small time step, the miner must decide which block to extend (denoted by mining(M)), what set of transactions to include, and, for each of their own unpublished blocks, whether to publish. Note that by publishing a block B, we mean ensuring that every node in the network is aware of B and all its predecessors; we aren't concerned with exactly what physical measures M takes to ensure this. In this language, the default mining strategy would be formalized as follows:

DefaultCompliant: the default Bitcoin mining strategy, including all available transactions, mining on the end of the longest chain, choosing the older block in a tie, and publishing all blocks.
Which block: mining(M) = oldest_h^M.
How much: include rem(mining(M)).
Publish(B)?: yes.

Mining strategy simulator

In order to more clearly analyze what the game-theoretic landscape will look like once the Bitcoin mining incentive becomes transaction-fee based instead of block-reward based, we have developed a versatile Bitcoin mining strategy simulator. Here we discuss the strategies our simulator is capable of implementing, the process by which our simulator can explore a strategy space, the configurable parameters of the simulator, and its limitations.

Strategies, rounds, and games

We first describe the basic units of our simulator and how they interact with each other before getting into details.

Strategies. The simulator is designed in such a way as to be able to run any strategy that fits the strategy space detailed above. That is, every strategy is fully defined by a function that outputs a block to extend, a set of transactions to include, and a rule to decide whether to publish any found blocks.
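The "which block" half of these definitions is easy to make concrete. The following Python sketch (our own illustration with hypothetical field names, not code from the paper's simulator) contrasts the DefaultCompliant tie-break with the PettyCompliant tie-break that enables undercutting:

```python
from dataclasses import dataclass

@dataclass
class Block:
    height: int
    owner: int
    seen_order: int  # order in which this miner first heard of the block
    rem: float       # rem(B): fees still unclaimed after this block

def default_compliant_tiebreak(miner, tips):
    """DefaultCompliant at a tie: prefer a block you mined yourself
    (oldest_h^M is your own block when one exists), else the first
    block you became aware of."""
    own = [b for b in tips if b.owner == miner]
    return own[0] if own else min(tips, key=lambda b: b.seen_order)

def petty_compliant_tiebreak(miner, tips):
    """PettyCompliant at a tie: prefer your own block, else the block
    leaving the most unclaimed fees (most_h)."""
    own = [b for b in tips if b.owner == miner]
    return own[0] if own else max(tips, key=lambda b: b.rem)

# Two competing blocks at the same height: block a was seen first but
# claimed everything; block b undercut it and left 1.5 in fees.
a = Block(height=10, owner=1, seen_order=0, rem=0.0)
b = Block(height=10, owner=2, seen_order=1, rem=1.5)
```

A third miner breaks the tie differently under the two rules: DefaultCompliant extends a, while PettyCompliant extends b, which is exactly the behavior an undercutter counts on.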
All of these functions may take as input any public information, including all published blocks and all announced transactions.

Rounds. Our simulator is time-driven, as opposed to event-driven. We made this decision because we want it to be easy to add new strategies to the simulator. In an event-driven simulation, new strategies would be limited by the current list of possible events. However, in our time-based simulations, any strategy that details how to make the decisions above at any moment can be easily implemented. A round is the smallest unit of time in our simulator (currently, a fixed fraction of the time it takes for the entire network to find a block). During a round, every miner first takes as input the block chain (that they're aware of) and all transactions (that they're aware of) and decides which block to (try to) extend, and which transactions to include. Then there is a random check (as a function of that miner's hash rate and the current network difficulty) to determine whether the miner successfully found a block or not. Then, the miner decides which unpublished blocks to publish. The duration of a round is a configurable parameter, which we discuss shortly.

[Footnote: While this is the original motivation for developing our simulator, it is indeed capable of simulating a non-zero block reward as well; more on that below.]

Games. A game involves setting parameters such as choosing a number of miners, assigning their strategies and hash power, etc. Once these parameters are set, a game runs for several rounds, and keeps track of the rewards earned by each miner.

Simulations. A simulation might consist of a single game (to see how certain strategies fare against each other), or several games with parameter adjustments in between. For example, in order to model miners who learn over time, we have them play several games and decide which strategies to use in future games based on results of past games.
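The per-round Bernoulli check can be sketched as follows. This is a toy stand-in with a hypothetical `Miner` class, not the simulator's actual API; a real round would also involve choosing a parent block, selecting fees, and making publication decisions:

```python
import random

class Miner:
    """Toy stand-in for a simulator miner (hypothetical API)."""
    def __init__(self, alpha):
        self.alpha = alpha        # hash-power share; shares sum to 1
        self.blocks_found = 0

def run_round(miners, rounds_per_block, rng):
    """One time-driven round: each miner gets a Bernoulli success check
    with probability alpha / rounds_per_block, so the whole network
    finds one block every rounds_per_block rounds in expectation."""
    for m in miners:
        if rng.random() < m.alpha / rounds_per_block:
            m.blocks_found += 1

rng = random.Random(1)
miners = [Miner(0.6), Miner(0.4)]
for _ in range(100_000):   # about 1000 expected blocks network-wide
    run_round(miners, 100, rng)
```

Over many rounds each miner's share of found blocks converges to its hash-power share, which is the property the round duration parameter is normalizing.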
In principle, any parameters can be adjusted between games.

Strategy Exploration

For several of our simulations we want miners to gravitate toward the strategies that are doing the best, to simulate how strategic miners might adapt over time. To accomplish this, we run several games, with hundreds of miners in each game. Miners choose strategies with probability proportional to how successful those strategies have historically been. Formally, miners in our simulator perform no-regret learning, a standard notion of learning that is popular in game-theoretic contexts. This is because in any repeated game where each player separately performs no-regret learning, the repeated play converges to a coarse correlated equilibrium [ , ]. Moreover, numerous simple no-regret learning algorithms are known that converge quickly (i.e., in a number of rounds sublinear in the number of possible strategies) [ , , , ]. If a miner has no regret, their total reward across all of time is at least as good as if they had instead picked "the best" strategy and used it in every game. Similarly, a coarse correlated equilibrium is a joint distribution over strategy profiles such that every miner gets more expected payoff by following the equilibrium than by deviating to any fixed strategy.

These learning algorithms all maintain a weight for every strategy and adjust the weights from game to game depending on how well each strategy is doing. Our simulator offers two alternative update rules. The first is an exact implementation of the EXP3 algorithm for learning with adversarial bandits [ , ]. This update rule provides a theoretical guarantee on the regret of each miner as a function of the number of games played and a tunable parameter ε in the update rule. The second is based on the multiplicative weights update rule (MWU) for learning with experts [ , ].
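The kind of weight maintenance described above can be sketched with a textbook EXP3-style step. This is a generic illustration, not necessarily the simulator's exact implementation.

```python
import math

def exp3_step(weights, chosen, reward, eps):
    """One EXP3-style bandit update.
    weights: one weight per strategy; chosen: index of the strategy played
    this game; reward: observed payoff, assumed scaled into [0, 1]."""
    n = len(weights)
    total = sum(weights)
    # mix the weight distribution with uniform exploration
    probs = [(1 - eps) * w / total + eps / n for w in weights]
    # importance-weighted reward estimate, so rarely played strategies
    # are not unfairly penalized for being unobserved
    estimate = reward / probs[chosen]
    weights[chosen] *= math.exp(eps * estimate / n)
    return probs
```

Strategies that keep earning high rewards see their weights, and hence their selection probabilities, grow from game to game.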
We find that MWU is computationally expensive, so we use a less expensive proxy instead. This means there is no theoretical guarantee on the regret bounds, but in practice this update rule is significantly faster and does converge quickly to a coarse correlated equilibrium. For a further discussion of these update rules, see Appendix B. All of the figures included in this paper were generated from simulations using EXP3, so they come with a theoretical guarantee that all miners in the simulation have no regret.

Versatility

Our simulator has many configurable parameters:

Strategies. To reiterate: every miner in our simulator is assigned a time-driven strategy that chooses which block to extend, how many transactions to include, and whether to publish any found blocks. Any strategy that fits this framework can be implemented in the simulator. To design a new strategy, a user creates a new function that takes as input the current public state of the Bitcoin network (the block chain and available transaction fees) and the miner who is using the strategy. The function uses this information to determine which block to extend and how many of the transaction fees to include in the next block. Finally, the user adds a rule specifying how the strategy chooses whether or not to publish any found blocks.

Hash power. Every miner m is assigned a hash power α_m. Any number of miners, and any α_m such that Σ_m α_m = 1, can be supported.

Round duration. The size of a round can be set so that the network finds a block every r rounds in expectation, for any r > 0.

Rewards. At the end of each game, miners are rewarded based on their blocks within the longest chain. The reward they receive is b per block (the fixed block reward) plus any transaction fees their blocks claim; transaction fees accrue in the system every round at a fixed rate. Both of these parameters are configurable.

Costs. There is a configurable parameter c_m for every miner m that denotes the cost (i.e.
in electricity) for miner m to mine. For our simulations we always set c_m = 0, because we are not examining this aspect of mining.

Latency. If desired, latency can be introduced into the simulation. There is a configurable parameter λ such that when a block is published, it takes λ rounds before other miners are aware of the block's existence. Latency in hearing about transactions can also be implemented; it is currently easiest to do this by modifying strategies to randomly "pretend" they have not heard of some transactions.

Learning parameter. Our learning rules are parameterized by an ε ∈ [0, 1/2]. For EXP3, it is customary to set ε ≈ √(n ln n / T), where n is the number of strategies considered and T is the number of games played. For MWU (and our "MWU-like" update rule), it is customary to set ε ≈ √(ln n / T). A larger ε causes beliefs (about the strength of strategies) to be updated rapidly in response to recent games; a smaller ε favors waiting for more evidence before updating beliefs.

Atomic versus non-atomic miners. We say miners are atomic if there are finitely many of them and each has a finite fraction of the total hash power. Such miners may have an interest in sacrificing immediate gains related to a block mined now in order to achieve greater gains for blocks mined in the future. Non-atomic miners are infinitesimally small, but there are infinitely many of them. When such a miner finds a block, they are only interested in maximizing their gains related to that block (because they will never find another block in the future). Obviously our simulation cannot create infinitely many miners, but we can functionally simulate them.
To simulate that an α fraction of non-atomic miners is using strategy s, we instead create a single atomic miner with an α fraction of the hash power and ensure that all of this miner's strategic decisions take as input only the public information available to the entire network, treating "their own" blocks no differently than generic blocks. Of course, the real world is atomic. But it is extremely helpful to compare simulation results between the two models in order to isolate behavior that arises only when miners are atomic (selfish mining, for example), as such behavior intuitively "gets worse" with big miners.

Implementation and performance. The simulator is written in C++ and has a running time proportional to the product of the number of games, the number of rounds per game, and the number of miners. We find that for accurate results, each game needs to include enough rounds that, for every strategy, the miners using it together find tens of blocks. We also find that it takes on the order of a few hundred thousand games for our learning algorithms to converge to an equilibrium. On a commodity laptop with an Intel Core i-series processor, running a simulation at this scale (many games and miners, with millions of rounds in total) takes on the order of seconds.

Limitations. A current limitation of the simulator is that transaction fees can only be modeled as arriving at a uniform rate in time. Additionally, the simulator cannot model mining-pool dynamics beyond treating a pool as a single miner with hash power equal to that of the pool; this does not allow for consideration of attacks such as those presented in [ ].

New Deviant Mining Behavior

In this section we examine deviant mining behavior that might unfold in the transaction-fee model but does not arise in the block-reward model.
Specifically, we argue that:
• It is reasonable to expect self-interested miners to become PettyCompliant instead of DefaultCompliant once transaction fees take over.
• The existence of PettyCompliant miners in the network opens the field for a range of aggressive strategies with detrimental effects on Bitcoin's stability.

Phase One: Petty Compliant

Observation: the default client behavior of mining on the oldest block is not optimal. Miners can do strictly better by mining on the block that leaves the most transaction fees unclaimed.

Consider the case where there is a fork: two blocks are tied for the longest chain. The traditional behavior, the one programmed into the default client, has the miner select the older of the two potential heads. However, there is really no cost for that miner to tie-break arbitrarily instead; this tie-breaking rule is not a self-enforcing part of the protocol, it is purely client-side behavior. In particular, if the miner is planning to include all unclaimed transactions in their block, it is in that miner's interest to mine not on the oldest block but on the block that leaves the most remaining fees. A strategic miner would therefore want to mine on most^m_h instead of oldest^m_h. We call this strategy petty compliant, as it still mines on a longest chain, includes all available transactions, and publishes all blocks it finds (like a default compliant miner); it merely tie-breaks between longest chains in a "petty" way to achieve greater revenue.

PettyCompliant: mine like a default compliant miner, except when choosing between two sides of a fork: mine on the block that has claimed the fewest transaction fees.
Which block: mining(m) = most^m_h.
How much: include rem(mining(m)).
Publish(b)?: yes.

If forks ever occur, then PettyCompliant strictly outperforms DefaultCompliant.
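The two tie-breaking rules differ in one line. A sketch, with our own illustrative Tip type (fees_claimed refers to the fees claimed by the head block of each competing chain, so the head that claimed the fewest fees leaves the most for us):

```python
from dataclasses import dataclass

@dataclass
class Tip:
    height: int
    time_found: int
    fees_claimed: float  # fees claimed by the head block of this chain

def oldest_tip(tips):
    """defaultCompliant: among tied longest chains, extend the oldest head."""
    best = max(t.height for t in tips)
    return min((t for t in tips if t.height == best), key=lambda t: t.time_found)

def petty_tip(tips):
    """pettyCompliant: among tied longest chains, extend the head that
    claimed the fewest fees, leaving the most for our own block."""
    best = max(t.height for t in tips)
    return min((t for t in tips if t.height == best), key=lambda t: t.fees_claimed)
```

Whenever the two rules disagree, petty_tip by definition picks the head leaving at least as many unclaimed fees, which is why the strategy can only gain from the switch.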
The two are identical except when the miner must choose between two blocks of equal height to mine on; in that case PettyCompliant always mines in the location that maximizes its rewards, while DefaultCompliant might not. In our mining strategy simulator, we compare DefaultCompliant to PettyCompliant and do in fact see that PettyCompliant outperforms DefaultCompliant, regardless of the breakdown of other miners, in any simulation with enough latency (in learning of both new blocks and transactions) that forks naturally occur.

Note that the existence of petty compliant miners is not necessarily harmful by itself: so what if miners tie-break differently in the rare event that forks naturally occur? The problem arises when other strategic miners notice the existence of petty compliant miners and choose to exploit them with more aggressive tactics. We will see examples of this in the remainder of this section. The existence of PettyCompliant miners impacts other deviant strategies in surprising ways, too: for example, a selfish miner (discussed below) performs better against PettyCompliant miners than against DefaultCompliant ones.

Phase Two: Lazy Undercutting

Observation: once some fraction of miners is petty compliant, other miners may profit by intentionally forking the chain.

The key insight behind more aggressive strategies is that a deviant miner can incentivize petty compliant miners to extend their block, even if an older block of the same height was discovered several minutes earlier: for instance, by extending that older block's direct predecessor and including slightly fewer transaction fees. If the currently unclaimed transaction fees are substantially smaller than those included by the current most_h, then it may be in a miner's interest to try to replace most_h with a new block of height h instead of continuing on top of it. We call this undercutting.
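The undercutting condition can be written down directly. In this sketch (argument names are our own), gap is the most a new height-h block can claim while still leaving more unclaimed fees than most_h does, so that petty compliant miners prefer to extend it:

```python
def should_undercut(rem_most_h, rem_most_h_minus_1):
    """Return True when forking out most_h pays more than extending it.
    rem_most_h:         unclaimed fees on top of the current head, most_h
    rem_most_h_minus_1: unclaimed fees on top of its predecessor"""
    # the replacement block must itself leave at least rem_most_h unclaimed,
    # so it can claim at most the difference
    gap = rem_most_h_minus_1 - rem_most_h
    return gap > rem_most_h
```

A head block that greedily claimed a large pile of fees makes gap large, which is exactly what invites the fork.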
So what might a strategic miner do to take advantage of this? They might first compare the maximum rewards they could get by continuing versus by undercutting (while still becoming the new most_h), and mine on top of whichever block yields greater rewards. Then, to protect themselves with certainty against future undercutters using the same rule, they could take only half of the remaining transaction fees. Because of the somewhat lax reasoning used to choose these parameters, we call this strategy LazyFork.

While the existence of PettyCompliant miners is itself relatively benign, the existence of LazyFork miners would be bad: they frequently decide to intentionally orphan blocks in order to achieve greater rewards. In addition to creating uncertainty about when blocks are "safely" in the eventual longest chain, this decreases the effective hash power of the network and makes Bitcoin more prone to double-spend attacks.

For cleanliness in formally defining LazyFork and other undercutting strategies, we introduce the notation Gap_i = rem(most_{i−1}) − rem(most_i): the maximum transaction fees that a miner could include while mining on top of most_{i−1} and still become the new most_i.

LazyFork: forks the block chain if the head block is more valuable than the unclaimed transaction fees it leaves behind. Takes only half of the available transaction fees, to discourage other lazy forkers from forking its block.
Which block: if owner(most^m_h) = m or rem(most^m_h) ≥ Gap_h, then mining(m) = most^m_h; else mining(m) = most^m_{h−1}.
How much: include rem(mining(m))/2.
Publish(b)?: yes.

Phase Three: Aggressive Undercutting

Simulation result: increasingly aggressive undercutting behavior evolves when miners strategize.
Once miners consider undercutting, they may also try to aggressively optimize the tradeoff between maximizing the transaction fees included in the blocks they mine and minimizing the chance that their block will be undercut by other miners in the system (as opposed to using the less principled reasoning of LazyFork). We define these strategies so that, when presented with rem(mining(m)) = x, they claim f(x) in transaction fees, for some f(·) with f(x) ∈ [0, x] for all x, and we call them forkers.

While in principle forkers could consider going back several blocks to undercut, the strategies we study only consider mining on top of a block of height h or h − 1. It would certainly be an interesting direction for future work to see whether additional gains can be achieved by considering blocks of lower height, but we already uncover interesting behavior when forkers go back just a single block.

A function-forking miner looks at the potential blocks of height h that they could extend, and within this set considers extending only most^m_h, since it leaves the most remaining transaction fees. If the miner indeed chooses to mine on top of most^m_h, we call this continuing. They also look at the potential blocks of height h − 1, again considering only the block most^m_{h−1} from this set. If the miner indeed chooses to mine on top of most^m_{h−1}, we call this undercutting. When deciding whether to continue or undercut, a forker simply observes that they will claim f(rem(most^m_h)) by continuing, versus min{f(rem(most^m_{h−1})), Gap_h} if they undercut (the min is taken because they must actually undercut in order to incentivize future miners to select their block). So for a given f, we can define:

valCont(f) = f(rem(most^m_h))
valUnder(f) = min{f(rem(most^m_{h−1})), Gap_h}

If valCont(f) > valUnder(f), then more rewards can be achieved by continuing; otherwise, more rewards can be achieved by undercutting. Formally, any function f(·) induces the following strategy:

Function-Fork(f): always claims a certain function f(·) of the transaction fees it could take. Always mines in the location that maximizes the size of the block it would create, with the constraint that if it forks, it must undercut.
Which block: if owner(most^m_h) = m or valCont(f) > valUnder(f), then mining(m) = most^m_h; else mining(m) = most^m_{h−1}.
How much: if mining(m) = most^m_h, include valCont(f); else include valUnder(f).
Publish(b)?: yes.

Any reasonable choice of f(·) will be monotonically increasing, which means that f(rem(most^m_{h−1})) will always be at least f(rem(most^m_h)), so the decision whether to continue or undercut comes down to a comparison of f(rem(most^m_h)) versus Gap_h.

One natural family of f(·) to consider is the linear functions, f(x) = kx for some k ∈ [0, 1]. If we take a group of these strategies and let non-atomic strategic miners learn over many games which perform best, we get the plot in the figure below. What we see is the following: when the majority of miners are using Function-Fork(kx), the best response is to use Function-Fork(k′x) for k′ a little smaller than k (i.e., to undercut just a little more aggressively). So eventually the smallest coefficient in our simulation becomes dominant.

Figure: normalized weights of different linear-coefficient function-forking strategies over a series of games, for non-atomic miners. Strategies that are slightly more aggressive than the most common strategy perform best and have their normalized weights increase.

Figure: a simulation configured the same way but with a small number of atomic miners. The more aggressive undercutters are no longer effective, since they are beaten by gentler forkers who are lucky enough to mine two blocks in a row.
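The Function-Fork(f) decision rule above, with the valCont / valUnder comparison made explicit, can be sketched as follows (argument names are our own):

```python
def function_fork_choice(f, rem_most_h, rem_most_h_minus_1):
    """Decide between continuing on most_h and undercutting it, given a
    claim function f with f(x) in [0, x]."""
    gap = rem_most_h_minus_1 - rem_most_h        # Gap_h
    val_cont = f(rem_most_h)                     # claimable by continuing
    # when undercutting, the block must actually undercut most_h, so it can
    # claim at most gap even if f would allow more
    val_under = min(f(rem_most_h_minus_1), gap)
    if val_cont > val_under:
        return ("continue", val_cont)
    return ("undercut", val_under)
```

For a linear f(x) = kx, shrinking k makes the miner both claim less and undercut more readily, which is the dynamic the learning simulation exhibits.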
If we instead consider atomic miners, we observe the behavior in the second figure: less aggressive undercutters remain dominant. This is because even when other miners are aggressively undercutting, each atomic miner still has a decent chance of getting their block accepted "for free" by mining two blocks in a row. Note that simulation is vital to this understanding, given the large number of parameters to consider.

An Undercutting Equilibrium

Analytical result: an equilibrium exists in which all miners use the same undercutting strategy. It induces a growing backlog of transactions.

Linear function-forking is of course a natural class of strategies to consider, but our simulations in the previous section show that long-term behavior may be erratic if miners only consider these strategies. Our goal in this section is to understand what undercutting behavior is stable. Our approach is to find a function f(·) such that Function-Fork(f) is an equilibrium: as long as every other miner is using the strategy Function-Fork(f), it is in your interest to use it as well. In other words, we would like to find an f such that Function-Fork(f) is a best response when all other miners themselves use Function-Fork(f). We now provide intuition for why the f(·) we present yields an equilibrium.

So what does it mean for a strategy to be a best response to other miners' behavior? Recall that a strategy proposes which block to extend, how many transaction fees to claim, and which blocks to publish, as a function of the currently held information. A strategy is a best response if it maximizes the miner's expected reward (taking into account future events, and in particular the probability that the current block ends up in the eventual longest chain) over all potential strategies that miner could have used instead.
In particular, a best response must be at least as good as all other strategies that mine at the same location and publish the same blocks (but differ in which transactions to include).

To get some intuition for the conditions a potential equilibrium must satisfy, first consider the decision facing a miner who has already decided to continue on top of the longest chain and is just deciding how many transaction fees to include. If c denotes the amount of transaction fees included, define π(c, f, x) to be the probability that this block is included in the eventual longest chain, conditioned on including c BTC worth of transaction fees in the block, all other miners using strategy Function-Fork(f), and x = rem(most^m_h) (note that π is well defined). Then the miner's expected reward, should they be fortunate enough to find a block right now, would be c · π(c, f, x). A best response would therefore include argmax_{c ≤ x} {c · π(c, f, x)} transaction fees, whereas the strategy Function-Fork(f) recommends including f(x). So for Function-Fork(f) to be a best response to other miners using Function-Fork(f), it had better be the case that f(x) ∈ argmax_{c ≤ x} {c · π(c, f, x)} for all x. Note that this is a fairly strong condition on f: the fact that the other miners are using Function-Fork(f) affects π(c, f, x), and at the same time we want this miner's best response to satisfy f(x) ∈ argmax_{c ≤ x} {c · π(c, f, x)}.

We show that there is a continuous, piecewise-differentiable function f(·) satisfying this condition. We also show that, combined with the fact that f(·) is monotonically non-decreasing, this is sufficient for Function-Fork(f) to be an equilibrium under some assumptions (which we discuss after the theorem). In the theorem statement below, W is the upper branch of the Lambert W function: it is defined on [−1/e, ∞), takes values in [−1, ∞), and satisfies W(x e^x) = x for all x ≥ −1.
The "furthermore..." portion of the theorem is proved by showing a connection between the number of backlogged transactions and an unbiased one-dimensional random walk.

Theorem. For any constant y ≤ 1/2 such that 2y − ln(y) ≥ 2, define:

f(x) = x,  for all x ≤ y
f(x) = −W(−y e^{x − 2y}),  for all y < x < 2y − ln(y) − 1
f(x) = 1,  for all x ≥ 2y − ln(y) − 1

Then it is an equilibrium for every miner to use the strategy Function-Fork(f), as long as:
• every miner is non-atomic;
• miners may only mine on top of chains of length h or h − 1.
Furthermore, in any such equilibrium, the expected number of backlogged transactions after n time steps is Θ(√n).

(Such y exist; the valid values form an interval of the form (0, ≈0.2].)

Figure: plot of the Lambert function fork starting with a small weight and becoming the strongest strategy in a learning simulation with non-atomic miners.

A proof of the theorem appears in Appendix C. To understand its impact, first consider the block-reward model. With non-atomic miners, DefaultCompliant is trivially an equilibrium, and this result is robust to general models of latency (proof in Appendix D). But as we move to atomic miners, strategies like selfish mining arise and equilibria get messy (if they exist at all). Now, in the transaction-fee model, even when miners are non-atomic, equilibrium behavior is complex and undesirable, as we have just shown. We should therefore expect that an analysis with atomic miners would conclude with even more chaos.

The figure shows miners learning to play this equilibrium, even with various other strategies available. Observe the interplay between theory and simulation: the theorem guides us toward a potentially strong strategy, but it is intractable to prove that the equilibrium will actually arise via learning even when (say) most miners are already there. Simulation fills the gap and shows that the equilibrium will indeed arise even when only a fraction of a
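The piecewise claim function from the theorem, as reconstructed here, can be evaluated numerically. In this sketch, lambert_w0 is a hand-rolled Halley iteration standing in for the Lambert W upper branch (in practice one would use a library routine); the cap point 2y − ln(y) − 1 is exactly where the W segment reaches 1.

```python
import math

def lambert_w0(z):
    """Upper branch W0(z) for z >= -1/e, via Halley's method."""
    w = 0.0 if z > -0.3 else -0.5        # rough starting point
    for _ in range(200):
        ew = math.exp(w)
        err = w * ew - z
        if abs(err) < 1e-14:
            break
        w -= err / (ew * (w + 1) - (w + 2) * err / (2 * w + 2))
    return w

def equilibrium_f(x, y):
    """f(x) = x up to y, then a Lambert-W segment, capped at 1 from
    x = 2y - ln(y) - 1 onward (the form stated in the theorem)."""
    if x <= y:
        return x
    cap = 2 * y - math.log(y) - 1
    if x >= cap:
        return 1.0
    return -lambert_w0(-y * math.exp(x - 2 * y))
```

Continuity at both breakpoints follows from W(−y e^{−y}) = −y and W(−1/e) = −1, which is how the constants in the middle segment are pinned down.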
% of the miners initially use the equilibrium strategy. Simulation alone could not search through the infinitely many possible strategies, and theory alone cannot prove that learning converges to the desired equilibrium.

Undercutting Non-Strategic Miners

Analytical and simulation result: even if 66% of miners remain default compliant, undercutting is profitable.

Our analysis and simulations in the previous sections assumed that all miners were strategic learners. While we clearly learn a lot from this analysis, it is perhaps more realistic to also consider a setting where some miners stubbornly (or honestly, depending on your perspective) continue running DefaultCompliant even if it is suboptimal. (In the learning simulation shown earlier, the weight initially assigned to the Lambert strategy is small and hard to see in the figure.) If a large fraction of the miners are non-strategic, function-forking immediately becomes less profitable, because only a small fraction of the network will actually mine on top of your block when you undercut. In particular, if 100% of the other miners are non-strategic, undercutting serves no purpose.

In this section we report results from our simulator when varying fractions of miners are non-strategic. In these simulations, we fix a fraction of the network to always mine DefaultCompliant and play enough games that the distribution of learned strategies stabilizes. The figure below shows a stacked area plot of our simulation results for equilibria at different fractions of miners refusing to abandon DefaultCompliant. The plot has many interesting features, but we focus on one: even if the majority of miners choose to stay DefaultCompliant (and only the rest strategize), forking strategies start to become viable.

A theoretical analysis indeed predicts the continuing presence of Function-Fork(x) as long as no more than 2/3 of the miners remain DefaultCompliant.
To see this, imagine that every miner in the system is currently DefaultCompliant or PettyCompliant, and we want to see whether it is profitable for a PettyCompliant miner to switch to Function-Fork(x). At any point in time, consider the current most_h. If the miner runs PettyCompliant, they will always try to continue, and will get rem(most_h) should they find a block (because no one else in the network is undercutting). If instead they run Function-Fork(x), they will continue whenever rem(most_h) > Gap_h and undercut otherwise. When they continue, they will always get rem(most_h). When they undercut, they will include Gap_h in transaction fees. If the next miner to find a block is PettyCompliant (or this miner), the undercut succeeds and the miner receives Gap_h in rewards; but if the next block is found by a DefaultCompliant miner, the undercut fails and they get nothing. So if y is the fraction of the network that remains DefaultCompliant, the expected reward obtained by Function-Fork(x) is proportional to:

E[rem(most_h) · I(rem(most_h) > Gap_h)] + (1 − y) · E[Gap_h · I(Gap_h > rem(most_h))]

Finally, because rem(most_h) and Gap_h are i.i.d. exponential random variables with mean 1, we have that E[Gap_h · I(Gap_h > rem(most_h))] = E[rem(most_h) · I(rem(most_h) > Gap_h)] = 3/4. Therefore, whenever y ≤ 2/3, the reward from Function-Fork(x) is at least one, and it is therefore a better choice than PettyCompliant (which gets expected reward exactly one). We emphasize that while the theory gives a crisp understanding of what should happen when exactly 2/3 of the miners are DefaultCompliant, it is intractable to rigorously analyze the equilibria at other fractions of DefaultCompliant miners; our simulation thus both confirms and extends the theoretical understanding (see the figure). Note also that learning is by no means guaranteed to result in a static equilibrium at all, although in these simulations that happens to be the result.
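The 2/3 threshold above can be checked by Monte Carlo. With rem(most_h) and Gap_h i.i.d. exponential with mean 1, the analytic value of the displayed expression is 3/4 + 3(1 − y)/4, which crosses PettyCompliant's payoff of 1 exactly at y = 2/3. A sketch (function and parameter names are our own):

```python
import random

def expected_fork_reward(y, trials=200_000, seed=1):
    """Estimate the Function-Fork(x) switcher's expected reward when a
    fraction y of the network stays defaultCompliant."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        rem = rng.expovariate(1.0)   # rem(most_h)
        gap = rng.expovariate(1.0)   # Gap_h
        if rem > gap:
            total += rem             # continue and claim everything
        else:
            total += (1 - y) * gap   # undercut; fails if a default miner wins next
    return total / trials
```

Evaluating at y = 0, 2/3, and 1 recovers the predicted values 1.5, 1, and 0.75 up to sampling noise.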
(E[X] denotes the expectation of the random variable X, and I(E) denotes the indicator random variable for event E, i.e., 1 when E occurs and 0 otherwise.)

Figure: stacked area chart showing the equilibrium distributions of the strategies covered thus far, given that a fraction of miners will always use the default strategy.

Selfish Mining with Transaction Fees

Selfish mining is a deviant strategy first identified by Eyal and Sirer [ ]. Essentially, a selfish miner chooses not to release blocks immediately upon finding them, instead withholding them in hopes of tricking the rest of the network into wasting mining power on blocks that will be orphaned. We find that the selfish mining strategy performs even better in the transaction-fee model than in the block-reward model. A priori, there is no reason to expect this. In this section we provide simulation results, along with some intuition and a theoretical analysis proving it. Essentially, what winds up happening is that while the selfish miner mines the same fraction of blocks in either reward model, the selfish miner's blocks tend to be larger. In the block-reward model this does not matter, because all blocks are worth the same; in the transaction-fee model it means the selfish miner earns greater rewards.

The Selfish Mining Strategy

Analytical and simulation result: selfish mining performs slightly better in the transaction-fee model.

The goal of a miner employing the selfish mining strategy is essentially to trick the other miners in the Bitcoin network into mining on top of a block that will be orphaned. By making other miners waste their power, the selfish miner is capable of exaggerating their own portion of the overall network hash rate. Selfish miners do this by maintaining a private chain that only they know about.
When the selfish miner initially finds a block, they do not announce it to the rest of the network. They continue to mine on their private block, hoping to find a second block before the rest of the network finds one. If the miner succeeds, they are now in a very strong position: they know of a block of height h + 2, whereas the rest of the network only knows a block of height h. If the rest of the network finds the next block, at height h + 1, the selfish miner can reveal their private chain and the public block is immediately orphaned. Of course, maybe the selfish miner will find the third block as well; in that case they are in an even better position and can waste even more of the network's power. But the point is that with a lead of two or more, the selfish miner can guarantee that the rest of the network is wasting power.

Of course, the selfish miner might also fail to find a second block before the rest of the network finds its first. In this case, they immediately release their block and hope that others hear about it first. Obviously this is not ideal: had they released their block immediately, they could have guaranteed that it was heard about first. So there is a tradeoff: withholding the block has a chance of giving the selfish miner a private chain of length two or more, in which case the selfish miner benefits, but it could also cause their block to be orphaned, resulting in lower profits.

Selfish-Mine: the selfish mining strategy from [ ]. This miner hides their blocks, which risks losing their first block, in order to try to get the rest of the network mining in a useless location, amplifying their own apparent hash power.
Which block: mining(m) = oldest^m_{private^m}.
How much: include rem(mining(m)).
Publish(b)?: if height(b) = h, yes; elseif racing^m_h and private^m = h + 1, yes; else no.

Assuming the selfish miner has less than half of the overall hash power of the network, they will eventually need to publish their private chain.
In order to maintain our focus on the difference between transaction fees and fixed block rewards, we consider just "vanilla" selfish mining, although it is an interesting direction for future work to consider selfish miners who also undercut, or various other generalizations (e.g. [ , , ]). Similarly to [ ], we examine the potential rewards a selfish miner would receive assuming that the rest of the network is default mining. In our analysis, we also use α to denote the fraction of the total mining power possessed by the selfish miner, and γ to denote the probability that, in the event of a race (the selfish miner is triggered to release a private chain of length one) that ends with the honest portion of the network finding the next block, the selfish miner's block is not orphaned. We introduce the notation private^m for the height of the longest chain that m is aware of (at least as large as h, and possibly larger if m is keeping any blocks private). We also introduce the notation racing^m_i for the boolean variable that is true iff there exist two blocks b1, b2 with height(b1) = height(b2) = i and owner(b1) = m ≠ owner(b2). In other words, racing^m_i denotes whether or not there are two competing blocks of height i, one of which was produced by m.

Analysis. We proceed now with an analysis of the rewards obtained in the transaction-fee model by a selfish miner; parts will look similar to the analysis done in [ ]. For every infinitesimally small transaction fee that arrives, we wish to compute the probability that it winds up in a block mined by the selfish miner. Note that if the selfish miner simply used default mining instead, this probability would be exactly α. The determining factor in this probability is the size of the selfish miner's private chain. To this end, let us define the following states (the same states used in [ ]); we will compute this probability separately for each state.

• State 0: everyone agrees on the longest chain: racing^m_h = false.
• state i > 0: the selfish miner m has a private chain of length i (private^m = h + i). • state 0′: there are competing blocks of height h, one of which was produced by the selfish miner, and the selfish miner has no private blocks (racing^m_h = true and private^m = h). let f_s denote the probability that a transaction winds up in a block mined by the selfish miner in the eventual longest chain, conditioned on the system being in state s when the transaction is announced. we compute these probabilities below. if we then define p_s to be the probability that the system is in state s, we can then observe that the expected fraction of transaction fees claimed by the selfish miner is exactly ∑_s f_s · p_s. eyal and sirer [ ] have already computed p_s for all s. the values for p_s are:

p_0 = (1 − 2α) / (2α^3 − 4α^2 + 1)
p_0′ = (1 − α)(α − 2α^2) / (2α^3 − 4α^2 + 1)
p_i = (α / (1 − α))^(i−1) · (α − 2α^2) / (2α^3 − 4α^2 + 1), for i > 0

to complete the analysis, we just need to compute f_s for each s. appendix e contains the derivation of f_s for all s, which are stated below:

f_0 = α^2 + α(1 − α)^2 (α + γ(1 − α)).
f_0′ = α.
f_1 = α + (1 − α)α = α(2 − α).
f_i = 1 − (1 − α)^(i−1) (1 − f_1).

finally, when α ∈ (0, 0.5) and γ ∈ [0, 1], we show in appendix e that the selfish miner's rewards are given by

reward(α, γ) = (5α^2 − 11α^3 + 7α^4 − 2α^5 + γ(α − 5α^2 + 9α^3 − 7α^4 + 2α^5)) / (2α^3 − 4α^2 + 1)

figure : we see simulation matching the theory for selfish mining in a transaction-based model for γ = 0, 0.5, and 1. we make the following observations: • simulation confirms the above analytical formula for reward(α, γ) (figure ). • this function is extremely close to the reward function with block rewards, (α(1 − α)^2 (4α + γ(1 − 2α)) − α^3) / (1 − α(1 + (2 − α)α)), from [ ]. we find, numerically, that the absolute difference never exceeds . in the region of interest. • for 0 ≤ γ < . (in particular, for γ = 0), for all α ∈ (0, 0.5), the reward is strictly greater in the transaction fee model than in the block reward model. we provide some intuition for this last point. first, it is clear that the fraction of blocks mined by the selfish vs.
default miners is independent of the reward model. so the gap must come from the size of blocks found by the respective miners. let's assume, just for the sake of example, that the selfish miner has built up a large private lead and has an α = 1/10 fraction of the mining power. almost certainly, the next un-orphaned block will be found by the selfish miner. how long will it take for this block to be found? the answer is approximately 10 time steps. this is because while the entire network finds a block roughly every 1 time step, the selfish miner is the only miner extending his chain (and he mines at 1/10 the speed of the full network), so it will take ten times as long. what this means is that blocks found by the selfish miner while the selfish miner has a huge lead are disproportionately large compared to blocks found when the selfish miner has no lead (or a tiny lead). so even though the selfish miner wins the same fraction of blocks, some of these blocks are much larger than those won by the default miners. a brief discussion. the main point of this section is to highlight one example of surprising incentive issues that differ between the transaction fee model and the block reward model, not to argue that selfish mining becomes significantly better (the improvement is minor). still, we wish to point out two possibly salient differences between selfish mining in the two models. first, in the block reward model, selfish mining is actually not ever immediately profitable: it only becomes profitable once the difficulty readjusts to account for the fact that the effective mining power in the network is lower. this is because before the difficulty adjusts, the selfish miner is literally just throwing blocks away, but tricking the rest of the network into throwing blocks away at a higher rate.
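the closed forms in the analysis above are easy to sanity-check numerically. this is a sketch using our reading of the formulas in this section (the helper names are ours): the state probabilities should sum to 1, and at γ = 1 the reward should approach 1 as α → 1/2.

```python
def state_probs(alpha, imax=200):
    """state probabilities p_0, p_0', and p_i (i = 1..imax-1) of the selfish
    mining markov chain, for a selfish miner with hash power alpha."""
    d = 2 * alpha**3 - 4 * alpha**2 + 1
    p0 = (1 - 2 * alpha) / d
    p0p = (1 - alpha) * (alpha - 2 * alpha**2) / d
    r = alpha / (1 - alpha)  # ratio between successive private-lead states
    pis = [r**(i - 1) * (alpha - 2 * alpha**2) / d for i in range(1, imax)]
    return p0, p0p, pis

def reward(alpha, gamma):
    """expected fraction of transaction fees claimed by a selfish miner."""
    num = (5 * alpha**2 - 11 * alpha**3 + 7 * alpha**4 - 2 * alpha**5
           + gamma * (alpha - 5 * alpha**2 + 9 * alpha**3 - 7 * alpha**4 + 2 * alpha**5))
    return num / (2 * alpha**3 - 4 * alpha**2 + 1)

p0, p0p, pis = state_probs(0.3)
# the state probabilities of the markov chain sum to 1
assert abs(p0 + p0p + sum(pis) - 1) < 1e-9
```

note that at γ = 1 the polynomial simplifies to reward(α, 1) = (α − 2α^3) / (2α^3 − 4α^2 + 1), which equals 1 at α = 1/2, matching the intuition that a perfectly connected selfish miner with half the hash power claims everything.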
in the transaction fee model, selfish mining is immediately profitable: every transaction that arrives goes somewhere, so neither the selfish miner nor the default miners are throwing rewards away. note also that our analysis in no way requires the difficulty to adjust before it becomes accurate; our analysis would hold no matter how the difficulty of hash puzzles adjusted or didn't adjust over time. moreover, if some of the rest of the network has switched to the pettycompliant strategy, then the selfish miner's block is actually more likely to win when a race is triggered (because it was mined earlier and therefore contains fewer transactions). so the existence of pettycompliant miners in the transaction fee regime indirectly improves selfish-mine's performance by increasing γ. . an improved selfish-mine analytical and simulation result: in the transaction fee model, selfish miners can make the decision whether to hide their first block based on the value of the block. this improved selfish mining strictly and always outperforms both default mining and traditional selfish mining. in this section we develop an improved selfish mining strategy. essentially, we observe that in the transaction fee model, a selfish miner has additional information when deciding whether to hide or publish their private chain (namely, how many transactions are included). we show that, for all α and all γ < 1, our strategy strictly outperforms both default mining and "vanilla" selfish mining in the transaction fee model. our strategy will decide to hide only "small" blocks, with at most β (some cutoff parameter chosen by the strategy as a function of α, γ) transaction fees included, but will immediately publish any "large" blocks, with more than β transaction fees, in order to avoid the risk of losing them.
selfish-mine(β): an improvement to the selfish mining strategy, where the miner will choose to mine as a selfish miner or a default compliant miner based on the value of the block they risk losing. which block: oldest^m_(private^m). how much: include rem(mining(m)). publish(b)?: if height(b) = h or tx(b) ≥ β: yes. elseif racing^m_h and private^m = h + 1: yes. else: no. intuitively, imagine you are mining and find yourself solving a new block immediately after a previous block was announced and before any new transactions have been announced. this block is literally worthless, so instead of publishing, why not use it to try and selfish mine? there is no cost, but a positive probability that you build a lead of two, no matter your hash power. similarly, imagine instead that just by chance an hour goes by since the last block was found and you just solved a new block including all transactions that arrived during that period. this block is worth roughly six "normal" blocks, so why risk losing it? unless your hash power is very close to 50%, the expected gains from selfish mining are dwarfed by the possibility of losing this unusually wealthy block. so the trick is just choosing the proper cutoff β as a function of your hash power α and network connectivity γ. note that selfish-mine(0) = defaultcompliant, and that selfish-mine(∞) = selfish-mine. so clearly, taking the optimal choice of β will result in a strategy that equals or outperforms both. using an analysis similar to that of section . , we are able to compute the expected reward achieved by a miner with an α fraction of the mining power, a probability γ of winning a race, and using strategy selfish-mine(β). a derivation is included in appendix e. figure : we show the ideal cutoff factor, β, for a selfish miner with mining power α, and γ = 0.
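the β-cutoff rule can be sketched the same way as the vanilla publish rule. this is a sketch with names of our choosing (`tx_fees` stands in for tx(b)); the two endpoint identities, selfish-mine(0) = defaultcompliant and selfish-mine(∞) = selfish-mine, fall out directly:

```python
import math

def selfish_mine_beta_publish(block_height, h, racing, private, tx_fees, beta):
    """publish rule for selfish-mine(beta): behave like selfish-mine,
    except that a block carrying at least beta in fees is published immediately."""
    if block_height == h or tx_fees >= beta:
        return True
    if racing and private == h + 1:
        return True
    return False

# beta = 0 publishes every block (defaultcompliant behavior) ...
assert selfish_mine_beta_publish(6, 5, False, 6, 0.0, 0.0) is True
# ... while beta = infinity never publishes early (vanilla selfish-mine)
assert selfish_mine_beta_publish(6, 5, False, 6, 100.0, math.inf) is False
```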
reward(α,γ,β) =( + β( −α) ( −γ) eβ − + α + ( −α) γ + α − α − α ) × ( α( − α)( −e−β) − e−βα− ( −e−β)α ) figure contains a plot showing the optimal choice of β as a function of α when γ = 0. a few noteworthy points from this plot: as α → 0, so does the optimal β. as α → 1/2, the optimal β approaches ∞. figure plots our theoretical predictions against simulation results, confirming that the analysis is correct. we conclude this section with figure plotting the (theoretical) performance of default mining, selfish mining, and selfish mining with the optimal cutoff for a range of α and γ = 0. note that in some ranges, the gains are quite significant. specifically, when α = / , both selfish mining and default mining achieve expected reward of ≈ / , but selfish mining with the optimal cutoff achieves an expected reward of ≈ . , a . % increase! figure : theory matching simulation for a variety of cutoff thresholds for selfish mining, all with γ = 0. the smaller cutoffs do better for a miner with a smaller hash power (α) and the larger cutoffs do better with a larger hash power. intuitively, this makes sense, as a more powerful miner should be willing to risk a larger block to try to selfishly mine. figure : a selfish miner using the optimal cutoff outperforms both the original selfish mining protocol and default mining for all values of α, with γ = 0. the simulation points confirm that the theory is accurate. . impact on bitcoin and lessons for cryptocurrency design we have argued that deviant mining strategies in a transaction-fee regime could hurt the stability of bitcoin mining and harm the ecosystem. in a block chain with constant forks caused by undercutting, an attacker's effective hash power is magnified because he will always mine to extend his own blocks whereas other miners are not unified. this would make a "51%" attack possible with much less than 50% of the hash power. many other unanticipated side-effects may arise.
in the block size debate, it is frequently argued or assumed that space in the block chain will be a scarce resource and a market will emerge, with users being able to speed up the confirmation of a transaction by paying a sufficiently large transaction fee. but if miners intentionally "leave money on the table" when solving blocks, as is the case in undercutting attacks, it breaks this assumption. that is because undercutting miners are not looking to maximize the transaction fee that they can claim, and don't have a strong reason to prioritize a transaction with a high fee. put another way, the block size imposes a constraint on the total size of transactions in a block, and the threat of being undercut imposes another constraint on the total fee. the two interact in complex ways. we believe that qualitatively our results will continue to hold in a world where the available block size is much smaller than the demand, but quantitatively the impact of undercutting will be mitigated (see end of section . ). still, it is an important direction for future research to understand this connection more rigorously. despite the variety of our results, we believe we have only scratched the surface of what can go wrong in a transaction-fee regime. to wit: we have not presented an analysis of miners whose strategy space includes both undercutting and selfish mining, primarily due to the complexity of the resulting models. there has been scant attention paid to the transition to a transaction-fee regime. the nakamoto paper addresses it briefly: "the incentive can also be funded with transaction fees... once a predetermined number of coins have entered circulation, the incentive can transition entirely to transaction fees and be completely inflation free" [ ]. similar comments on the bitcoin wiki and other places suggest that the community views the transition as unremarkable. some altcoins (monero, dogecoin) have even opted to hasten the block reward halving time.
our results suggest a different view. we see the block reward as integral to the stability of the mining game. at a minimum, analyzing equilibria in the transaction-fee regime appears dramatically harder than in the block-reward regime, which is a cause for concern by itself. the monetary inflation resulting from making the block reward permanent, as ethereum does, may be a small price to pay to ensure the stability of a cryptocurrency. (they do have a weak reason: miners benefit from creating the smallest possible block for a given value of the total transaction fee they seek to claim, since smaller blocks propagate faster through the network and are less likely to be orphaned.) . acknowledgments we are extremely grateful to jiechen chen, kira goldner, anna karlin, and rainer böhme for very detailed feedback on an earlier draft of this paper. . references [ ] d. p. foster and r. v. vohra. calibrated learning and correlated equilibrium. games and economic behavior, ( ): – , . [ ] s. hart and a. mas-colell. a simple adaptive procedure leading to correlated equilibrium. econometrica, ( ): – , . [ ] e. androulaki, g. o. karame, m. roeschlin, t. scherer, and s. capkun. evaluating user privacy in bitcoin. in proceedings of financial cryptography, . [ ] s. arora, e. hazan, and s. kale. the multiplicative weights update method: a meta-algorithm and applications. theory of computing, ( ): – , . [ ] p. auer, n. cesa-bianchi, y. freund, and r. e. schapire. the nonstochastic multiarmed bandit problem. siam journal on computing, ( ): – , . [ ] a. blum and y. mansour. from external to internal regret. journal of machine learning research, : – , . [ ] n. t. courtois and l. bahack. on subversive miner strategies and block withholding attack in bitcoin digital currency. corr, abs/ . , . [ ] i. eyal. the miner's dilemma. in security and privacy (sp), ieee symposium on, pages – . ieee, . [ ] i. eyal and e. g. sirer. majority is not enough: bitcoin mining is vulnerable. in financial cryptography and data security, pages – .
springer, . [ ] k. hill. bitcoin is not broken. forbes, . http://www.forbes.com/sites/kashmirhill/ / / /bitcoin-is-not-broken/# d a . [ ] n. houy. the economics of bitcoin transaction fees. working paper gate - . halshs- ., . [ ] b. johnson, a. laszka, j. grossklags, m. vasek, and t. moore. game-theoretic analysis of ddos attacks against bitcoin mining pools. in proceedings of the first workshop on bitcoin research, . [ ] a. kiayias, e. koutsoupias, m. kyropoulou, and y. tselekounis. blockchain mining games. in acm conference on economics and computation (ec), . [ ] j. a. kroll, i. c. davey, and e. w. felten. the economics of bitcoin mining, or bitcoin in the presence of adversaries. in proceedings of the twelfth annual workshop on the economics of information security (weis), . [ ] n. littlestone and m. k. warmuth. the weighted majority algorithm. inf. comput., ( ): – , . [ ] l. luu, j. teutsch, r. kulkarni, and p. saxena. demystifying incentives in the consensus computer. in proceedings of the acm conference on computer and communications security (ccs), . [ ] a. miller and r. jansen. shadow-bitcoin: scalable simulation via direct execution of multithreaded applications. in proceedings of the eighth workshop on cybersecurity experimentations and test (cset), . [ ] m. möser and r. böhme. trends, tips, tolls: a longitudinal study of bitcoin transaction fees. in workshop on bitcoin research, pages – , . [ ] s. nakamoto. bitcoin: a peer-to-peer electronic cash system, . [ ] k. nayak, s. kumar, a. miller, and e. shi. stubborn mining: generalizing selfish mining and combining with an eclipse attack. in ieee european symposium on security and privacy (euros&p), . [ ] r. peter. a transaction fee market exists without a block size limit. . [ ] m. rosenfeld. analysis of bitcoin pooled mining reward systems. corr, abs/ . , . [ ] a. sapirshtein, y. sompolinsky, and a. zohar. optimal selfish mining strategies in bitcoin. in financial cryptography and data security, . [ ] m. 
vasek, m. thornton, and t. moore. empirical analysis of denial-of-service attacks in the bitcoin ecosystem. in proceedings of the first workshop on bitcoin research, . appendix a. mining gap this appendix contains a theoretical analysis of the mining gaps referenced in section . . let's consider the following simplified model: there is one style of "rig" available to miners, which costs p btc per time unit in electricity to run. let's first analyze what effect this has in the fixed reward model, where each block found is worth one btc, and the difficulty is adjusted so that the time between successive blocks is one unit in expectation. then if there are k rigs in the network, the expected reward from running a rig for one time unit is exactly 1/k, whereas the cost in electricity is p. so the network is sustainable as long as 1/k ≥ p, or k ≤ 1/p. in other words, the cost of electricity imposes a hard cap on the total effective mining power of 1/p rigs' worth. of course, this can always be adjusted if necessary by changing the fixed reward per block. also, it is important to point out that as long as k ≤ 1/p, the effective hash power in the network will be k rigs' worth. now let's consider what happens in the transaction fee model, where transaction fees arrive continuously at a rate of 1 per time unit. miners will always turn off their rigs (coin-hop) immediately after a block is found, because the instantaneous expected reward of running a rig is 0, but the cost is non-zero. if the current effective hash power in the network is c rigs' worth, then a miner needs to wait until x = cp transaction fees have arrived in order for mining to be profitable. now, assuming that miners are cleverly turning their rigs on and off at the right times, how many rigs must be in the network in order to attain an effective hash power of c? the rigs are all off for cp units of time, and then all k of them are turned on, and the expected time to find a block is 1 unit of time.
this means that the expected time to find a block with all k rigs running must be 1 − cp (due to difficulty adjustment), whereas the expected time to find a block with c rigs running is 1 (because the effective hash power is c). finally, we observe that for a fixed difficulty, if x denotes the number of rigs running, and y_x denotes the expected time for x rigs to find a block, then x_1 · y_(x_1) = x_2 · y_(x_2) for all possible numbers of rigs x_1, x_2. together, this yields the following equation: k · (1 − cp) = c · 1 ⇒ k = c / (1 − cp). what do we learn from this? first, we see that no c ≥ 1/p can possibly be supported, just like in the fixed reward model. on the other hand, we see that it takes an additional factor of 1/(1 − cp) rigs in order to get the effective hash power of c ≤ 1/p rigs. as c → 1/p, the maximum possible effective hash power, this ratio approaches ∞! more quantitatively, if we plug in c = x/p for x < 1, we see that the blow-up is 1/(1 − x). this means the following: in the transaction fee model, to obtain an x fraction of the maximum possible effective hash power, a multiplicative blow-up of 1/(1 − x) rigs is necessary. recall that in the fixed reward model, no blow-up is necessary. we can also reason in the other direction: for a fixed number k of rigs in the network, what is the effective hash rate in the fixed reward model versus the transaction fee model with mining gaps? in the fixed reward model, this is easy: it's just min{k, 1/p}. in the transaction fee model, for a fixed k, we need to solve for the c such that k = c/(1 − cp). this is: k − kcp = c ⇒ c = k/(1 + pk). so for fixed k, the effective mining power of k rigs degrades by a factor of 1/(1 + pk), which is always < 1. note that at k = 1/p, every rig is 100% effective in the fixed reward model, whereas the effective mining power is just k/2 in the transaction fee model.
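the two conversions derived above (rigs needed for a target effective power, and effective power of a fixed number of rigs) are easy to check numerically, including the fact that they invert each other. a sketch with variable names of our choosing:

```python
def rigs_needed(c, p):
    """number of rigs k needed for effective hash power c at electricity price p,
    in the transaction fee model with mining gaps: k = c / (1 - c*p)."""
    assert c * p < 1, "no c >= 1/p can be supported"
    return c / (1 - c * p)

def effective_power(k, p):
    """effective hash power of k rigs in the transaction fee model: c = k / (1 + p*k)."""
    return k / (1 + p * k)

p = 0.01
# at k = 1/p rigs, effective power is only k/2 (vs. 100% in the fixed reward model)
assert abs(effective_power(1 / p, p) - (1 / p) / 2) < 1e-9
# the two maps are inverses of each other
assert abs(rigs_needed(effective_power(40.0, p), p) - 40.0) < 1e-9
```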
we can again make a quantitative statement: in the transaction fee model, when the raw hash power in the network is an x fraction of the maximum possible, the effective hash power degrades by a factor of 1/(1 + x). recall that in the fixed reward model, there is no degradation in effective hash power when x ≤ 1. b. learning miners in simulator as referenced in section . , we provide two options for learning in our simulator. let's introduce these with a clear set-up for learning. let there be a set of strategies a learner can use, indexed by k. at each round i ∈ [t], the learner receives/would have received some reward r^i_k ∈ [0, 1], which may be arbitrary. the goal is to select a sequence of strategies s_i guaranteeing: ∑_(i=1)^t r^i_(s_i) ≥ max_k {∑_(i=1)^t r^i_k} − c. in other words, we would like to select a sequence of strategies that does nearly as well as the best strategy, assuming we knew it from the beginning. it is well-known [ ] that setting w^i_k = w^(i−1)_k (1 − ε)^(1 − r^i_k) and selecting s_i proportionally to the weights w^i_k results in a guarantee with c = εt + ln(# strategies)/ε. similarly, [ ] shows that even if we don't learn r^i_k for strategies k that we didn't choose in round i, there is an algorithm (namely, exp3; see [ ] for a description) that guarantees c = εt + (# strategies) · ln(# strategies)/ε. so option one in our simulator is just to run exp3 in earnest: whenever a miner uses some strategy k during game i, they learn their payoff and update their weights accordingly. figure : illustration of a mining gap. the blue line shows the current p.d.f. of the time to the next block. if the block reward by itself is too small to incentivize mining, rational miners will wait until enough transactions have accumulated before starting to mine. this will lead to a p.d.f. of a different shape (red line). note that in either scenario the mean time to the next block is 10 minutes (green line).
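the full-information multiplicative weights rule can be sketched as follows. this is a generic sketch, not the simulator's actual code; we use the (1 − ε)^(1 − r) update, which treats 1 − r as a loss for rewards r in [0, 1]:

```python
def mwu_weights(reward_history, eps=0.1):
    """run multiplicative weights over a full-information reward history.
    reward_history[i][k] is the round-i reward of strategy k, in [0, 1]."""
    n_strategies = len(reward_history[0])
    weights = [1.0] * n_strategies
    for round_rewards in reward_history:
        # down-weight each strategy by its loss (1 - reward); sampling a
        # strategy proportionally to `weights` gives regret eps*t + ln(n)/eps
        weights = [w * (1 - eps) ** (1 - r) for w, r in zip(weights, round_rewards)]
    return weights

# strategy 0 consistently pays 0.9, strategy 1 pays 0.1:
# the learner's weight concentrates on strategy 0
history = [[0.9, 0.1]] * 50
w = mwu_weights(history)
assert w[0] > w[1]
```

exp3 differs in that each round it only observes the reward of the strategy actually played, and corrects for that with importance weighting, which is what costs the extra factor of (# strategies) in the regret bound.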
still, mwu converges faster, so it would be nice if we could learn how much payoff the miner would have received if they had used strategy k during game i for all k, but this is computationally very expensive, as it essentially requires us to rerun the entire game for all miners and strategies k (thereby becoming more expensive than just running the additional games to let exp3 converge). instead, we make the following observation: even if this miner is not using strategy k during game i, maybe some other miner is. could we use that miner's payoff instead of recomputing exactly what payoff this miner would have received? the answer is of course we can, we just won't get a theoretical guarantee like if we used mwu in earnest. the payoffs from different miners' perspectives are of course different, but not wildly so. specifically, the difference is that miner 1 is facing opponents 2, 3, . . ., whereas miner 2 faces opponents 1, 3, . . .. if miner 1 and miner 2 use different strategies in round i, then strategy k would yield slightly different rewards when used by each of them. with many small miners, this difference should be small, so we include this learning option, as it seems to converge faster than exp3, even though there is no theoretical guarantee. specifically, what we mean is the following: instead of learning the payoff that the miner would have received had they used strategy k during round i, they simply take the average payoff of all miners that used strategy k during round i instead. it is certainly possible that improvements to the learning aspect of the simulation are possible (and we encourage future work on this aspect once the simulator is open-source), but we note that the current implementations sufficed for the settings we studied. c. proof of theorem . below is a complete proof of theorem . . some quick notation: for an increasing function f(·), we'll denote by f^(−1)(x) = min{y | f(y) ≥ x}. if no such y exists, then we'll denote f^(−1)(x) = +∞.
first, we make an extremely useful observation about when miners will receive payment for their blocks. essentially, because miners only consider mining on most_h or most_(h−1), once a block is a predecessor of both such blocks, it is guaranteed to be in the eventual longest chain. observation . as long as miners only consider mining on top of blocks most_h or most_(h−1), a miner receives eventual payment for mining a block if and only if the next block found chooses to continue her chain instead of undercutting. proof. because miners only consider chains most_(h−1) or most_h, immediately after producing a new block b, b is in the longest chain. either b goes on top of most_(h−1), in which case it is in a chain of length h, which is the longest. or it goes on top of most_h, which creates a new longest chain of length h + 1. let h_new denote the new length of the longest chain (h if the miner undercut, and h + 1 if she continued). either the newly minted b is equal to most_(h_new), or it isn't. if it isn't, then neither the next miner, nor any other miner in the future, will ever mine on top of it, because there is a "better" chain of length h_new to mine on top of instead. if it is, then the next miner will either undercut or continue. if the next miner continues, then b is now equal to most_(h_new) and the predecessor of most_(h_new + 1). this means that all future miners will continue a chain containing b, and therefore it will certainly be in the eventual longest chain. if instead the next miner undercuts, then there will be a new chain of length h_new that leaves more available btc, meaning that most_(h_new) does not contain b as a predecessor. most_(h_new − 1) clearly does not contain b either, as b was mined on top of this chain. figure : one example of a function that functionfork(f) miners might use that leads to an equilibrium. recall, the function is f(x) = x on the range [0, y], then −w(−y·e^(x − 2y)) on [y, 2y − ln(y) − 1], and 1 on [2y − ln(y) − 1, ∞).
so b is contained in neither most_(h_new) nor most_(h_new − 1), and therefore no future miners will ever consider a chain containing b. in conclusion, whether or not a miner receives payment for block b depends entirely on whether or not the subsequent miner decides to mine on top of b. we now want to figure out a best response for an individual non-atomic miner, conditioned on all other miners using functionfork(f). so we need to figure out the probability that a miner will get undercut when authorizing b btc in transactions, assuming that all other miners are using functionfork(f). note that as more and more new btc of transactions arrive, other miners become less inclined to undercut. what we need to figure out is exactly how many new btc of transactions need to arrive before the next miner switches from preferring to undercut to preferring to continue the longest chain. lemma c. . if a miner authorizes b btc of transactions on most_i (of course, i will be in {h − 1, h}), then other functionfork(f) miners will try to undercut her until max{0, f^(−1)(b) + b − rem(most_i)} new btc of transactions arrive (rem(most_i) taken at the instant that the miner authorizes her block). therefore, the expected btc obtained by authorizing b btc of transactions is b · e^(−max{0, f^(−1)(b) + b − rem(most_i)}). proof. first, observe that because the miner chooses to build upon most_(h−1) or most_h, the chain containing their block is the new most_h, and that same chain minus their block is the new most_(h−1). so the gap between the number of available btc in most_h versus most_(h−1) (gap_h = rem(most_(h−1)) − rem(most_h)) for the next miner is exactly b. now, immediately when the miner publishes her block, there are rem(most_i) btc of transactions available on most_(h−1), and rem(most_i) − b btc of transactions available on most_h. so at this point, other miners would choose to undercut iff f(rem(most_i) − b) < b.
as more new btc of transactions arrive (call it x), the other miners would choose to undercut iff f(rem(most_i) − b + x) < b. as f(·) is increasing, we can look for the minimum x where this ceases to hold, which is exactly when rem(most_i) − b + x = f^(−1)(b), or x = f^(−1)(b) + b − rem(most_i). we now prove three corollaries of lemma c. regarding what choices of b might possibly be optimal. corollary c. . if every other miner is playing functionfork(f), then the optimal choice b* of btc to authorize when building upon chain most_i satisfies: • b* ∈ argmax_(b ∈ [0, gap_h]) {b · e^(−max{0, f^(−1)(b) + b − rem(most_(h−1))})}, if i = h − 1. • b* ∈ argmax_(b ∈ [0, rem(most_h)]) {b · e^(−max{0, f^(−1)(b) + b − rem(most_h)})}, if i = h. proof. this is an immediate corollary of lemma c. , combined with the fact that a miner who chooses to undercut can authorize at most gap_h btc, while a miner who chooses to continue can authorize at most rem(most_h). corollary c. . if b_1 ≥ b_2, and b_1 e^(−b_1 − f^(−1)(b_1)) ≥ b_2 e^(−b_2 − f^(−1)(b_2)), then for all x, the expected reward from authorizing b_1 btc in transactions is at least as large as the expected reward from authorizing b_2 btc when rem(most_h) = x. proof. there are two cases to consider. first, maybe x > b_1 + f^(−1)(b_1) (the miner guarantees that she is not undercut by authorizing b_1 btc in transactions). in this case, because b_1 ≥ b_2 and f^(−1)(·) is increasing, we clearly have x > b_2 + f^(−1)(b_2) as well, meaning that the expected reward by authorizing b_1 btc is exactly b_1, and that the expected reward by authorizing b_2 btc is exactly b_2, by lemma c. . as b_1 ≥ b_2, the reward from b_1 is at least as large. in the second case, maybe x ≤ b_1 + f^(−1)(b_1) (the miner is undercut with positive probability by authorizing b_1 btc in transactions). in this case, the reward from authorizing b_1 btc is b_1 e^(−b_1 − f^(−1)(b_1) + x), by lemma c. . also by lemma c. , the reward from authorizing b_2 btc is b_2 e^(−max{0, b_2 + f^(−1)(b_2) − x}) ≤ b_2 e^(x − b_2 − f^(−1)(b_2)) = e^x · b_2 e^(−b_2 − f^(−1)(b_2)).
by hypothesis, this is upper bounded by e^x · b_1 e^(−b_1 − f^(−1)(b_1)), which is exactly the reward obtained by authorizing b_1 btc. so authorizing b_1 btc provides at least as much reward. in both cases, we see that authorizing b_1 btc is at least as good as b_2. corollary c. . if b_1 ≥ b_2, and b_1 e^(−b_1 − f^(−1)(b_1)) ≤ b_2 e^(−b_2 − f^(−1)(b_2)), then for all x ≤ b_2 + f^(−1)(b_2), the expected reward from authorizing b_2 btc in transactions is at least as large as the expected reward from authorizing b_1 btc when rem(most_h) = x. proof. by hypothesis, x ≤ b_2 + f^(−1)(b_2) (the miner is undercut with positive probability by authorizing b_2 btc in transactions). therefore, the expected reward from authorizing b_2 btc is b_2 e^(−b_2 − f^(−1)(b_2) + x). as b_1 > b_2 and f^(−1)(·) is increasing, we have x ≤ b_1 + f^(−1)(b_1) as well. this means that the expected reward from authorizing b_1 btc is b_1 e^(−b_1 − f^(−1)(b_1) + x). by hypothesis, this is less than the reward of authorizing b_2. we now quickly recall properties of w(·): • the domain of w(·) is [−1/e, ∞) and the range is [−1, ∞). • w(·) is increasing. • w(x·e^x) = x for all x ∈ [−1, ∞). we will need to make use of some technical facts about f(·) (our specific choice from the statement of theorem . ) that we first prove below. fact 1. f(x) ≤ x everywhere. proof. clearly, f(x) ≤ x on [0, y]. also, clearly, f(x) ≤ x on [2y − ln(y) − 1, ∞) iff f(2y − ln(y) − 1) ≤ 2y − ln(y) − 1. so we just need to check the range [y, 2y − ln(y) − 1]. the derivative of w is w′(x) = w(x) / (x(w(x) + 1)). so the derivative of f on this range is (by the chain rule): f′(x) = −(w(−y·e^(x − 2y)) / (−y·e^(x − 2y) (w(−y·e^(x − 2y)) + 1))) · (−y·e^(x − 2y)) = −w(−y·e^(x − 2y)) / (1 + w(−y·e^(x − 2y))) = f(x) / (1 − f(x)). as f(·) is increasing and positive on [y, 2y − ln(y) − 1] (because of the form for f′(x) we just derived above; not all positive, increasing f(·) have increasing derivatives), this means that f′(·) is also increasing and positive on [y, 2y − ln(y) − 1]. as the derivative of x is constant (1), this means that if f(x) > x anywhere on this interval, then f(2y − ln(y) − 1) > 2y − ln(y) − 1 or f(y) > y.
we can clearly see that f(y) = −w(−y·e^(−y)) = y, and f(2y − ln(y) − 1) = −w(−y·e^((2y − ln(y) − 1) − 2y)) = −w(−1/e) = 1. so we can't have f(y) > y, and we have f(2y − ln(y) − 1) > 2y − ln(y) − 1 if and only if 2y − ln(y) − 1 < 1, which is the same as 2y − ln(y) < 2. as this is exactly the range of y we disallow, we see that we also can't have f(2y − ln(y) − 1) > 2y − ln(y) − 1 for any y we allow. therefore, f(x) ≤ x everywhere. fact 2. b·e^(−b − f^(−1)(b)) = • b·e^(−2b), b ∈ [0, y]. • y·e^(−2y), b ∈ [y, 1]. • 0, b > 1. proof. we first observe that f^(−1)(b) = b for all b ∈ [0, y], which immediately proves the first bullet. we next observe that f^(−1)(b) = +∞ for all b > 1, which immediately proves the last bullet. for the middle bullet, observe that: −w(−y·e^((2y + ln(z/y) − z) − 2y)) = −w(−y·e^(ln(z/y) − z)) = −w(−z·e^(−z)) = z. note that the last equality is due to the fact that w(·) is the inverse of x·e^x. this proves that f^(−1)(b) = 2y + ln(b/y) − b when b ∈ [y, 1] and completes the middle bullet. corollary c. . if y ∈ (0, 1/2], then b·e^(−b − f^(−1)(b)) is strictly increasing on [0, y] and constant on [y, 1]. proof. b·e^(−b − f^(−1)(b)) is clearly constant on [y, 1], so we just need to confirm that it's strictly increasing on [0, y]. the derivative of b·e^(−2b) is (1 − 2b)·e^(−2b), which is strictly positive on [0, 1/2) (and therefore on [0, y] for all y ≤ 1/2), as desired. proof of theorem . : we want to invoke corollary c. combined with corollary c. . together, these immediately say that for any 1 ≥ b_1 > b_2 ≥ 0, it is at least as good to authorize b_1 btc as b_2. as authorizing b > 1 btc always results in expected reward of 0, this immediately implies by corollary c. that for any b, min{1, b} ∈ argmax_(b′ ∈ [0, b]) {b′·e^(−max{0, b′ + f^(−1)(b′) − rem(most_i)})}. now, we also want to invoke corollary c. to show that there may exist other maximizers as well if rem(most_i) ∈ [y, 2y − ln(y) − 1]. note that f(·) is strictly increasing in this range, meaning that f^(−1)(f(rem(most_i))) = rem(most_i).
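fact 2's middle bullet can be checked numerically without a lambert-w implementation: f(x) = b means −y·e^(x − 2y) = −b·e^(−b) (apply w(·) to both sides), so it suffices to verify that identity at x = 2y + ln(b/y) − b. a sketch with names of our choosing:

```python
import math

def finv(b, y):
    """inverse of the equilibrium function f on [y, 1]: f^(-1)(b) = 2y + ln(b/y) - b."""
    return 2 * y + math.log(b / y) - b

y = 0.15  # a y in the allowed range (2y - ln(y) >= 2)
for b in [y, 0.3, 0.6, 1.0]:
    x = finv(b, y)
    # f(x) = -w(-y e^(x-2y)) = b  <=>  y e^(x-2y) = b e^(-b)
    assert abs(y * math.exp(x - 2 * y) - b * math.exp(-b)) < 1e-12
    # hence b e^(-b - f^(-1)(b)) is the constant y e^(-2y) on [y, 1]
    assert abs(b * math.exp(-b - x) - y * math.exp(-2 * y)) < 1e-12
```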
Therefore, we see that rem(MOST_i) ≤ f(rem(MOST_i)) + f^{-1}(f(rem(MOST_i))) (the miner will be undercut with positive probability when authorizing f(rem(MOST_i)) BTC) on this entire range. Together with Corollary C.3, this means that the hypotheses of Corollary C.2 are satisfied taking b_1 = f(rem(MOST_i)) and any b_2 ≥ b_1. Combined with the reasoning above, this means that when rem(MOST_i) ∈ [y, 2y − ln(y) − 1] and B ≥ f(rem(MOST_i)), we also have f(rem(MOST_i)) ∈ argmax_{b ∈ [0, B]} {be^{-max{0, b + f^{-1}(b) − rem(MOST_i)}}}.

Therefore, when B = rem(MOST_h), we recover that f(rem(MOST_h)) is an optimal choice of BTC to authorize when continuing. When B = gap_h, we recover that min{1, gap_h, f(rem(MOST_{h−1}))} = min{gap_h, f(rem(MOST_{h−1}))} is an optimal choice of BTC to authorize when undercutting. So Function-Fork(f) correctly chooses how many BTC to authorize when continuing and when undercutting; we just need to check that it also chooses correctly when to undercut and when to continue.

If gap_h > f(rem(MOST_h)), then min{gap_h, f(rem(MOST_{h−1}))} ≥ f(rem(MOST_h)) as well, and we can invoke Corollary C.2 with b_2 = min{gap_h, f(rem(MOST_{h−1}))} and b_1 = f(rem(MOST_h)). By the argument above, because 1 ≥ b_2 ≥ b_1, the hypotheses of Corollary C.2 are satisfied, and the expected reward is at least as high when authorizing b_2 as b_1, so undercutting is at least as good as continuing. Similarly, if f(rem(MOST_h)) ≥ gap_h, then f(rem(MOST_h)) ≥ min{gap_h, f(rem(MOST_{h−1}))}. So we may again invoke Corollary C.2, this time with b_2 = f(rem(MOST_h)) and b_1 = min{gap_h, f(rem(MOST_{h−1}))}. So now we have shown that Function-Fork(f) correctly chooses how many BTC to authorize when continuing and when undercutting, and also chooses correctly whether to continue or undercut. So it is an equilibrium.

The last part we need to reason about is the connection to random walks. Observe that the backlog of transaction fees grows continuously at a rate of 1 per unit of time. Every time a block is found, it drops by at most 1.
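This domination is easy to see in a quick Monte Carlo sketch (ours, not the paper's artifact): fees accrue at rate 1, blocks arrive with rate-1 exponential inter-arrival times, and a walk that clears exactly 1 per block (floored at zero) lower-bounds the true backlog.

```python
import math
import random

# Monte Carlo sketch (ours, not from the paper): the fee backlog dominates a
# reflected random walk. Fees accrue at rate 1; each block (rate-1 exponential
# inter-arrival) clears at most 1 unit of fees; here we pessimistically clear
# exactly 1 (floored at 0), which only lowers the backlog.
def backlog_at(T, rng):
    t, backlog = 0.0, 0.0
    while True:
        dt = rng.expovariate(1.0)        # time until the next block
        if t + dt >= T:
            return backlog + (T - t)     # fees keep accruing until time T
        t += dt
        backlog = max(0.0, backlog + dt - 1.0)

rng = random.Random(0)
n = 900
T = n + math.sqrt(n)
avg = sum(backlog_at(T, rng) for _ in range(2000)) / 2000
# a zero-drift reflected walk sits at Theta(sqrt(T)) in expectation
assert 0.3 * math.sqrt(n) < avg < 5.0 * math.sqrt(n)
```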
So the backlog of transactions is certainly at least as bad as a random walk that drops by exactly 1 at each block (the true backlog only drops by less). The lemma below proves that with constant probability, the number of blocks found in a time interval of length n + √n is at most n. When this occurs, there is a backlog of at least √n transactions at time n + √n. Therefore, the expected backlog is at least Θ(√n) (in fact, it is exactly Θ(√n)). During this time, new transactions take Θ(√n) time steps before they are included in a block.

Before proving the lemma, we recall the Berry-Esseen theorem:

Theorem (Berry-Esseen). Let X_1, ..., X_n be i.i.d. random variables with mean 0, E[X_i^2] = σ^2, and E[|X_i|^3] = ρ. Then for all x:

Pr[Σ_i X_i / (σ√n) ≥ x] − Φ(x) = O(ρ/(σ^3 √n)),

where Φ(x) denotes the probability that a Gaussian random variable with mean 0 and standard deviation 1 exceeds x.

Lemma. Define X_i to be an exponential random variable with mean 1. Then:

Pr[Σ_{i=1}^n X_i > n + √n] = Θ(1).

In particular, this implies that the probability that fewer than n blocks are found in n + √n time steps is Θ(1).

Proof. Define Y_i = X_i − 1. Then the Y_i are i.i.d. random variables with mean 0, E[Y_i^2] = σ^2 < ∞, and E[|Y_i|^3] = ρ < ∞. Plugging into Berry-Esseen (stated above), we get:

Pr[Σ_{i=1}^n Y_i > √n] = Pr[Σ_{i=1}^n Y_i / (σ√n) > 1/σ] ≥ Φ(1/σ) − O(1/√n).

As σ is a constant independent of n, Φ(1/σ) is also independent of n, so Φ(1/σ) − O(1/√n) = Θ(1), as desired.

D. When default mining is an equilibrium for non-atomic miners

In the absence of latency, default mining is an equilibrium for non-atomic miners regardless of the reward model, and the reasoning is simple: if you do anything except extend the unique longest chain, your block will be orphaned and you will receive reward zero. If you wait to publish your block, you risk losing the option to publish it without being orphaned.
All other miners ignore the transactions included in your block when deciding where to extend, so you may as well include as many transactions as possible.

In the presence of latency, forks will naturally occur, so PettyCompliant outperforms DefaultCompliant in the transaction fees model. In the fixed reward model, DefaultCompliant remains an equilibrium under quite general models of latency (still assuming non-atomic miners). Consider, for instance, any model of latency with the following property: whenever miner m finds a block, and miner m' finds a block at a later time, we have B_m ⊆ B_{m'}, where B_m denotes the set of blocks that miner m had heard of when they found their block. In other words, by the time miner m' solves their block, they have become aware of at least every block that m was aware of when they solved their block earlier (but perhaps not m's block itself, nor any blocks that m was not herself aware of). It is easy to see that simple latency models (such as all announcements being grouped into chunks of λ seconds) have this property, as well as much more general latency models. It is also easy to see that in the transaction fees model, the simple latency model where announcements are grouped into chunks of λ seconds is rich enough that DefaultCompliant is strictly outperformed by PettyCompliant and is therefore not an equilibrium.

Proposition D.1. When miners are non-atomic, even in the presence of any latency of the form described above, it is an equilibrium for every miner to use DefaultCompliant.

Proof. The proof is actually very straightforward: assuming that all other miners are DefaultCompliant, mining anywhere except on top of a longest chain guarantees that your block will be orphaned and you will receive a reward of zero (because our latency assumptions guarantee that the next miner and all future miners will have heard about the blocks you chose to undercut before yours, and they are all DefaultCompliant).
So the only choices are how to tie-break among multiple longest chains. But this choice neither affects your rewards (they are fixed!), nor the likelihood that your block will be chosen by the next miner (as this depends only on how quickly they hear about your block and not on its contents). So tie-breaking in favor of the earliest chain is at least as good as any other tie-breaking rule. Finally, it is also easy to see that publishing as soon as possible is optimal, as this maximizes the likelihood that your block is chosen to be extended.

The point of Proposition D.1 is again just to contrast the difference between transaction fees and fixed rewards. In the non-atomic regime, even in quite general latency models, DefaultCompliant mining is an equilibrium in the fixed-reward model. The proof is simple and matches exactly our intuition for why DefaultCompliant should make sense. But in the transaction fees model, whenever there exists a possibility of forks, DefaultCompliant is strictly outperformed by PettyCompliant, and the space of equilibria is therefore much more complex. In particular, it would be interesting for future work to identify an equilibrium for non-atomic miners in the transaction fees model in any non-trivial latency model.

E. Selfish mining

E.1 Classic selfish mining with transaction fees

Here we provide details on how to analyze selfish mining in the transaction fee regime. Recall that Eyal and Sirer [ ] have already computed p_s for all states s, the probability that the blockchain is in state s. Below we compute f_s for all states s, the probability that a transaction winds up with the selfish miner conditioned on that transaction arriving while the blockchain is in state s.

Computing f_0: Let's consider the possible outcomes when a transaction arrives in state 0:
• If a default miner mines the next block, it will contain this transaction, and this block will definitely be in the eventual longest chain. This happens with probability (1 − α).
• Alternatively, the selfish miner could find the next block. If the selfish miner finds the next block, they will include the transaction in their block, but they keep this block private after they find it. This happens with probability α, but this block is not yet guaranteed to make it into the eventual longest chain.
• From here, maybe the selfish miner finds the next block as well. This happens with probability α. Once this happens, both blocks are guaranteed to be in the eventual longest chain. So this event contributes a probability α^2 that the transaction winds up in the selfish miner's block.
• Alternatively, a default miner might find the next block, which triggers a race. This happens with probability (1 − α). Both racing blocks contain the transaction being considered, so whoever wins the race receives the corresponding transaction fees. The selfish miner wins the race with probability α + γ(1 − α), so this event contributes α(1 − α)(α + γ(1 − α)) in total.

Therefore, we see that:

f_0 = α^2 + α(1 − α)(α + γ(1 − α)).

Computing f_0': If a new transaction is announced in state 0', then the next block found is certainly contained in the eventual longest chain, because it is always announced and every miner chooses to mine on top of it. So this transaction is won by whichever miner finds the next block, which is the selfish miner with probability α. Therefore:

f_0' = α.

Computing f_1: Consider now a transaction announced in state 1, and where it might wind up:
• If the selfish miner finds the next block, they will have a private chain of length 2, in which case both blocks are guaranteed to make it into the final blockchain. Therefore, this transaction will certainly wind up in a block mined by the selfish miner. This happens with probability α.
• Alternatively, the rest of the network might find the next block. This happens with probability (1 − α).
But we don't yet know whether or not this block will make it into the eventual longest chain, because this triggers the “race” and puts us in state 0'. Note, though, that the racing selfish block does not contain this transaction, which arrived once we were already in state 1. Therefore, even if the selfish miner wins the race (because a default miner chose to extend their block), the selfish miner will not get this transaction. So the only way for the selfish miner to win this transaction is to find the block that ends the race. This happens with probability α.

f_1 = α + (1 − α)α = α(2 − α).

Computing f_i: Finally, consider a transaction arriving to the system in state i, i > 1. In these states, it is easier to consider what must happen in order for the transaction to not end up in a block the selfish miner owns. For the transaction to wind up in a default miner's block, it needs to be the case that the selfish miner releases their entire private chain before mining a new block (which would contain this transaction). This is because any blocks found by default miners before this trigger are all orphaned. For a release to be triggered, a default miner must find each of the next i − 1 blocks, which happens with probability (1 − α)^{i−1}. If this happens, we still don't know where this transaction winds up, because each of the i − 1 blocks found will be orphaned. But we have now returned to state 0, and the remainder of the analysis concludes as if the transaction had been announced during state 0. So the probability that a default miner winds up with a transaction arriving in state i is (1 − α)^{i−1}(1 − f_0), and therefore:

f_i = 1 − (1 − α)^{i−1}(1 − f_0).

Summing everything together, we get the following:

Theorem E.1. If all other miners remain DefaultCompliant, a selfish miner in the transaction fees model with an α ∈ (0, 0.5) fraction of the mining power and racing parameter γ ∈ [0, 1] achieves reward Reward(α, γ) with:

Reward(α, γ) = [5α^2 − 12α^3 + 9α^4 − 2α^5 + γ(α − 4α^2 + 6α^3 − 5α^4 + 2α^5)] / (2α^3 − 4α^2 + 1).

Proof.
The only remaining part of the proof is summing p_0 f_0 + p_0' f_0' + p_1 f_1 + Σ_{i>1} p_i f_i.

p_0 f_0 = [(1 − 2α)/(2α^3 − 4α^2 + 1)] · (α^2 + α(1 − α)(α + (1 − α)γ)) = [2α^2 − 5α^3 + 2α^4 + γ(α − 4α^2 + 5α^3 − 2α^4)]/(2α^3 − 4α^2 + 1)

p_0' f_0' = [(1 − α)(α − 2α^2)/(2α^3 − 4α^2 + 1)] · α = (α^2 − 3α^3 + 2α^4)/(2α^3 − 4α^2 + 1)

p_1 f_1 = [(α − 2α^2)/(2α^3 − 4α^2 + 1)] · α(2 − α) = (2α^2 − 5α^3 + 2α^4)/(2α^3 − 4α^2 + 1)

p_i f_i = [(α/(1 − α))^{i−1} − α^{i−1}(1 − f_0)] · (α − 2α^2)/(2α^3 − 4α^2 + 1)

Using Σ_{i>1} (α/(1 − α))^{i−1} = α/(1 − 2α), Σ_{i>1} α^{i−1} = α/(1 − α), and 1 − f_0 = (1 − α)(1 + α(1 − α)(1 − γ)), we get:

Σ_{i>1} p_i f_i = α^2/(2α^3 − 4α^2 + 1) − α(1 + α(1 − α)(1 − γ))(α − 2α^2)/(2α^3 − 4α^2 + 1) = [α^3 + 3α^4 − 2α^5 + γ(α^3 − 3α^4 + 2α^5)]/(2α^3 − 4α^2 + 1).

The proof concludes by just summing the four terms.

E.2 Improved selfish mining with a cutoff

In this section, we complete our analysis of our improved selfish mining with a cutoff. In order to keep the analysis of this strategy tractable, we choose to slightly tweak the strategy we analyze (our theory-matches-simulation plot shows that this tweak is essentially irrelevant). The only tweak we make is that right after the selfish miner releases a chain of length two simultaneously, they immediately publish the next block (if they find it), and then return to selfish mining. In the language of Eyal and Sirer, this is like adding an additional state 0'' in which the selfish miner mines honestly. No matter who finds a block in this state, the next state is 0. The only transition into this state occurs when the honest portion of the network finds a block while the selfish miner has a lead of two; the figure below shows the updated Markov chain with state 0''. Again, note that this modification is just for analysis: the selfish mining with cutoff that is implemented in our simulator is as described in the body.

In order to calculate the selfish miner's expected revenue, we must again calculate the probability of the system being in any given state, and the chance that a transaction arriving to the system while in one of these states eventually ends up in a block mined by the selfish miner.
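Before working through the cutoff analysis, the Section E.1 closed form can be cross-checked by summing its series directly. The sketch below is ours (not the paper's artifact) and just re-expresses the quantities already derived above.

```python
# Sketch (ours): check the Theorem E.1 closed form for the classic selfish
# miner's share of transaction fees against direct summation of
# p_0*f_0 + p_0'*f_0' + p_1*f_1 + sum_{i>1} p_i*f_i.
def reward_closed_form(a, g):
    num = (5*a**2 - 12*a**3 + 9*a**4 - 2*a**5
           + g*(a - 4*a**2 + 6*a**3 - 5*a**4 + 2*a**5))
    return num / (2*a**3 - 4*a**2 + 1)

def reward_by_summation(a, g, terms=500):
    d = 2*a**3 - 4*a**2 + 1
    p0 = (1 - 2*a) / d            # stationary probabilities of the
    p1 = a * p0                   # Eyal-Sirer chain: states 0, 1, 0'
    p0_prime = (1 - a) * p1
    f0 = a**2 + a*(1 - a)*(a + g*(1 - a))
    total = p0*f0 + p0_prime*a + p1*a*(2 - a)
    for i in range(2, terms):     # states i > 1
        p_i = p1 * (a / (1 - a))**(i - 1)
        f_i = 1 - (1 - a)**(i - 1) * (1 - f0)
        total += p_i * f_i
    return total

for a in (0.1, 0.25, 0.4):
    for g in (0.0, 0.5, 1.0):
        assert abs(reward_closed_form(a, g) - reward_by_summation(a, g)) < 1e-9
```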
From looking at the state transitions in the figure, we can derive the following formulas relating the probabilities of being in each state:

p_i α = p_{i+1}(1 − α)  ⇒  p_i = (α/(1 − α))^{i−1} p_1
p_0'' = (1 − α)p_2 = αp_1
p_0' = (1 − α)p_1

Figure: State machine for selfish mining with a cutoff, introducing state 0''.

p_1 = p_0 α(1 − e^{−β})

We also know that the system is guaranteed to be in some state, which means:

p_0 + p_0' + p_0'' + Σ_{i=1}^{∞} p_i = 1,

which together imply that

p_1 = α(2α − 1)(e^β − 1) / (3α^2(e^β − 1) + 2α − e^β).

Together, these give expressions for the probabilities of all the possible states the system could be in. Now we need to compute the probability that a transaction that arrives when the system is in state s winds up with the selfish miner. Unfortunately, this does not admit a clean approach: because in state 0 the selfish miner will sometimes publish and sometimes hide their block, depending on how much time has passed since the last block was found, we actually need to introduce a continuum of states, one for each amount of time x that the block building during state 0 has been accruing fees. So let's define a new variable, p_0(x), which denotes the probability that the system is in state 0 and x units of time have passed since the system entered state 0. Because we introduced this new state 0'', whenever we enter state 0, the initial block is empty. Therefore, the probability that we wind up in state 0 with a block of size at least x is p_0 e^{−x}, and we have:

p_0(x) = p_0 e^{−x} dx.

We must now calculate the associated f_s (the probability that a transaction winds up with the selfish miner conditioned on arriving during state s) in order to calculate the expected fraction of the rewards claimed by the selfish miner.

Computing f_0(x): If a new transaction arrives in state 0, let's look at where this transaction might wind up. Note that this depends on how long it's been (x) since the last block was found.
• If the next block is found by the honest miners, then this transaction will certainly wind up with the honest miners. This happens with probability 1 − α.
• If x ≥ β, and the next block is found by the selfish miner, then it certainly winds up with the selfish miner. This happens with probability α.
• If x < β, and the next block is found by the selfish miner after time β − x has passed, then it certainly winds up with the selfish miner. This happens with probability αe^{−β+x}.
• If x < β, and the next block is found by the selfish miner within β − x time, then this transaction isn't determined yet, because the selfish miner chooses to hide that block. This happens with probability α(1 − e^{−β+x}).
• If both of the next two blocks are found by the selfish miner, then this transaction is contained in a block of the selfish miner that will certainly be included in the eventual longest chain. This happens with probability α^2(1 − e^{−β+x}).
• If the next block is found by the selfish miner (and hidden), followed by a block found by the honest miners, then a race is triggered. This transaction is contained in both racing blocks, so whoever wins the race gets this transaction. The race occurs with probability α(1 − e^{−β+x})(1 − α), and the selfish miner wins the race with probability α + (1 − α)γ.

So in total, we see that f_0(x) = α when x ≥ β, and

f_0(x) = αe^{−β+x} + α^2(1 − e^{−β+x}) + α(1 − α)(1 − e^{−β+x})(α + (1 − α)γ)  if x ≤ β.

Computing f_0': If a new transaction arrives when there are two competing chains of the same length, then the next block found is certainly contained in the eventual longest chain (because all miners choose to mine on top of it). So if the next block is found by the selfish miner, this transaction is won by them; otherwise, it's won by the honest miners. So we have f_0' = α.

Computing f_0'': If a new transaction arrives during state 0'', the next block found is again certainly contained in the eventual longest chain. So we again have f_0'' = α.

Computing f_1:
If a new transaction arrives when the selfish miner has a private chain of length 1, let's consider where the transaction might wind up:
• If the next block is found by the selfish miner, then this transaction is contained in a block of the selfish miner that will certainly be included in the eventual longest chain. This happens with probability α.
• If the next block is found by the honest miners, then this triggers a release of the private block and a race. But the racing selfish block does not contain this transaction, whereas the racing honest block does. So if the racing honest block wins, the honest miners get this transaction. If the racing selfish block wins, whoever finds the block that ends the race gets this transaction. So the selfish miner gets the transaction in this case only if they find the block that ends the race. This happens with probability (1 − α)α.

So we see that f_1 = α + (1 − α)α = α(2 − α).

Computing f_i, i > 1: If a new transaction arrives when the selfish miner has a private chain of length i > 1, let's again consider where this transaction might wind up:
• If the next block is found by the selfish miner, then this transaction is contained in a block of the selfish miner that will certainly be included in the eventual longest chain. This happens with probability α.
• If the next i − 1 blocks are all found by the honest miners, then this triggers a release of the private chain, and all those blocks found by the honest miners are immediately orphaned. At this point, the transaction has still not been included in any block, so it is as if the transaction arrived in state 0''. So the selfish miner gets this transaction with probability f_0'' in this case.
• If any of the next i − 1 blocks are found by the selfish miner, then this block is certainly included in the eventual longest chain, because it is found when the selfish miner has a lead of at least two.
So we see that the only way the selfish miner might possibly lose the transaction is if each of the next i − 1 blocks is found by the honest miners, and even in this case the selfish miner still wins the transaction with probability f_0'' = α. So the honest miners only win this transaction with probability (1 − α)^{i−1}(1 − α), and we have f_i = 1 − (1 − α)^i.

Now, we just have to sum/integrate over all states and success probabilities to compute the fraction of transactions that go to the selfish miner.

f_0' p_0' = α(1 − α)p_1.
f_0'' p_0'' = α^2 p_1.
f_1 p_1 = α(2 − α)p_1.
f_i p_i = (1 − (1 − α)^i) α^{i−1} p_1 / (1 − α)^{i−1}, i > 1.
f_0(x) p_0(x) = p_1 e^{−x} dx / (1 − e^{−β}), x ≥ β.
f_0(x) p_0(x) = p_1 e^{−x} dx (e^{−β+x} + α(1 − e^{−β+x}) + (1 − α)(1 − e^{−β+x})(α + (1 − α)γ)) / (1 − e^{−β}), x ≤ β.

Σ_{i>1} α^{i−1}/(1 − α)^{i−1} = α/(1 − 2α).
Σ_{i>1} α^{i−1} = α/(1 − α).
⇒ Σ_{i>1} f_i p_i = p_1 (Σ_{i>1} α^{i−1}/(1 − α)^{i−1} − (1 − α) Σ_{i>1} α^{i−1}) = (α/(1 − 2α) − α) p_1 = 2α^2 p_1 / (1 − 2α).

∫_{x≥β} f_0(x) p_0(x) = [p_1/(1 − e^{−β})] ∫_{x≥β} e^{−x} dx = e^{−β} p_1 / (1 − e^{−β}).

∫_{x=0}^{β} f_0(x) p_0(x) = ∫_{x=0}^{β} [p_1/(1 − e^{−β})] ((e^{−β} − αe^{−β} − (1 − α)(α + (1 − α)γ)e^{−β}) + (αe^{−x} + (1 − α)(α + (1 − α)γ)e^{−x})) dx
= p_1 βe^{−β}(1 − α − (1 − α)(α + (1 − α)γ)) / (1 − e^{−β}) + p_1 (1 − e^{−β})(α + (1 − α)(α + (1 − α)γ)) / (1 − e^{−β})
= p_1 (βe^{−β}(1 − α)(1 − α − (1 − α)γ) + (1 − e^{−β})(α + (1 − α)(α + (1 − α)γ))) / (1 − e^{−β}).

Summing everything together, we then get:

∫_0^∞ p_0(x) f_0(x) + Σ_{i>1} p_i f_i + p_0' f_0' + p_0'' f_0'' + p_1 f_1
= (βe^{−β}(1 − α)(1 − α − (1 − α)γ)/(1 − e^{−β}) + α + (1 − α)(α + (1 − α)γ) + e^{−β}/(1 − e^{−β}) + 2α^2/(1 − 2α) + 3α − α^2) p_1.

This can be further simplified to yield the bound provided in the paper:

= ((1 + β(1 − α)^2(1 − γ))/(e^β − 1) + α + (1 − α)(α + (1 − α)γ) + 2α^2/(1 − 2α) + 3α − α^2) p_1.
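The simplification above can be cross-checked numerically. The sketch below is ours (not the paper's artifact): it integrates f_0(x)p_0(x) with a midpoint rule, adds the discrete-state terms, and compares against the simplified closed form; everything is scaled by p_1, which cancels from both sides.

```python
import math

# Sketch (ours): numerically integrate f_0(x)*p_0(x), add the discrete-state
# terms, and compare with the simplified closed form. All terms are scaled by
# p_1, which cancels from both sides.
def total_by_pieces(a, g, beta, steps=100000, terms=400):
    norm = 1 - math.exp(-beta)
    acc = 0.0
    h = beta / steps
    for k in range(steps):                 # x < beta, midpoint rule
        x = (k + 0.5) * h
        e = math.exp(x - beta)
        bracket = e + a*(1 - e) + (1 - a)*(1 - e)*(a + (1 - a)*g)
        acc += math.exp(-x) * bracket * h
    integral = acc / norm
    integral += math.exp(-beta) / norm     # x >= beta contribution
    # discrete states: 0', 0'', 1, and i > 1
    tail = sum((a/(1 - a))**(i - 1) - (1 - a)*a**(i - 1) for i in range(2, terms))
    discrete = a*(1 - a) + a*a + a*(2 - a) + tail
    return integral + discrete

def total_closed_form(a, g, beta):
    return ((1 + beta*(1 - a)**2*(1 - g)) / (math.exp(beta) - 1)
            + a + (1 - a)*(a + (1 - a)*g)
            + 2*a*a/(1 - 2*a) + 3*a - a*a)

for a, g, beta in [(0.3, 0.5, 1.0), (0.25, 0.0, 2.0), (0.4, 1.0, 0.5)]:
    assert abs(total_by_pieces(a, g, beta) - total_closed_form(a, g, beta)) < 1e-5
```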
Amy Castor

Binance: fiat off-ramps keep closing, reports of frozen funds, what happened to Catherine Coley?

“Last thing I remember, I was running for the door. I had to find the passage back to the place I was before. ‘Relax,’ said the night man. ‘We are programmed to receive. You can check out any time you like, but you can never leave.’” ~ Eagles

Binance customers are becoming trapped inside of Binance — or at least their funds are — as the fiat exits to the world's largest crypto exchange close around them. You can almost hear the echoes of doors slamming, one by one, down a long empty corridor leading to nowhere.

In the latest bit of unfolding drama, Binance told its customers today that it had disabled withdrawals in British pounds after its key payment partner, Clear Junction, ended its business relationship with the exchange. Clear Junction provides access to Faster Payments through a UK lender called Clear Bank. Faster Payments is a major UK payments network that offers near real-time transfers between the country's banks — the thing the US Federal Reserve hopes to get with FedNow.

In a statement on its website on Monday, Clear Junction said: “Clear Junction can confirm that it will no longer be facilitating payments related to Binance. The decision has been made following the Financial Conduct Authority's recent announcement that Binance is not permitted to undertake any regulated activity in the UK.
We have decided to suspend both GBP and EUR payments and will no longer be facilitating deposits or withdrawals in favor of or on behalf of the crypto trading platform. Clear Junction acts in full compliance with FCA regulations and guidance in regards to handling payments of Binance.”

The Financial Conduct Authority, or FCA, ruled in June that Binance cannot conduct any “regulated activity” in the UK. Binance downplayed the ruling at the time, telling everyone the FCA notice related to Binance Markets Ltd and had “no direct impact on the services provided on binance.com.” Binance waited a day after learning it was cut off by Clear Junction before emailing its customers and telling them that the suspension of payments was temporary.

“This means that there is now no way for UK customers to withdraw GBP from @binance. This came with zero advance warning.” — Crypto Cnut (@cryptocnut), July

“We are working to resume this service as soon as we can,” Binance said. It reassured customers they can still buy crypto with British pounds via credit and debit cards on the platform.

This is the second time in recent weeks that Binance customers have been frozen out of Faster Payments. They were also frozen out at the end of June. A few days later, the service was restored — presumably when Binance started putting payments through Clear Junction. I am guessing that Clear Bank's banking partners warned them that Binance was too risky, and that if they wanted to maintain their banking relationships, they'd better drop Binance as a customer ASAP, so they did.

Binance talks like all of these issues are temporary snafus that it's going to fix in due time. In fact, the exchange's struggle to secure banking in many parts of the world is likely to intensify.
Despite numerous claims in the past about taking its legal obligations seriously, Binance has been loosey-goosey with its anti-money-laundering and know-your-customer rules, opening up loopholes for dirty money to flow through the exchange. Now that the word is out, no bank is going to want to touch them.

Other developments

I wrote about Binance's global pariah status earlier this month. Since I published that story, UK high-street banks have moved to ban Binance, all following the FCA ban. In July, Barclays said it is blocking its customers from using their debit and credit cards to make payments to Binance “to help keep your money safe.” Barclays customers can still withdraw funds from the exchange, however. (Since Clear Junction cut Binance off, credit cards remain the only means for UK customers to get fiat off the exchange at this point.)

“Banks crack down on Binance even further. Just had this text from Barclays about banks blocking credit/debit card payments to Binance to ‘keep money safe’. Turbulent times ahead.” — Thomas Davies, July

Two days later, Binance told its users that it will temporarily disable deposits via Single Euro Payments Area (SEPA) bank transfers — the most used wire method in the EU. Binance blamed the move on “events beyond our control” and indicated users could still make withdrawals via SEPA. Later in July, Santander, another high-street bank, told its customers it was also stopping payments to Binance. “In recent months we have seen a large increase in UK customers becoming the victims of cryptocurrency fraud. Keeping our customers safe is a top priority, so we have decided to prevent payments to Binance following the FCA's warning to consumers,” Santander UK's support page tweeted.
As I detailed in my earlier story, regulators around the world have been putting out warnings about Binance. Poland doesn't regulate crypto markets, but the Polish Financial Supervision Authority also issued a caution about the exchange. Its notice included links to all the other regulatory responses.

Amidst the firestorm, Binance has been whistling Dixie. In July, the exchange sent a letter to its customers, saying “compliance is a journey” and drawing odd parallels between developments in crypto and the introduction of the automobile. “When the car was first invented, there weren't any traffic laws, traffic lights or even safety belts,” said Binance. “Laws and guidelines were developed along the way as the cars were running on the road.”

“Do I look more regulated already? 😂” — CZ 🔶 Binance (@cz_binance), July

Frozen funds, lawsuits, and other red flags

There are a lot of unhappy people on r/binanceus right now complaining that their withdrawals are frozen or suspended — and they can't seem to get a response from customer support either. Binance.US is a subsidiary of Binance Holdings Ltd. Unlike its parent company, Binance.US does not allow highly leveraged crypto-derivatives trading, which is regulated in the US.

A quick look at the subreddit's weekly support thread reveals even more troubling posts about lost access to funds. This mirrors Gizmodo's recent findings.
The media outlet submitted a Freedom of Information Act request with the Federal Trade Commission asking for any customer issues filed with the FTC about Binance. The agency located complaints filed since June — presumably mainly from Binance.US customers. In an article rounding up angry complaints to the FTC about Binance, Gizmodo uncovered some startling patterns: “The first, and arguably most alarming pattern, appears to be people who put large amounts of money into Binance but say they can't get their money out.”

Also, Binance is known for having “maintenance issues” during periods of heavy market volatility. As a result, margin traders, unable to exit their positions, are left to watch in horror while the exchange seizes their margin collateral and liquidates their holdings. Hundreds of traders around the world are now working with a lawyer in France to recoup their losses. In a recent front-page piece, the Wall Street Journal said it suspected that the collective complaints may be the reason why Binance has received continuous warnings from many countries.

If you still have funds on Binance, I would urge you to get them off the exchange now — while you still can. When hordes of people start complaining about lost and frozen funds, it's usually a sign of liquidity problems. We saw a similar pattern leading up to February 2014, when Tokyo bitcoin exchange Mt. Gox bit the dust, and also just before Canadian crypto exchange QuadrigaCX went belly up in early 2019. In both instances, users of those defunct exchanges are still waiting to recoup a portion of their lost funds. Bankruptcy cases take a long, long time, and you are lucky to get back pennies on the dollar.

“Whatever is going on with Binance withdrawals, it does appear to be getting larger. The reason this is of interest (despite it being largely anecdotal) is that a very similar pattern preceded the Mt. Gox episode.
could be this is different, but… sure does feel like an echo… — travis kimmel (@coloradotravis) july , finally, where is catherine coley? in another bizarre development, folks on twitter are wondering what happened to catherine coley, the previous ceo of binance.us. she stepped down in may when brian brooks, the former acting comptroller of the currency, took over. nobody has heard from her since. where did she disappear off to?   coley’s last tweet was on april . and both her linkedin profile and twitter account indicate she is still the ceo of binance.us.  catherine coley, heralded as “the lone female chief of a major crypto exchange in an industry dominated by men” by @forbes completely vanishes off the radar & *nobody* in crypto media wants to talk about it? really? okay. how about @thestalwart, @futuretensenow or @andrewrsorkin? https://t.co/wcbq azx — grant gulovsen (@gulovsen) july , she hasn’t been in any interviews or podcasts. she doesn’t respond to dms, and there are no reports of anyone being able to contact her.  a forbes article from last year says that binance.us may have been set up as a smokescreen — the “tai chi entity” — to divert us regulators from looking too closely at binance, the parent company.  binance.us maintains that it is a separate entity. however, forbes further reported that coley was “chosen” by cz, the ceo of binance, which suggests that binance is more involved with binance.us than it claims.  has cz told her to stop talking? what does she know? catherine, if you are reading this, send us a message! (updated july to clarify that barclays still allows customers to withdraw funds via credit card and to note that binance.us is the tai chi entity.) if you like my work, please subscribe to my patreon for as little as $ a month. your support keeps me going.
tags: barclays, binance, catherine coley, clear bank, clear junction, cz, fca, financial conduct authority, santander. posted on july , july , by amy castor in blogging. one thought on “binance: fiat off-ramps keep closing, reports of frozen funds, what happened to catherine coley?” cryptobuy says: july , at : am honestly binance have been so shady lately, not sure what they are playing at but it’s not a good look for crypto and new investors wanting to come into the market. ranti. centuries.org eternally yours on centuries keeping the dream alive - freiheit i do not recall when i first heard it, but i remember it was introduced by my cousin. this song from münchener freiheit became one of the songs i listen to a lot. the lyrics (see below) resonate more strongly nowadays.
keeping the dream alive (single version) cover by david groeneveld: cover by kim wilde: lyrics: freiheit - keeping the dream alive

tonight the rain is falling
full of memories of people and places
and while the past is calling
in my fantasy i remember their faces

the hopes we had were much too high
way out of reach but we had to try
the game will never be over
because we're keeping the dream alive

i hear myself recalling
things you said to me
the night it all started
and still the rain is falling
makes me feel the way
i felt when we parted

the hopes we had were much too high
way out of reach but we have to try
no need to hide no need to run
'cause all the answers come one by one
the game will never be over
because we're keeping the dream alive

i need you
i love you

the game will never be over
because we're keeping the dream alive

the hopes we had were much too high
way out of reach but we had to try
no need to hide no need to run
'cause all the answers come one by one

the hopes we had were much too high
way out of reach but we had to try
no need to hide no need to run
'cause all the answers come one by one

the game will never be over
because we're keeping the dream alive
the game will never be over
because we're keeping the dream alive

the game will never be over…

lou reed's walk on the wild side if my memory serves me right, i heard about this walk on the wild side song (wikipedia) sometime during my college years in the s. of course, the bass and guitar riff were what captured my attention right away. at that time, being an international student here in the us, i was totally oblivious to the lyrics and the references in them. when i finally understood what the lyrics are about, listening to the song made more sense. here's the footage of the walk on the wild side song (youtube) but what prompted me to write this was the version that amanda palmer sang for neil gaiman.
i was listening to her cd "several attempts to cover songs by the velvet underground & lou reed for neil gaiman as his birthday approaches" and one of the songs was walk on the wild side. i like her rendition of the song, which prompted me to find it on youtube. welp, that platform does not disappoint; it's quite a nice piano rendition. of course, like any other platform that wants you to stay there, youtube also listed various walk on the wild side cover songs. one of them is from alice phoebe lou, a singer-songwriter. her rendition using a guitar is also quite enjoyable (youtube) and now i have a new singer-songwriter to keep an eye on. among other videos that were listed on youtube is the one that kinda blew my mind, walk on the wild side - the story behind the classic bass intro featuring herbie flowers, which explained that those are two basses layered on top of each other. man, what a nice thing to learn something new about this song. :-) tao read it from the lazy yogi on climate change read the whole poem tv news archive from the internet archive i just learned about the existence of the tv news archive (covering news from until the day before today's date) containing news shows from us tv such as pbs, cbs, abc, foxnews, cnn, etc. you can search the captions. they also have several curated collections like news clips regarding nsa or snippets of tv around the world. i think some of you might find this useful. quite a nice collection, imo. public domain day (january , ): what could have entered it in and what did get released copyright law is messy, yo. we won't see a lot of notable and important works entering the public domain here in the us until . other countries, however, got to enjoy many of them first. the public domain review put up a list of creators whose work is entering the public domain for canada, the european union (eu), and many other countries (https://publicdomainreview.org/collections/class-of- /.) for those in the eu, nice to see h.g.
wells' name there (if the uk does withdraw, this might end up not applicable to them. but my knowledge of uk copyright law is zero, so, who knows.) as usual, the center for the study of the public domain at duke university put up a list of some quite well-known works that are still under the extended copyright restriction: http://web.law.duke.edu/cspd/publicdomainday/ /pre- . those works would have entered the public domain if we used the law that was applicable when they were published. i'm still baffled that current copyright hinders research done and published in from being made available freely. greedy publishers… so, thanks to that, the usa doesn't get to enjoy many published works yet. "yet" is the operative word here because we don't know what the incoming administration will do on this topic. considering the next potus is a businessman, i fear the worst. i know: a gloomy first-of-the-year thought, but it is what it is. on a more cheerful note, check the list from john mark ockerbloom on his online books project. it's quite an amazing project he's been working on. of course, there are also writings made available from hathitrust and the gutenberg project, among other things. here's to the next days. xoxo for read the full poem light "light thinks it travels faster than anything but it is wrong. no matter how fast light travels, it finds the darkness has always got there first, and is waiting for it."― terry pratchett, reaper man dot-dot-dot more about bertolt brecht poem assistive technology many people would probably think assistive technology (at) means computer software, applications, or tools that are designed to help blind or deaf people. typically, the first things that come to mind are screen readers, braille displays, screen magnifier apps for desktop reading, or physical objects like hearing aids, wheelchairs, or crutches. a lot of people probably won't think of glasses as an at. perhaps because glasses can be highly personalized to fit one's fashion style.
woodchuck there's a question: how much wood would a woodchuck chuck if a woodchuck could chuck wood? obviously, a woodchuck would chuck as much wood as a woodchuck could. shrugs droplets the story of the chinese farmer "you'll never know what would be the consequences of misfortune. or, you'll never know what would be the consequences of good fortune." — alan watts persistent bat is persistent for the last couple of weeks or so, a bat has somehow managed to sneak in and hide somewhere in the house, flying frantically around the living room every evening around this time of day and causing the cats to run and jump around trying to catch it. we caught this bat every time and delivered it outside, hoping it would never return. but it kept coming back. now i am sort of giving up on trying to catch it. even the cats are no longer paying attention to the bat and just give this "meh" face when they spot it. old window #garage planet cataloging august , coyle's information phil agre and the gendered internet there is an article today in the washington post about the odd disappearance of a computer science professor named phil agre.  the article, entitled "he predicted the dark side of the internet years ago. why did no one listen?" reminded me of a post by agre in after a meeting of computer professionals for social responsibility. although it annoyed me at the time, a talk that i gave there triggered in him thoughts of gender issues;  as a woman i was very much in the minority at the meeting,  but that was not the topic of my talk. but my talk also gave agre thoughts about the missing humanity on the web. i had a couple of primary concerns, perhaps not perfectly laid out, in my talk, "access, not just wires." i was concerned about what was driving the development of the internet and the lack of a service ethos regarding society. access at the time was talked about in terms of routers, modems, t- lines.
there was no thought to organizing or preserving of online information. there was no concept of "equal access". there was no thought to how we would democratize the web such that you didn't need a degree in computer science to find what you needed. i was also very concerned about the commercialization of information. i was frustrated watching the hype as information was touted as the product of the information age. (this was before we learned that "you are the product, not the user" in this environment.) seen from the tattered clothes and barefoot world of libraries, the money thrown at the jumble of un-curated and unorganized "information" on the web was heartbreaking. i said: "it's clear to me that the information highway isn't much about information. it's about trying to find a new basis for our economy. i'm pretty sure i'm not going to like the way information is treated in that economy. we know what kind of information sells, and what doesn't. so i see our future as being a mix of highly expensive economic reports and cheap online versions of the national inquirer. not a pretty picture." - kcoyle in access, not just wires  little did i know how bad it would get. like many or most people, agre heard "libraries" and thought "female." but at least this caused him to think, earlier than many, about how our metaphors for the internet were inherently gendered. "discussing her speech with another cpsr activist ... later that evening, i suddenly connected several things that had been bothering me about the language and practice of the internet. the result was a partial answer to the difficult question, in what sense is the net "gendered"?" -  agre, tno, october this led agre to think about how we spoke then about the internet, which was mainly as an activity of "exploring." that metaphor is still alive with microsoft's internet explorer, but was also the message behind the main web browser software of the time, netscape navigator. 
he suddenly saw how "explore" was a highly gendered activity: "yet for many people, "exploring" is close to defining the experience of the net. it is clearly a gendered metaphor: it has historically been a male activity, and it comes down to us saturated with a long list of meanings related to things like colonial expansion, experiences of otherness, and scientific discovery. explorers often die, and often fail, and the ones that do neither are heroes and role models. this whole complex of meanings and feelings and strivings is going to appeal to those who have been acculturated into a particular male-marked system of meanings, and it is not going to offer a great deal of meaning to anyone who has not. the use of prestigious artifacts like computers is inevitably tied up with the construction of personal identity, and "exploration" tools offer a great deal more traction in this process to historically male cultural norms than to female ones." - agre, tno, october he decried the lack of social relationships on the internet, saying that although you know that other  people are there, you cannot see them.  "why does the space you "explore" in gopher or mosaic look empty even when it's full of other people?" - agre, tno, october none of us knew at the time that in the future some people would experience the internet entirely and exclusively as full of other people in the forms of facebook, twitter and all of the other sites that grew out of the embryos of bulletin board systems, the well, and aol. we feared that the future internet would  not have the even-handedness of libraries, but never anticipated that russian bots and qanon promoters would reign over what had once been a network for the exchange of scientific information. 
it hurts now to read through agre's post arguing for a more library-like online information system because it is pretty clear that we blew through that possibility even before the meeting and were already taking the first steps toward where we are today. agre walked away from his position at ucla in and has not resurfaced, although there have been reports at times (albeit not recently) that he is okay. looking back, it should not surprise us that someone with so much hope for an online civil society should have become discouraged enough to leave it behind. agre was hoping for reference services and an internet populated with users with: "...the skills of composing clear texts, reading with an awareness of different possible interpretations, recognizing and resolving conflicts, asking for help without feeling powerless, organizing people to get things done, and embracing the diversity of the backgrounds and experiences of others." - agre, tno, october  oh, what a world that would be! by karen coyle (noreply@blogger.com) at august , : pm august , tsll techscans (technical services law librarians) details about the shutdown of lawarxiv lawarxiv was launched in to provide legal scholars with an open-access, non-profit platform for preserving their work. by the end of the first year, over articles had been submitted to the archive and there were plans for additional features to make the repository more robust and useful to the legal scholarly community. however, those plans never came to fruition. earlier this year, it was announced that lawarxiv would no longer accept new submissions.  at the recent legal information preservation alliance (lipa) annual meeting, more details were shared about why the lawarxiv project was shutting down. at the heart of the matter were irreconcilable issues with the center for open science (cos), which hosts the lawarxiv platform as well as open-access platforms for a number of other areas of study.
due to insufficient demand from their other partners, cos was unable to support the development of new platform features, including school-level branding and batch uploading, requested by the lawarxiv steering committee. the steering committee was given the option of financing the development of these features, but that option was cost-prohibitive. further stressing the agreement was the fact that cos had also instituted a new annual hosting fee in january, . the steering committee was left questioning whether it was worth paying the annual hosting fee knowing that features crucial to the growth of lawarxiv were not slated for development.   these issues proved to be deal breakers for the project. after extensive research and discussion of various options, the lawarxiv steering committee ultimately decided to end the partnership with cos. the agreement among the member institutions was formally dissolved on june , . while lawarxiv is no longer accepting new submissions, the , articles previously uploaded to the site are still available for the time being on cos’s general preprints platform. tsll tech scans blog by noreply@blogger.com (travis spence) at august , : pm planet cataloging is an automatically-generated aggregation of blogs related to cataloging and metadata designed and maintained by jennifer w. baxmeyer and kevin s. clarke. please feel free to email us if you think a blog should be added to or removed from this list. authors: if you would prefer your blog not be included here, we will be glad to remove it. please send an email to let us know. subscribe to planet cataloging! blog roll . : the dewey blog bibliographic wilderness blog of the ohio library council technical services division catalogablog cataloger .
cataloging futures cataloging thoughts (stephen denney) celeripedean (jennifer eustis) commonplace.net (lukas koster) coyle's information first thus (james weinheimer) hectic pace international society for knowledge organization (isko) uk local weather (matthew beacom) lorcan dempsey's weblog metadata and more (maureen p. walsh) mashcat metadata matters (diane hillmann) metalibrarian oclc next open metadata registry blog organizing stuff outgoing problem cataloger quick t.s. (dodie gaudet) resource description & access (rda) (salman haider) tsll techscans (technical services law librarians) terry's worklog thingology (librarything's ideas blog) universal decimal classification various librarian-like stuff weibel lines work and expression z . .b (www.jenniferbax.net) catalogingrules (amber billey) mod librarian (tracy guza) last updated: august , : pm all times are utc.
jodischneider.com/blog reading, technology, stray thoughts blog about categories argumentative discussions books and reading computer science firefox future of publishing higher education information ecosystem information quality lab news intellectual freedom ios: ipad, iphone, etc. library and information science math old newspapers phd diary programming random thoughts reviews scholarly communication semantic web social semantic web social web uncategorized search paid graduate hourly research position at uiuc for spring december rd, by jodi jodi schneider’s information quality lab (http://infoqualitylab.org) seeks a graduate hourly student for a research project on bias in citation networks. biased citation benefits authors in the short-term by bolstering grants and papers, making them more easily accepted. however, it can have severe negative consequences for scientific inquiry. our goal is to find quantitative measures of network structure that can indicate the existence of citation bias.  this job starts january , . pay depending on experience (master’s students start at $ /hour). optionally, the student can also take a graduate independent study course (generally - credits is or info ). apply on handshake responsibilities will include: assist in the development of algorithms to simulate an unbiased network carry out statistical significance tests for candidate network structure measures attend weekly meetings assist with manuscript and grant preparation required skills proficiency in python or r demonstrated ability to systematically approach a simulation or modeling problem statistical knowledge, such as developed in a course on mathematical statistics and probability (e.g. 
stat statistics and probability i https://courses.illinois.edu/schedule/ /spring/stat/ ) preferred skills knowledge of stochastic processes experience with simulation knowledge of random variate generation and selection of input probability distribution knowledge of network analysis may have taken classes such as stat stochastic processes (https://courses.illinois.edu/schedule/ /spring/stat/ ) or ie advanced topics in stochastic processes & applications (https://courses.illinois.edu/schedule/ /fall/ie/ ) more information: https://ischool.illinois.edu/people/jodi-schneider http://infoqualitylab.org application deadline: monday december th. apply on handshake with the following application materials: resume transcript – such as free university of illinois academic history from banner self-service (https://apps.uillinois.edu, click “registration & records”, “student records and transcripts”, “view academic history”, choose “web academic history”) cover letter: just provide short answers to the following two questions: ) why are you interested in this particular project? ) what past experience do you have that is related to this project?  tags: citation bias, jobs, network analysis, statistical modeling posted in information quality lab news | comments ( ) avoiding long-haul air travel during the covid- pandemic october th, by jodi i would not recommend long-haul air travel at this time. an epidemiological study of a . hour flight from the middle east to ireland concluded that groups ( people), traveling from continents in four groups, who used separate airport lounges, were likely infected in flight. the flight had % occupancy ( passengers/ seats; crew) and took place in summer . (note: i am not an epidemiologist.) the study (published open access): murphy nicola, boland máirín, bambury niamh, fitzgerald margaret, comerford liz, dever niamh, o’sullivan margaret b, petty-saphon naomi, kiernan regina, jensen mette, o’connor lois. 
a large national outbreak of covid- linked to air travel, ireland, summer . euro surveill. ; ( ):pii= . https://doi.org/ . / - .es. . . . irish news sites including rte and the irish times also covered the paper. figure from “a large national outbreak of covid- linked to air travel, ireland, summer ” https://doi.org/ . / - .es. . . . caption in original “passenger seating diagram on flight, ireland, summer (n= passengers)” “numbers on the seats indicate the flight groups – .” the age of the flight cases ranged from to years with a median age of years. twelve of flight cases and almost three quarters ( / ) of the non-flight cases were symptomatic. after the flight, the earliest onset of symptoms occurred days after arrival, and the latest case in the entire outbreak occurred days after the flight. of symptomatic flight cases, symptoms reported included cough (n = ), coryza (n = ), fever (n = ) and sore throat (n = ), and six reported loss of taste or smell. no symptoms were reported for one flight case. a mask was worn during the flight by nine flight cases, not worn by one (a child), and unknown for three. murphy nicola, boland máirín, bambury niamh, fitzgerald margaret, comerford liz, dever niamh, o’sullivan margaret b, petty-saphon naomi, kiernan regina, jensen mette, o’connor lois. a large national outbreak of covid- linked to air travel, ireland, summer . euro surveill. ; ( ):pii= . https://doi.org/ . / - .es. . . . (notes to figure caption) “it is interesting that four of the flight cases were not seated next to any other positive case, had no contact in the transit lounge, wore face masks in-flight and would not be deemed close contacts under current guidance from the european centre for disease prevention and control (ecdc) [ ].” murphy nicola, boland máirín, bambury niamh, fitzgerald margaret, comerford liz, dever niamh, o’sullivan margaret b, petty-saphon naomi, kiernan regina, jensen mette, o’connor lois. 
a large national outbreak of covid- linked to air travel, ireland, summer . euro surveill. ; ( ):pii= . https://doi.org/ . / - .es. . . . “the source case is not known. the first two cases in group became symptomatic within h of the flight, and covid- was confirmed in three, including an asymptomatic case from this group in region a within days of the flight. thirteen secondary cases and one tertiary case were later linked to these cases. two cases from flight group were notified separately in region a with one subsequent secondary family case, followed by three further flight cases notified from region b in two separate family units (flight groups and ). these eight cases had commenced their journey from the same continent and had some social contact before the flight. the close family member of a group case seated next to the case had tested positive abroad weeks before, and negative after the flight. flight group was a household group of which three cases were notified in region c and one case in region d. these cases had no social or airport lounge link with groups or pre-flight and were not seated within two rows of them. their journey origin was from a different continent. a further case (flight group ) had started the journey from a third continent, had no social or lounge association with other cases and was seated in the same row as passengers from group . three household contacts and a visitor of flight group became confirmed cases. one affected contact travelled to region e, staying in shared accommodation with others; of these became cases (attack rate %) notified in regions a, b, c, d, e and f, with two cases of quaternary spread.” murphy nicola, boland máirín, bambury niamh, fitzgerald margaret, comerford liz, dever niamh, o’sullivan margaret b, petty-saphon naomi, kiernan regina, jensen mette, o’connor lois.
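the attack rates quoted from the paper come from a simple ratio: infected people among the exposed divided by the total exposed, and the paper can only give a plausible range rather than a point estimate because some passengers were never tested. a minimal sketch of that calculation, with made-up numbers rather than the paper's actual (elided) counts:

```python
# Hypothetical illustration of an attack-rate range (the counts below are
# invented for illustration, not taken from the Murphy et al. paper).

def attack_rate_range(confirmed, untested, exposed):
    """Return (low, high) attack rates as percentages.

    low  : only laboratory-confirmed cases counted as infected
    high : every untested exposed person additionally assumed infected
    """
    low = 100.0 * confirmed / exposed
    high = 100.0 * (confirmed + untested) / exposed
    return low, high

# e.g. 13 confirmed flight cases, 13 passengers never tested, 49 on board
low, high = attack_rate_range(13, 13, 49)
print(f"attack rate plausibly between {low:.1f}% and {high:.1f}%")
```

the untested passengers only move the upper bound, which is why the reported range is wide even though the confirmed-case count is fixed.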
“in-flight transmission is a plausible exposure for cases in group and group given seating arrangements and onset dates. one case could hypothetically have acquired the virus as a close household contact of a previous positive case, with confirmed case onset date less than two incubation periods before the flight, and symptom onset in the flight case was h after the flight. in-flight transmission was the only common exposure for four other cases (flight groups and ) with date of onset within four days of the flight in all but the possible tertiary case. this case from group developed symptoms nine days after the flight and so may have acquired the infection in-flight or possibly after the flight through transmission within the household.” murphy nicola, boland máirín, bambury niamh, fitzgerald margaret, comerford liz, dever niamh, o’sullivan margaret b, petty-saphon naomi, kiernan regina, jensen mette, o’connor lois. a large national outbreak of covid- linked to air travel, ireland, summer . euro surveill. ; ( ):pii= . https://doi.org/ . / - .es. . . . “genomic sequencing for cases travelling from three different continents strongly supports the epidemiological transmission hypothesis of a point source for this outbreak. the ability of genomics to resolve transmission events may increase as the virus evolves and accumulates greater diversity [ ].” murphy nicola, boland máirín, bambury niamh, fitzgerald margaret, comerford liz, dever niamh, o’sullivan margaret b, petty-saphon naomi, kiernan regina, jensen mette, o’connor lois. a large national outbreak of covid- linked to air travel, ireland, summer . euro surveill. ; ( ):pii= . https://doi.org/ . / - .es. . . . authors note that a large percentage of the flight passengers were infected: “we calculated high attack rates, ranging plausibly from . % to . 
% despite low flight occupancy and lack of passenger proximity on-board.” murphy nicola, boland máirín, bambury niamh, fitzgerald margaret, comerford liz, dever niamh, o’sullivan margaret b, petty-saphon naomi, kiernan regina, jensen mette, o’connor lois. a large national outbreak of covid- linked to air travel, ireland, summer . euro surveill. ; ( ):pii= . https://doi.org/ . / - .es. . . . among the reasons for the uncertainty of this range is that “ flight passengers could not be contacted and were consequently not tested.” (a twelfth passenger “declined testing”.) there is also some inherent uncertainty due to incubation period and possibility of “transmission within the household”, especially after the flight; authors note that “exposure possibilities for flight cases include in-flight, during overnight transfer/pre-flight or unknown acquisition before the flight.” beyond the people on the flight, cases spread to several social groups, across “six of the eight different health regions (regions a–h) throughout the republic of ireland”. flight groups and started their travel from one continent; flight group from another; flight group from a third continent. figure from “a large national outbreak of covid- linked to air travel, ireland, summer ” https://doi.org/ . / - .es. . . . caption in original: “diagram of chains of transmission, flight-related covid- cases, ireland, summer (n= )” tags: air travel, attack rate, covid- , covid , epidemiology, flights, flying, ireland, middle east, pandemic posted in random thoughts | comments ( ) paid undergraduate research position at uiuc for fall & spring august th, by jodi university of illinois undergraduates are encouraged to apply for a position in my lab. i particularly welcome applications from students in the new ischool bs/is degree or in the university-wide informatics minor. while i only have paid position open, i also supervise unpaid independent study projects. dr. 
jodi schneider and the information quality lab seek undergraduate research assistants for % remote work. past students have published research articles, presented posters, earned independent study credit, james scholar research credit, etc. one paid position in news analytics/data science for assessing the impact of media polarization on public health emergencies, funded by the cline center for advanced research in the social sciences. ( hrs/week at $ . /hour + possible independent study – % remote work). covid- news analytics: we seek to understand how public health emergencies are reported and to assess the polarization and politicization of the u.s. news coverage. you will be responsible for testing and improving search parameters, investigating contextual information such as media bias and media circulation, using text mining and data science, and close reading of sample texts. you will work closely with a student who has worked on the opioid crisis – see the past work following poster (try the link twice – you have to log in with an illinois netid): https://compass g.illinois.edu/webapps/discussionboard/do/message?action=list_messages&course_id=_ _ &nav=discussion_board&conf_id=_ _ &forum_id=_ _ &message_id=_ _ applications should be submitted here: https://forms.illinois.edu/sec/ deadline: pm central time sunday august , tags: covid , data science, health controversies, jobs, media polarization, news analytics, research experiences for undergraduates, undergraduate research posted in information quality lab news | comments ( ) #shutdownstem #strike blacklives #shutdownacademia june th, by jodi i greatly appreciated receiving messages from senior people about their participation in the june th #shutdownstem #strike blacklives #shutdownacademia. in that spirit, i am sharing my email bounce message for tomorrow, and the message i sent to my research lab. 
email bounce: i am not available by email today:  this june th is a day of action about understanding and addressing racism, and its impact on the academy, and on stem.  -jodi email to my research lab wednesday is a day of action about understanding and addressing racism, and its impact on the academy, and on stem. i strongly encourage you to use tomorrow for this purpose. specifically, i invite you to think about what undoing racism – moving towards antiracism – means, and what you can do. one single day, by itself, will not cure racism; but identifying what we can do on an ongoing basis, and taking those actions day after day – that can and will have an impact. and, if racism is vivid in your daily life, make #shutdownstem a day of rest. if tomorrow doesn’t suit, i encourage you to reserve a day over the course of the next week, to replace your everyday duties. what does taking this time actually mean? it means scheduling a dedicated block of time to learn more; rescheduling meetings; shutting down your email; reading books and articles and watching videos; and taking time to reflect on recent events and the stress that they cause every single person in our community. what am i doing personally? i’ve cancelled meetings tomorrow, and set an email bounce. i will spend part of the day to think more seriously about what real antiracist action looks like from my position, as a white female academic. this week i will also be using time to re-read white fragility, to finish dreamland burning (a ya novel about the tulsa race riot), and to investigate how to bring bystander training to the ischool. i will also be thinking about the relationship of racism to other forms of oppression – classism, sexism, homophobia, transphobia, xenophobia. 
if you are looking for readings of your own, i can point to a list curated by an anti-racism task force: https://idea.illinois.edu/education for basic information, #shutdownstem #strike blacklives #shutdownacademia website: https://www.shutdownstem.com physicists’ particles for justice: https://www.particlesforjustice.org -jodi tags: #shutdownacademia, #shutdownstem, #strike blacklives, email posted in random thoughts | comments ( ) qotd: storytelling in protest and politics march th, by jodi i recently read francesca polletta’s book it was like a fever: storytelling in protest and politics ( , university of chicago press). i recommend it! it will appeal to researchers interested in topics such as narrative, strategic communication, (narrative) argumentation, or epistemology (here, of narrative). parts may also interest activists. the book’s case studies are drawn from the student nonviolent coordinating committee (sncc) (chapters & ); online deliberation about the / memorial (listening to the city, summer ) (chapter ); women’s stories in law (including, powerfully, battered women who had killed their abusers, and the challenges in making their stories understandable) (chapter ); references to martin luther king by african american congressmen (in the congressional record) and by “leading black political figures who were not serving as elected or appointed officials” (chapter ). several are extended from work polletta previously published from through (see page xiii for citations). the conclusion—“conclusion: folk wisdom and scholarly tales” (pages - )—takes up several topics, starting with canonicity, interpretability, ambivalence.
i especially plan to go back to the last two sections: “scholars telling stories” (pages - )—about narrative and storytelling in analysts’ telling of events—and “towards a sociology of discursive forms” (pages - )—about investigating the beliefs and conventions of narrative and its institutional conventions (and relating those to conventions of other “discursive forms” such as interviews). these set forward a research agenda likely useful to other scholars interested in digging in further. these are foreshadowed a bit in the introduction (“why stories matter”) which, among other things, sets out the goal of developing “a sociology of storytelling”. a few quotes i noted—may give you the flavor of the book: page : “but telling stories also carries risks. people with unfamiliar experiences have found those experiences assimilated to canonical plot lines and misheard as a result. conventional expectations about how stories work, when they are true, and when they are appropriate have also operated to diminish the impact of otherwise potent political stories. for the abused women whom juries disbelieved because their stories had changed in small details since their first traumatized [p ] call to police, storytelling has not been especially effective. nor was it effective for the citizen forum participants who did not say what it was like to search fruitlessly for affordable housing because discussions of housing were seen as the wrong place in which to tell stories.” pages - : “so which is it? is narrative fundamentally subversive or hegemonic? both. as a rhetorical form, narrative is equipped to puncture reigning verities and to uphold them. at times, it seems as if most of the stories in circulation are subtly or not so subtly defying authorities; at others as if the most effective storytelling is done by authorities. to make it more complicated, sometimes authorities unintentionally undercut their own authority when they tell stories. 
and even more paradoxically, undercutting their authority by way of a titillating but politically inconsequential story may actually strengthen it. dissenters, for their part, may find their stories misread in ways that support the very institutions they are challenging….” for those interested in the relations between storytelling, protest, and politics, this all suggests two analytical tasks. one is to identify the features of narrative that allow it to [p ] achieve certain rhetorical effects. the other is to identify the social conditions in which those rhetorical effects are likely to be politically consequential. the surprise is that scholars of political processes have devoted so little attention to either task.” pages - – “so institutional conventions of storytelling influence what people can do strategically with stories. in the previous pages, i have described the narrative conventions that operate in legal adjudication, media reporting, television talk shows, congressional debate, and public deliberation. sociolinguists have documented such conventions in other settings: in medical intake interviews, for example, parole hearings, and jury deliberations. one could certainly generate a catalogue of the institutional conventions of storytelling. to some extent, those conventions reflect the peculiarities of the institution as it has developed historically. they also serve practical functions; some explicit, others less so. i have argued that the lines institutions draw between suitable and unsuitable occasions for storytelling or for certain kinds of stories serve to legitimate the institution.” [specific examples follow] …. “as these examples suggest, while institutions have different conventions of storytelling, storytelling does some of the same work in many institutions. it does so because of broadly shared assumptions about narrative’s epistemological status.
stories are generally thought to be more affecting but less authoritative than analysis, in part because narrative is associated with women rather than men, the private sphere rather than the public one, and custom rather than law. of course, conventions of storytelling and the symbolic associations behind them are neither unitary nor fixed. nor are they likely to be uniformly advantageous for those in power and disadvantageous for those without it. narrative’s alignment [ ] along the oppositions i noted is complex. for example, as i showed in chapter , americans’ skepticism of expert authority gives those telling stories clout. in other words, we may contrast science with folklore (with science seen as much more credible), but we may also contrast it with common sense (with science seen as less credible). contrary to the lamentation of some media critics and activists, when disadvantaged groups have told personal stories to the press and on television talk shows, they have been able to draw attention not only to their own victimization but to the social forces responsible for it.” tags: congressional record, francesca polletta, listening to the city, martin luther king, narrative, qotd, sncc, storytelling, strategic communication, student nonviolent coordinating committee posted in argumentative discussions, books and reading | comments ( ) knowledge graphs: an aggregation of definitions march rd, by jodi i am not aware of a consensus definition of knowledge graph. i’ve been discussing this for a while with liliana giusti serra, and the topic came up again with my fellow organizers of the knowledge graph session at us ts as we prepare for a panel. i’ve proposed the following main features:
  • rdf-compatible
  • has a defined schema (usually an owl ontology)
  • items are linked internally
  • may be a private enterprise dataset (e.g. not necessarily openly available for external linking) or publicly available
  • covers one or more domains
below are some quotes.
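as a toy illustration of those features (typed entities, a small schema, internal links between items), here is a minimal sketch in plain python. every name and namespace below is invented, and a real knowledge graph would use an rdf library and an owl ontology rather than bare tuples; this only shows the shape of the data:

```python
# a toy "knowledge graph" as a set of (subject, predicate, object) triples,
# with a small schema and a conformance check. purely illustrative.

SCHEMA = {
    "ex:Author": {"ex:wrote"},   # class -> properties its instances may use
    "ex:Book": {"ex:title"},
}

graph = {
    ("ex:JaneAusten", "rdf:type", "ex:Author"),
    ("ex:Emma", "rdf:type", "ex:Book"),
    ("ex:JaneAusten", "ex:wrote", "ex:Emma"),   # internal link between entities
    ("ex:Emma", "ex:title", '"Emma"'),
}

def types_of(graph, entity):
    """all declared types of an entity."""
    return {o for s, p, o in graph if s == entity and p == "rdf:type"}

def conforms(graph, schema):
    """every non-type triple must use a property allowed for some type of its subject."""
    for s, p, o in graph:
        if p == "rdf:type":
            continue
        allowed = set().union(*(schema.get(t, set()) for t in types_of(graph, s))) \
            if types_of(graph, s) else set()
        if p not in allowed:
            return False
    return True
```

the point of the sketch is that "has a defined schema" and "items are linked internally" are checkable properties of the data, not just slogans.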
i’d be curious to hear of other definitions, especially if you think there’s a consensus definition i’m just not aware of. “a knowledge graph consists of a set of interconnected typed entities and their attributes.” jose manuel gomez-perez, jeff z. pan, guido vetere and honghan wu. “enterprise knowledge graph: an introduction.”  in exploiting linked data and knowledge graphs in large organisations. springer. part of the whole book: http://link.springer.com/ . / - - - - “a knowledge graph is a structured dataset that is compatible with the rdf data model and has an (owl) ontology as its schema. a knowledge graph is not necessarily linked to external knowledge graphs; however, entities in the knowledge graph usually have type information, defined in its ontology, which is useful for providing contextual information about such entities. knowledge graphs are expected to be reliable, of high quality, of high accessibility and providing end user oriented information services.” boris villazon-terrazas, nuria garcia-santa, yuan ren, alessandro faraotti, honghan wu, yuting zhao, guido vetere and jeff z. pan .  “knowledge graphs: foundations”. in exploiting linked data and knowledge graphs in large organisations.  springer. part of the whole book: http://link.springer.com/ . / - - - - “the term knowledge graph was coined by google in , referring to their use of semantic knowledge in web search (“things, not strings”), and is recently also used to refer to semantic web knowledge bases such as dbpedia or yago. from a broader perspective, any graph-based representation of some knowledge could be considered a knowledge graph (this would include any kind of rdf dataset, as well as description logic ontologies). however, there is no common definition about what a knowledge graph is and what it is not. 
instead of attempting a formal definition of what a knowledge graph is, we restrict ourselves to a minimum set of characteristics of knowledge graphs, which we use to tell knowledge graphs from other collections of knowledge which we would not consider as knowledge graphs. a knowledge graph mainly describes real world entities and their interrelations, organized in a graph. defines possible classes and relations of entities in a schema. allows for potentially interrelating arbitrary entities with each other. covers various topical domains.” paulheim, h. ( ). knowledge graph refinement: a survey of approaches and evaluation methods. semantic web,  ( ), - . http://www.semantic-web-journal.net/system/files/swj .pdf “isi’s center on knowledge graphs research group combines artificial intelligence, the semantic web, and database integration techniques to solve complex information integration problems. we leverage general research techniques across information-intensive disciplines, including medical informatics, geospatial data integration and the social web.” http://usc-isi-i .github.io/home/ just as i was “finalizing” my list to send to colleagues, i found a poster all about definitions: ehrlinger, l., & wöß, w. ( ). towards a definition of knowledge graphs. semantics (posters, demos, success),  . 
http://ceur-ws.org/vol- /paper .pdf its table : selected definitions of knowledge graph has the following definitions (for citations see that paper) “a knowledge graph (i) mainly describes real world entities and their interrelations, organized in a graph, (ii) defines possible classes and relations of entities in a schema, (iii) allows for potentially interrelating arbitrary entities with each other and (iv) covers various topical domains.” paulheim [ ] “knowledge graphs are large networks of entities, their semantic types, properties, and relationships between entities.” journal of web semantics [ ] “knowledge graphs could be envisaged as a network of all kind things which are relevant to a specific domain or to an organization. they are not limited to abstract concepts and relations but can also contain instances of things like documents and datasets.” semantic web company [ ] “we define a knowledge graph as an rdf graph. an rdf graph consists of a set of rdf triples where each rdf triple (s, p, o) is an ordered set of the following rdf terms: a subject s ∈ u ∪ b, a predicate p ∈ u, and an object o ∈ u ∪ b ∪ l. an rdf term is either a uri u ∈ u, a blank node b ∈ b, or a literal l ∈ l.” färber et al. [ ] “[…] systems exist, […], which use a variety of techniques to extract new knowledge, in the form of facts, from the web. these facts are interrelated, and hence, recently this extracted knowledge has been referred to as a knowledge graph.” pujara et al. [ ] “a knowledge graph is a graph that models semantic knowledge, where each node is a real-world concept, and each edge represents a relationship between two concepts” fang, y., kuan, k., lin, j., tan, c., & chandrasekhar, v. ( ). object detection meets knowledge graphs.
https://oar.a-star.edu.sg/jspui/handle/ / “things not strings” – google https://googleblog.blogspot.com/ / /introducing-knowledge-graph-things-not.html tags: knowledge graph, knowledge representation, quotations posted in information ecosystem, semantic web | comments ( ) qotd: doing more requires thinking less december st, by jodi by the aid of symbolism, we can make transitions in reasoning almost mechanically by the eye which would otherwise call into play the higher faculties of the brain. …civilization advances by extending the number of important operations that we can perform without thinking about them. operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments. one very important property for symbolism to possess is that it should be concise, so as to be visible at one glance of the eye and be rapidly written. – whitehead, a.n. ( ). an introduction to mathematics, chapter , “the symbolism of mathematics” (page in this version) ht to santiago nuñez-corrales (illinois page for santiago nuñez-corrales, linkedin for santiago núñez-corrales) who used part of this quote in a conceptual foundations group talk, nov . from my point of view, this is why memorizing multiplication tables is not now irrelevant; why new words for concepts are important; and underlies a lot of scientific advancement. tags: cavalry, modes of thought, qotd, symbolism posted in information ecosystem, random thoughts | comments ( ) qotd: sally jackson on how disagreement makes arguments more explicit june th, by jodi sally jackson explicates the notion of the “disagreement space” in a new topoi article: “a position that remains in doubt remains in need of defense”   “the most important theoretical consequence of seeing argumentation as a system for management of disagreement is a reversal of perspective on what arguments accomplish. 
are arguments the means by which conclusions are built up from established premises? or are they the means by which participants drill down from disagreements to locate how it is that they and others have arrived at incompatible positions? a view of argumentation as a process of drilling down from disagreements suggests that arguers themselves do not simply point to the reasons they hold for a particular standpoint, but sometimes discover where their own beliefs come from, under questioning by others who do not share their beliefs. a logical analysis of another’s argument nearly always involves first making the argument more explicit, attributing more to the author than was actually said. this is a familiar enough problem for analysts; my point is that it is also a pervasive problem for participants, who may feel intuitively that something is seriously wrong in what someone else has said but need a way to pinpoint exactly what. getting beliefs externalized is not a precondition for argument, but one of its possible outcomes.” from sally jackson’s reason-giving and the natural normativity of argumentation. the original treatment of disagreement space is cited to a book chapter revising an issa paper , somewhat harder to get one’s hands on. p , sally jackson. reason-giving and the natural normativity of argumentation. topoi. online first. http://doi.org/ . /s - - - [↩] p , sally jackson. reason-giving and the natural normativity of argumentation. topoi. online first. http://doi.org/ . /s - - - [↩] sally jackson. reason-giving and the natural normativity of argumentation. topoi. online first. http://doi.org/ . /s - - - [↩] jackson s ( ) “virtual standpoints” and the pragmatics of conversational argument. in: van eemeren fh, grootendorst r, blair ja, willard ca (eds) argument illuminated. international centre for the study of argumentation, amsterdam, pp. 
– [↩] tags: argumentation, argumentation norms, disagreement space posted in argumentative discussions | comments ( ) qotd: working out scientific insights on paper, lavoisier case study july th, by jodi …language does do much of our thinking for us, even in the sciences, and rather than being an unfortunate contamination, its influence has been productive historically, helping individual thinkers generate concepts and theories that can then be put to the test. the case made here for the constitutive power of figures [of speech] per se supports the general point made by f.l. holmes in a lecture addressed to the history of science society in . a distinguished historian of medicine and chemistry, holmes based his study of antoine lavoisier on the french chemist’s laboratory notebooks. he later examined drafts of lavoisier’s published papers and discovered that lavoisier wrote many versions of his papers and in the course of careful revisions gradually worked out the positions he eventually made public (holmes, ). holmes, whose goal as a historian is to reconstruct the careful pathways and fine structure of scientific insights, concluded from his study of lavoisier’s drafts we cannot always tell whether a thought that led him to modify a passage, recast an argument, or develop an alternative interpretation occurred while he was still engaged in writing what he subsequently altered, or immediately afterward, or after some interval during which he occupied himself with something else; but the timing is, i believe, less significant than the fact that the new developments were consequences of the effort to express ideas and marshall supporting information on paper ( ). – page xi of rhetorical figures in science by jeanne fahnestock, oxford university press, . she is quoting frederich l. holmes. . scientific writing and scientific discovery. isis : - . doi: . / as moore summarizes, lavoisier wrote at least six drafts of the paper over a period of at least six months. 
however, his theory of respiration did not appear until the fifth draft. clearly, lavoisier’s writing helped him refine and understand his ideas. moore, randy. language—a force that shapes science. journal of college science teaching . ( ): . http://www.jstor.org/stable/ (which i quoted in a review i wrote recently) fahnestock adds: “…holmes’s general point [is that] there are subtle interactions ‘between writing, thought, and operations in creative scientific activity’ ( ).” tags: lavoisier, revision, rhetoric of science, scientific communication, scientific writing posted in future of publishing, information ecosystem, scholarly communication | comments ( ) david liebovitz: achieving care transformation by infusing electronic health records with wisdom may st, by jodi today i am at the health data analytics summit. the title of the keynote talk is achieving care transformation by infusing electronic health records with wisdom. it’s a delight to hear from a medical informaticist: david m. liebovitz (publications in google scholar), md, facp, chief medical information officer, the university of chicago. he graduated from university of illinois in electrical engineering, making this a timely talk as the engineering-focused carle illinois college of medicine gets going. david liebovitz started with a discussion of the data problems — problem lists, medication lists, family history, rules, results, notes — which will be familiar to anyone using ehrs or working with ehr data. he draws attention also to the human problems — both in terms of provider “readiness” (e.g. their vision for population-level health) as well as about “current expectations”. (an example of such an expectation is a “main clinician satisfier” he closed with: u chicago is about to turn on outbound faxing from the ehr!) he mentioned also the importance of resilience. 
he mentioned customizing systems as a risk when the vendor makes upstream changes (this is not unique to healthcare but a threat to innovation and experimentation with information systems in other industries.) still, in managing the ehr, there is continual optimization, scored based on a number of factors. he mentioned: safety quality/patient experience regulatory/legal financial usability/productivity availability of alternative solutions as well as weighting for old requests. he emphasized the complexity of healthcare in several ways: “nobody knew that healthcare could be so complicated.” – potus showing the medicare readmissions adjustment factors pharmacy pricing, an image (showing kickbacks among other things) from “prices that are too high”, chapter , the healthcare imperative: lowering costs and improving outcomes: workshop series summary ( )  national academies press doi: . / an image from “prices that are too high”, chapter , the healthcare imperative: lowering costs and improving outcomes: workshop series summary ( ) icosystem’s diagram of the complexity of the healthcare system icosystem – complexity of the healthcare system another complexity is the modest impact of medical care compared to other factors such as the impact of socioeconomic and political context on equity in health and well-being (see the who image below). for instance, there is a large impact of health behaviors, which “happen in larger social contexts.” (see the relative contribution of multiple determinants to health, august , , health policy briefs) solar o, irwin a. a conceptual framework for action on the social determinants of health. social determinants of health discussion paper (policy and practice). given this complexity, david liebovitz stresses that we need to start with the right model, “simultaneously improving population health, improving the patient experience of care, and reducing per capita cost”. (see stiefel m, nolan k. 
a guide to measuring the triple aim: population health, experience of care, and per capita cost. ihi innovation series white paper. cambridge, massachusetts: institute for healthcare improvement; ). table from stiefel m, nolan k. a guide to measuring the triple aim: population health, experience of care, and per capita cost. ihi innovation series white paper. cambridge, massachusetts: institute for healthcare improvement; . given the modest impact of medical care, and of data, he suggests that we should choose the right outcomes. david liebovitz says that “not enough attention has been paid to usability”; i completely agree and suggest that information scientists, human factors engineers, and cognitive ergonomists help mainstream medical informaticists fill this gap. he put up jakob nielsen’s usability heuristics for user interface design. a vivid example is whether a patient’s resuscitation preferences are shown (which seems to depend on the particular ehr screen): the system doesn’t highlight where we are in the system. for providers, he says user control and freedom are very important. he suggests that there are only a few key tasks. a provider should be able to do any of these things wherever they are in the chart:
  • put a note
  • order something
  • send a message
similarly, the ehr should support recognition (“how do i admit a patient again?”) rather than requiring recall. meanwhile, on the decision support side he highlights the (well-known) problems around interruptions by saying that speed is everything and changing direction is much easier than stopping. here he draws on some of his own work, describing what he calls a “diagnostic process aware workflow” david liebovitz. next steps for electronic health records to improve the diagnostic process. diagnosis ( ) - . doi: . /dx- - can we predict x better?
yes, he says (for instance pointing to table of “can machine-learning improve cardiovascular risk prediction using routine clinical data?” and its machine learning analysis of over , patients, based on variables chosen from previous guidelines and expert-informed selection – generating further support for aspects such as aloneness, access to resources, socio-economic status). but what’s really needed, he says, is to:
  • predict the best next medical step, iteratively
  • predict the best next lifestyle step, iteratively
(and what to do about genes and epigenetic measures?) he shows an image of “all of our planes in the air” from flightaware, drawing the analogy that we want to work on “optimal patient trajectories” — predicting what are the “turbulent events” to avoid. this is not without challenges. he points to three:
  • data privacy (he suggests google deepmind and healthcare in an age of algorithms. powles, j. & hodson, h. health technol. ( ). doi: . /s - - - )
  • two sorts of mismatches between the current situation and where we want to go: for instance the source of data being from finance
  • certain basic current clinician needs (e.g. that a main clinician satisfier is that uchicago is soon to turn on outbound faxing from their ehr — and that an ongoing source of dissatisfaction: managing volume of inbound faxes.)
he closes suggesting that we:
  • finish the basics
  • address key slices of the spectrum descriptive/prescriptive
  • begin the prescriptive journey: impact one trajectory at a time.
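to make the prediction idea concrete, here is a toy logistic risk score in the spirit of the cardiovascular-risk paper mentioned above. the variables, weights, and bias are invented for illustration, not taken from that paper or from any clinical guideline:

```python
import math

# toy logistic risk model: a weighted sum of routine clinical variables
# passed through the logistic function. all numbers here are made up.

def logistic_risk(features, weights, bias):
    """features, weights: dicts keyed by variable name. returns risk in (0, 1)."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

patient = {"age_decades": 6.1, "smoker": 1.0, "systolic_bp_scaled": 1.4}
weights = {"age_decades": 0.35, "smoker": 0.8, "systolic_bp_scaled": 0.5}
risk = logistic_risk(patient, weights, bias=-4.0)
```

an iterative "next best step" recommender, as sketched in the talk, would re-score the patient after each candidate intervention and pick the step that most lowers the predicted risk.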
tags: data analytics, electronic health records, healthcare systems, medical informatics posted in information ecosystem | comments ( )

the digital librarian http://digitallibrarian.org information. organization. access.

libraries and the state of the internet

mary meeker presented her internet trends report earlier this month. if you want a better understanding of how tech and the tech industry is evolving, you should watch her talk and read her slides.

this year’s talk was fairly time-constrained, and she did not go into as much detail as she has in years past. that being said, there is still an enormous amount of value in the data she presents and the trends she identifies via that data.

some interesting takeaways:

  • the growth in total number of internet users worldwide is slowing (the year-to-year growth rate is flat; overall growth is around % new users per year)
  • however, growth in india is still accelerating, and india is now the # global user market (behind china; usa is rd)
  • similarly, there is a slowdown in the growth of the number of smartphone users and number of smartphones being shipped worldwide (still growing, but at a slower rate)
  • android continues to demonstrate growth in marketshare; android devices continue to be significantly less costly than apple devices.
  • overall, there are opportunities for businesses that innovate / increase efficiency / lower prices / create jobs
  • advertising continues to demonstrate strong growth; advertising efficacy still has a ways to go (internet advertising is effective and can be even more so)
  • internet as distribution channel continues to grow in use and importance
  •  brand recognition is increasingly important
  • visual communication channel usage is increasing – generation z relies more on communicating with images than with text
  • messaging is becoming a core communication channel for business interactions in addition to social interactions
  • voice on mobile rapidly rising as important user interface – lots of activity around this
  • data as platform – important!

so, what kind of take-aways might be most useful to consider in the library context? some top-of-head thoughts:

  • in the larger context of the internet, libraries need to be more aggressive in marketing their brand and brand value. we are, by nature, fairly passive, especially compared to our commercial competition, and a failure to better leverage the opportunity for brand exposure leaves the door open to commercial competitors.
  • integration of library services and content through messaging channels will become more important, especially with younger users. (integration may actually be too weak a term; understanding how to use messaging inherently within the digital lifestyles of our users is critical)
  • voice – are any libraries doing anything with voice? integration with amazon’s alexa voice search? how do we fit into the voice as platform paradigm?

one parting thought, that i’ll try to tease out in a follow-up post: libraries need to look very seriously at the importance of personalized, customized curation of collections for users, something that might actually be antithetical to the way we currently approach collection development. think apple music, but for books, articles, and other content provided by libraries. it feels like we are doing this in slices and pieces, but that we have not yet established a unifying platform that integrates with the larger internet ecosystem.
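at its simplest, such personalized curation could be a content-based ranker over the subject metadata libraries already hold. a minimal sketch, with invented titles and tags:

```python
# rank candidate titles for a patron by tag overlap (jaccard similarity)
# with the tags of items they have already borrowed. purely illustrative.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(history_tags, candidates, top_n=3):
    """candidates: {title: tags}. returns up to top_n titles, best match first."""
    ranked = sorted(candidates,
                    key=lambda title: jaccard(history_tags, candidates[title]),
                    reverse=True)
    return ranked[:top_n]

patron = {"data science", "python", "statistics"}
catalog = {
    "practical statistics": {"statistics", "data science"},
    "gardening basics": {"gardening", "hobbies"},
    "python for librarians": {"python", "libraries"},
}
```

an apple-music-style platform would layer collaborative signals from other patrons on top of this; the point here is only that the raw material (catalogue metadata plus circulation history) already exists in libraries.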

meaningful web metrics

this article from wired magazine is a must-read if you are interested in more impactful metrics for your library’s web site. at mpoe, we are scaling up our need for in-house web product expertise, but regardless of how much we invest in terms of staffing, it is likely that the amount of requested web support will always exceed the amount of resourcing we have for that support. leveraging meaningful impact metrics can help us understand the value we get from the investment we make in our web presence, and more importantly help us define what types of impact we want to achieve through that investment. this is no easy feat, but it is good to see that others in the information ecosystem are looking at the same challenges.
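as one concrete example of an outcome-oriented metric, the share of sessions that reach a meaningful action (rather than raw page views) can be computed from an ordinary event log. the event names below are invented for illustration:

```python
from collections import defaultdict

# share of sessions that reached a "success" event (e.g. a catalogue request),
# computed from a simple (session_id, event) log. illustrative only.

def task_completion_rate(events, success_event):
    sessions = defaultdict(set)
    for session_id, event in events:
        sessions[session_id].add(event)
    if not sessions:
        return 0.0
    done = sum(1 for seen in sessions.values() if success_event in seen)
    return done / len(sessions)

log = [
    ("s1", "visit_home"), ("s1", "search"), ("s1", "request_item"),
    ("s2", "visit_home"), ("s2", "search"),
    ("s3", "visit_home"), ("s3", "request_item"),
]
```

the design choice is what counts as a "success" event for your library site; defining that is exactly the impact question the post raises.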

site migrated (mon, oct)
just a quick note: digitallibrarian.org has been migrated to a new server. you may see a few quirks here and there, but things should be mostly in good shape. if you notice anything major, send me a challah. really. a nice bread. or just an email. your choice. 🙂

the new ipad (sun, mar)
i decided that it was time to upgrade my original ipad, so i pre-ordered a new ipad, which arrived this past friday. after a few days, here are my initial thoughts / observations:

  • compared to the original ipad, the new ipad is a huge improvement. much zippier, lighter-feeling, and of course the display is fantastic.
  • i've just briefly tried the dictation feature, and though i haven't used it extensively yet, the accuracy seems pretty darned good. i wonder if a future update will support siri?
  • the beauty of the display cannot be overstated: crisp, clear (especially for someone with aging eyes)
  • i purchased a -gb model with lte, but i have not tried the cell network yet. i did see g show up, so i'm hoping that tucson indeed has the newer network.
  • not really new, but going from the original ipad to the new ipad, i really like the smart cover approach. ditto with the form factor.
  • again, not specific to the new model, the ability to access my music, videos, and apps via icloud means that i can utilize the storage on the ipad more effectively.
  • all in all, i can see myself using the new ipad consistently for a variety of tasks, not just for consuming information. case in point: this post was written on the new ipad.

rd sits meeting - geneva (wed, aug)
back in june i attended the rd sits (scholarly infrastructure technical summit) meeting, held in conjunction with the oai workshop and sponsored by jisc and the digital library federation. this meeting, held in lovely geneva, switzerland, brought together library technologists and technology leaders from north america, europe, australia, and asia for the purpose of exploring common technology and technology-related issues that crossed our geographic boundaries.

this is the first sits meeting that i attended; prior to this meeting, there were two others (one in london and one in california). as this sits meeting was attached to the oai conference, it brought together a group of stakeholders whose roles in their organizations spanned from technology implementers to technology strategists and decision makers. from chatting with some of the folks who had attended previous sits meetings, i gathered that attendance at those meetings leaned heavily toward the technology implementer / developer side, while this particular instance of sits had a broader range of discussion that, while centered on technology, also incorporated much of the context in which technology was being applied. for me, that actually made this a more intriguing and productive discussion: while there are certainly a great variety of strictly technical issues with which we grapple, what often gets lost when talking about the semantic web, linked data, digital preservation, etc. is the purpose of deploying said technology. so, with that piece of context, i'll describe some of the conversation that occurred at this particular sits event.

due to the schedule of oai , this sits meeting was held in two parts: the afternoon of june, and the morning of june. for the first session, the group met in one of the lecture rooms at the conference venue, which worked out quite nicely. sits uses an open agenda / open meeting format, which allows the attendees to nominate and elect the topics of discussion for the meeting. after initial introductions, we began proposing topics. i tried to capture as best i could all of the topics that were proposed, though i might have missed one or two:

* stable links for linked data vs. stable bitstreams for preservation
* authority hubs / clustered ids / researcher ids / orcid in dspace
* effective synchronization of digital resources
* consistency and usage of usage data
* digital preservation architecture: integration of tape-based storage and other storage environments (external to the library)
* integration between repositories and media delivery (i.e. streaming), particularly with respect to access control enforcement
* nano publications and object granularity
* pairing storage with different types of applications
* linking research data to scholarly publications to faculty assessment
* well-behaved documents
* research impacts and outputs
* linked open data: from vision to deployment
* relationship between open linked data and open research data
* name disambiguation

following this process, we took the brainstormed list above and proceeded to vote on which topic to begin discussing. the first topic chosen was researcher identities, which began with discussion of orcid, a project that currently has reasonable mindshare behind it. while there are a lot of backers of orcid, it is not clear whether the approach of a singular researcher id is feasible, though i believe we'll discover the answer based on the success (or not) of the project. in general, i think that most of the attendees will be paying attention to orcid, but a wait-and-see approach is likely, as there are many, many issues around researcher ids that still need to be worked through.

the next topic was the assessment of research impacts and outputs. this topic was not especially technical, but it did bring about some interesting discussion of the impact of assessment activities, both positive and negative.

the next topic, linking research data to scholarly publications to faculty assessment, was a natural progression from the previous one, and much of the discussion revolved around how to support such relationships. i must admit that while i think this topic is important, i didn't feel that the discussion really resolved any of the potential issues with supporting researchers in linking data to publications (and then capturing this data for assessment purposes). what is clear is that the concept of publishing data, especially open data, is not as straightforward as one would hope once you get into the details, such as where to publish data, how to credit such publication, how the data is maintained, etc. there is a lot of work to be done here.

next to be discussed was the preservation of data and software. it was brought up that the sustainability and preservation of data, especially open data, is somewhat analogous to the sustainability and preservation of software, in that both require a certain number of active tasks to ensure that they remain continually usable. it is also clear that much data requires the proper software in order to be usable, and therefore the issues of software and data sustainability and preservation are, in my view, interwoven.

the group then moved to a brief discussion of the harvesting and use of usage data. efforts such as counter and popirus were mentioned. we discussed the ability to track usage data in a way that balances anonymity and privacy against added value back to the user; the fact that usage data can be leveraged to provide better services to users was a key consideration.

the next discussion topic was influenced by the oai workshop. the issue of the synchronisation of resources was discussed; during oai , there was a breakout session that looked at the future of oai-pmh, both in terms of .x sustainability as well as work that might end up resulting in oai-pmh . interestingly, there was some discussion of whether data synchronization is even needed with the advent of linked data; i can see why this would come up, but i personally believe that linked data isn't at the point where other methods for ensuring synchronized data are unnecessary (nor may it ever be).

speaking of linked data, the concept arose in many of the sits discussions, though the group did not officially address it until late in the agenda. i must admit that i've yet to drink the linked data lemonade, in the sense that i really don't see it being the silver bullet that many of its proponents make it out to be, but i do see it as one approach for enabling extended use of data and resources. in the discussion, one of the challenges of the linked data approach that came up was the need to map between ontologies.

at this point, it was getting a bit late into the meeting, but we did talk about two more topics: one very pragmatic, the other a bit more future-thinking (though there might be some disagreement on that). the first was a discussion about how digital preservation architectures were being supported organizationally: were they supported by central it, by library it, or otherwise? it seemed that (not surprisingly) a lot depended upon the specific organization, and that perhaps more coordination could be undertaken through efforts such as pasig. the second discussion was on the topic of "nano-publications", which the group defined as "things that simply tell you what is being asserted (e.g. europe is a continent)". i must admit i got a bit lost about the importance and purpose of nano-publications, but again, it was close to the end of the meeting.

btw, as i'm finishing this, an email just came through with the official notes from the sits meeting, which can be accessed at http://eprints.ecs.soton.ac.uk/ /

david lewis' presentation on collections futures (wed, mar)
peter murray (aka the disruptive library technology jester) has provided an audio overlay of david lewis' slideshare of his plenary at last june's rlg annual partners meeting. if you are at all interested in understanding the future of academic libraries, you should take an hour of your time and listen to this presentation. of particular note, because david says it almost in passing, is that academic libraries are moving away from being collectors of information to being provisioners of information; the difference being that instead of purchasing everything that might be used, academic libraries are moving to ensuring that there is a path for provisioning access to materials that are actually requested for use by their users. again, well worth an hour of your time.

librarians are *the* search experts… (thu, aug)

so i wonder how many librarians know all of the tips and tricks for using google that are mentioned here?
what do we want from discovery? maybe it's to save the time of the user. (wed, aug)
just a quick thought on discovery tools: the major newish discovery services being vended to libraries (worldcat local, summon, ebsco discovery service, etc.) all have their strengths, their complexity, and their middle-of-the-road, politician-like, trying-to-be-everything-to-everybody features. one question i have asked and not yet had a good answer to is "how does your tool save the time of the user?". for me, that's the most important feature of any discovery tool.

show me data or study results that prove your tool saves the time of the user as compared to other vended tools (and google and google scholar), and you have a clear advantage, at least in what i am considering when choosing to implement a discovery tool.

putting a library in starbucks (thu, aug)
it is not uncommon to find a coffee shop in a library these days. turn that concept around, though: would you expect a library inside a starbucks? or maybe that's the wrong question: how would you react to having a library inside a starbucks? well, that concept is shuffling its way toward reality, as starbucks is now experimenting with offering premium (i.e. non-free) content to users while they are on the free wireless that starbucks provides. in fact, starbucks actually has a collection development policy for their content: they are providing content in the following areas, which they call channels: news, entertainment, wellness, business & careers, and my neighborhood. they even call their offerings "curated content".

obviously, this isn't the equivalent of putting the full contents of a library into a coffee shop, but it is worth our time to pay attention to how this new service approach from starbucks evolves. starbucks isn't giving away content for free just to get customers in the door; they are looking at how they might monetize this service through upsell techniques. the business models and agreements are going to have an impact on how libraries do business, and we need to pay attention to how starbucks brokers agreements with content providers. eric hellman's current favorite term, monopsony, comes to mind here, though in reality starbucks isn't buying anything, as no money is actually changing hands, at least to start. content providers are happy to allow starbucks to provide limited access (i.e. limited by geographic location / network access) to content for free in order to promote their content and provide a discovery-to-delivery path that will allow users to extend their use of the content for a price.

this raises the question: should libraries look at upsell opportunities, especially if it means we can reduce our licensing costs? at the very least, the idea is worth exploring.

source: yahoo news

week of ipad (wed, apr)
it has been a little over a week since my ipad was delivered, and in that time i have had the opportunity to try it out at home, at work, and on the road. in fact, i'm currently typing this entry on it from the hotel restaurant at the cni spring task force meeting. i feel that i have used it enough now to provide some of my insights and thoughts about the ipad, how i am using it, and what i think of it.

so, how best to describe the ipad? fun. convenient. fun again. the ipad is more than the sum of its parts; much like the iphone, it provides an overall experience, one that is enjoyable and, yes, efficient. browsing is great fun; i have only run into one site that, because of its lack of flash support, was completely inaccessible (a local restaurant site). a number of sites that i regularly peruse have some flash aspect that is not available via the ipad, but typically this isn't a big loss. for example, if there is an engadget article that contains video, i won't get the video. however, the ny times, espn, and other major sites are already supporting html embedded video, and i expect to see a strong push towards html and away from flash. in the grand scheme of things, most of the sites i browse are text and image based, and have no issues.

likewise for email and calendaring: both work like a charm. email on the ipad is easy, fun, and much better than on the iphone. the keyboard, when in landscape mode, is actually much better than i expected, and very suitable for email replies (not to mention blog posts). i'd go so far as to say that the usability of the onscreen keyboard (when the ipad is in landscape mode) is as good as or better than a typical netbook keyboard. also, an unintended bonus is that typing on the keyboard is pretty much silent; this is somewhat noticeable during conference sessions, where a dozen or so attendees are typing their notes and the clack of their keyboards starts to add up.

so, how am i using my ipad? well, on this trip, i have used it to read (one novel and a bunch of work-related articles), do email, listen to music, watch videos, stream some netflix, browse the web, draft a policy document for my place of employment, diagram a repository architecture, and take notes during conference sessions. could i do all of this on a laptop? sure. could i do all of this on a laptop without plugging in at any point in the day? possibly, with the right laptop or netbook. but here's the thing: at the conference, instead of lugging my laptop bag around with me, my ipad replaced the laptop, my notepad, and everything else i would have dragged around in my bag. i literally only took my ipad, which is actually smaller than a standard paper notebook, and honestly i didn't miss a beat. quickly jot down a note? easy. sketch out an idea? ditto. it's all just right there, all the functionality, in a so-much-more convenient form factor.

is the ipad perfect? by no means: the desktop interface is optimized for the iphone / ipod touch, and feels a bit inefficient on the larger ipad. because of the current lack of multitasking (something that apple has already announced will be available in the next version of the os), i can't keep an im client running in the background. there is no inherent folder system, so saving files outside of applications is more complex than it should be. fingerprints show up much more than i expected, though they wipe away fairly easily with a cloth. the weight ( . lbs) is just enough to make you need to shift how you hold the ipad after a period of time.

again, here's the thing: the ipad doesn't need to be perfect, it needs to be niche. is it niche? ask my laptop bag.

ranti.centuries.org: eternally yours on centuries

keeping the dream alive - freiheit
written by ranti

i don't recall when i first heard it, but i remember it was introduced by my cousin. this song from münchener freiheit became one of the songs i listen to a lot. the lyrics (see below) resonate more strongly nowadays.
keeping the dream alive (single version)
cover by david groeneveld:
cover by kim wilde:

lyrics: freiheit - keeping the dream alive

tonight the rain is falling
full of memories of people and places
and while the past is calling
in my fantasy i remember their faces

the hopes we had were much too high
way out of reach but we had to try

the game will never be over
because we're keeping the dream alive

i hear myself recalling
things you said to me
the night it all started
and still the rain is falling
makes me feel the way i felt when we parted

the hopes we had were much too high
way out of reach but we have to try

no need to hide no need to run
'cause all the answers come one by one

the game will never be over
because we're keeping the dream alive

i need you
i love you

the game will never be over
because we're keeping the dream alive

the hopes we had were much too high
way out of reach but we had to try
no need to hide no need to run
'cause all the answers come one by one

the hopes we had were much too high
way out of reach but we had to try
no need to hide no need to run
'cause all the answers come one by one

the game will never be over
because we're keeping the dream alive
the game will never be over
because we're keeping the dream alive
the game will never be over…

lou reed's walk on the wild side
written by ranti

if my memory serves me right, i heard about the walk on the wild side song (wikipedia) sometime during my college years in the s. of course, the bass and guitar riff were what captured my attention right away. at that time, being an international student here in the us, i was totally oblivious to the lyrics and the references in it. when i finally understood what the lyrics are about, listening to the song made more sense. here's the footage of the walk on the wild side song (youtube). but what prompted me to write this was the version that amanda palmer sang for neil gaiman.
i was listening to her cd "several attempts to cover songs by the velvet underground & lou reed for neil gaiman as his birthday approaches" and one of the songs was walk on the wild side. i like her rendition of the song, which prompted me to find it on youtube. welp, that platform does not disappoint; it's quite a nice piano rendition. of course, like any other platform that wants you to stay there, youtube also listed various walk on the wild side covers. one of them is from alice phoebe lou, a singer-songwriter. her rendition using a guitar is also quite enjoyable (youtube), and now i have a new singer-songwriter to keep an eye on. among the other videos listed on youtube is one that kinda blew my mind, "walk on the wild side - the story behind the classic bass intro featuring herbie flowers," which explained that those are two basses layered on top of each other. man, what a nice thing to learn something new about this song. :-)

tao
written by ranti

read it from the lazy yogi

on climate change
written by ranti

read the whole poem

tv news archive from the internet archive
written by ranti

i just learned about the existence of the tv news archive (covering news from until the day before today's date), containing news shows from us tv such as pbs, cbs, abc, foxnews, cnn, etc. you can search by the captions. they also have several curated collections, like news clips regarding the nsa or snippets of tv around the world. i think some of you might find this useful. quite a nice collection, imo.

public domain day (january , ): what could have entered it in and what did get released
written by ranti

copyright law is messy, yo. we won't see a lot of notable and important works entering the public domain here in the us until . other countries, however, got to enjoy many of them first.
public domain review put up a list of creators whose works are entering the public domain for canada, the european union (eu), and many other countries (https://publicdomainreview.org/collections/class-of- /). for those in the eu, it's nice to see h.g. wells's name there (if the uk does withdraw, this might end up not applicable to them. but my knowledge of uk copyright law is zero, so who knows.) as usual, the center for the study of the public domain at duke university put up a list of some quite well-known works that are still under the extended copyright restriction: http://web.law.duke.edu/cspd/publicdomainday/ /pre- . those works would have entered the public domain if we used the law that was applicable when they were published. i'm still baffled that current copyright hinders research done and published in from being made available freely. greedy publishers… so, thanks to that, the usa doesn't get to enjoy many published works yet. "yet" is the operative word here, because we don't know what the incoming administration will do on this topic. considering the next potus is a businessman, i fear the worst. i know: a gloomy first-of-the-year thought, but it is what it is. on a cheerful side, check the list from john mark ockerbloom on his online books project. it's quite an amazing project he's been working on. of course, there are also writings made available from hathitrust and project gutenberg, among other things. here's to the next days. xoxo

for
written by ranti

read the full poem

light
written by ranti

“light thinks it travels faster than anything but it is wrong.
no matter how fast light travels, it finds the darkness has always got there first, and is waiting for it.” ― terry pratchett, reaper man

dot-dot-dot
written by ranti
more about bertolt brecht's poem

assistive technology
written by ranti
many people would probably think assistive technology (at) means computer software, applications, or tools designed to help blind or deaf people. typically, the first things that come to mind are screen readers, braille displays, screen magnifier apps for desktop reading, or physical objects like hearing aids, wheelchairs, or crutches. a lot of people probably won't think of glasses as an at, perhaps because glasses can be highly personalized to fit one's fashion style.

© — site powered by strong coffee & pictures of happy puppies. this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license.

free range librarian › k.g. schneider's blog on librarianship, writing, and everything else

(dis)association
monday, may ,
walking two roses to their new home, where they would be planted in the front yard.
i have been reflecting on the future of a national association i belong to that has struggled with relevancy and with closing the distance between itself and its members, has distinct factions that differ on fundamental matters of values, faces declining national and chapter membership, needs to catch up on the technology curve, has sometimes problematic vendor relationships, struggles with member demographics and diversity,  and has an uneven and sometimes conflicting national message and an awkward at best relationship with modern communications; but represents something important that i believe in and has a spark of vitality that is the secret to its future. i am not, in fact, writing about the american library association, but the american rose society.  most readers of free range librarian associate me with libraries, but the rose connection may be less visible. i’ve grown roses in nine places i’ve lived in the last thirty-plus years, starting with roses planted in front of a rental house in clovis, new mexico, when i was stationed at cannon air force base in the s, and continuing in pots or slices of garden plots as i moved around the world and later, the united states. basically, if i had an outdoor spot to grow in, i grew roses, either in-ground or in pots, whether it was a slice of sunny backyard in wayne, new jersey, a tiny front garden area in point richmond, california, a sunny interior patio in our fake eichler rental in palo alto, or a windy, none-too-sunny, and cold (but still much-appreciated) deck in our rental in san francisco. when sandy and i bought our sweet little house in santa rosa, part of the move involved rolling large garden pots on my radio flyer from our rental two blocks away. some of you know i’m an association geek, an avocation that has waxed as the years have progressed. 
i join associations because i’m from a generation where that’s done, but another centripetal pull for staying and being involved is that associations, on their own, have always interested me. it’s highly likely that a long time ago, probably when i was stationed in new mexico and, later, germany (the two duty stations where i had the ability to grow roses), i was a member of the american rose society for two or three years. i infer this because i accumulated, then later recycled, their house magazine, american rose, and i also have vague memories of receiving the annual publication, handbook for selecting roses. early this year i joined the redwood empire rose society and a few weeks after that joined the american rose society. i joined the local society because i was eager to plant roses in our new home’s garden and thought this would be a way to tap local expertise, and was won over by the society’s programming, monthly educational events ranging from how to sharpen pruning shears to the habits and benefits of bees (a program where the audience puffed with pride, because roses--if grown without toxic chemical intervention--are highly beneficial bee-attracting pollen plants). i joined the national society less out of need than because i was curious about what ars had to offer people like me who are rose-lovers but average gardeners, and i was also inquisitive about how the society had (or had not) repositioned itself over the years. my own practices around rose gardening have gradually changed, reflecting broader societal trends. thirty years ago, i was an unwitting cog in the agricultural-industrial rose complex. i planted roses that appealed to my senses--attractive, repeat-blooming, and fragrant--and then managed their ability to grow and produce flowers not only by providing the two things all roses need to grow--sun and water--but also through liberal applications of synthetic food and toxic pest and disease products.
the roses i purchased were bred for the most part with little regard for their ability to thrive without toxic intervention or for their suitability for specific regions. garden by garden, my behavior changed. i slowly adopted a “thrive or die” mantra. if a rose could not exist without toxic chemical interventions, then it did not belong in my garden, and i would, in rosarian parlance, “shovel-prune” it and replace it with a rose that could succeed with sun, water, good organic food and amendments, and occasional but not over-fussy attention. eventually, as i moved toward organic gardening and became more familiar with sustainability in general, i absorbed the message that roses are plants, and the soil they grow in is like the food i put in my body: it influences their health. so i had the garden soil tested this winter while i was moving and replacing plants, digging holes that were close to two feet wide and deep. based on the test results, i adjusted the soil accordingly: i used organic soil sulphur to lower the ph, dug in slow-release nitrogen in the form of feather meal, and bathed the plants in a weak solution of organic liquid manganese. as i now do every spring, when it warmed up a bit i also resumed my monthly treatment of fish fertilizer, and this year, based on local rose advice and in a folksier vein, i dressed all the bushes with organic worm castings and alfalfa, both known to have good fertilizing capabilities. alfalfa also has a lot of trace nutrients we know less about but that appear to be important.

princesse charlene de monaco, hybrid tea rose bred by meilland

guess what? science is real! nearly all of the rose bushes are measurably larger and more vigorous. carding mill, a david austin rose, went from a medium shrub to a flowering giant. new roses i planted this spring, such as grand dame and pinkerbelle, are growing much more vigorously than last year’s new plantings.
some of this is due to the long, gloomy, wet winter, which gave roses opportunities to snake their long roots deeper into the good soil we have in sonoma county; my friends are reporting great spring flushes this year. but roses planted even in the last six weeks, such as princesse charlene de monaco and sheila’s perfume, are taking off like a rocket, so it’s not just the rain or the variety. (you do not need to do all this to grow roses that will please you and your garden visitors, including bees and other beneficial insects. i enjoy the process. the key thing is that nearly all of my roses are highly rated for disease resistance and nearly all are reported to grow well in our region.) science–under attack in our national conversations–is also an area of conflict within the ars. presidents of the ars have three-year terms, and the previous president, pat shanley, was an advocate of sustainable rose growing. she spoke and wrote about the value of organic gardening, and championed selecting varieties that do not require toxic intervention to thrive. the theme of the american rose annual was “roses are for everyone,” and this annual is a fascinating look at the sustainable-gardening wing of the ars. most of the articles emphasized the value of what paul zimmerman, a rose evangelist, calls “garden roses,” flowers that everyday people like you and me can grow and enjoy. the message in this annual is reinforced by recent books by longtime rose advocates and ars members, such as peter kukielski’s roses without chemicals and zimmerman’s everyday roses, books i highly recommend for library collections as well as personal use. (roses without chemicals is a book i use when i wake up at odd hours worried about things, because it is beautifully written and photographed and the roses are listed alphabetically.) now the ars has a new president, bob martin, a longtime exhibitor, who in editorials has promoted chemical intervention for roses. 
“and yes, virginia, we do spray our roses,” he wrote in the march/april “first word” editorial in american rose, the house organ of the ars. “as does nearly every serious rose exhibitor and those who want their rose bushes to sustainably produce the most beautiful blooms [emphasis mine].” american rose does not appear to publish letters to the editor. there is no section listed for letters that i can find in any recent issue, and the masthead only lists a street address for “member and subscription correspondence.” otherwise, i would write a short letter protesting the misuse of the term “sustainably,” as well as the general direction of this editorial. i am a rose amateur, and make no bones about it. but i know that equating chemical spraying with sustainability is, hands-down, fake news. it’s one thing to soak roses in toxins and call it a “health maintenance” program, as he does in this article. that’s close to the line but not over it, since he’s from the exhibitors’ wing of ars. but it’s just plain junk science to claim that there is anything connected to sustainability about this approach. i also can’t imagine that this “toxins forever” message is attracting new ars members or encouraging them to renew. it feels disconnected from what motivates average gardeners like me to grow roses today (to enjoy them in their gardens) and from how they want to grow them today (in a manner that honors the earth). frankly, one of the happiest moments in my garden last year was not from personal enjoyment of the flowers or even the compliments of neighbors and passers-by, but when i saw bees doing barrel-rolls in the stamens of my roses, knowing that i was helping, not hurting, their survival. the vast majority of people buying and planting roses these days have no idea there is a single-plant society dedicated to this plant, much less that this society believes it understands their motivations for and interest in roses.
my environmental scan of the literature and the quantities of roses provided by garden stores make me suspect that many people buy roses based on a mix of personal recommendations, marketing guidance (what the vendors are promoting), and what they remember from their family gardens. (i would love to learn there had been market research in this area; vendors may have taken this up.) for average gardeners, those memories include roses such as peace and mr. lincoln, which were bred in the middle of the last century, when the focus was not on disease resistance but on producing the hourglass hybrid tea shape that became the de facto standard for exhibiting. we can get sentimental about roses from the late th century, but many of these varieties also helped perpetuate the idea that roses are hard to grow, despite the many varieties that grew just fine for thousands of years (or, in the case of excellenz von schubert, which i planted this year, years and counting). market persuasion continues today; vendors tempt buyers through savvy marketing plans such as the downton abbey rose series from weeks or david austin’s persistent messaging about “english” roses. note--i own a lovely rose from the downton abbey line, violet’s pride, that is quite the garden champ, and have three david austin roses (carding mill, munstead wood, and gentle hermione). i’m just noting market behavior. it is well-documented in rose literature that the rose that seems to have shaken the ars to the core is the knockout series, which introduced maintenance-free roses to a generation short on time and patience and increasingly invested in sustainable practices throughout their lives, including their gardens. again, smart marketing was part of the formula, because there have always been sustainable roses, and some companies, such as kordes, moved to disease-resistant hybridizing decades ago. but the knockout roses were promoted as an amazing breakthrough.
(it may help to know that new varieties of roses have -year patents during which propagation is legal only through license. i don’t begrudge hybridizers their income, given how much work--sometimes thousands of seedlings--goes into producing a single good rose, but this does factor into how and why roses are marketed.) you don’t need a certificate as a master gardener or membership in a rose society to grow knockout roses or newer competitors such as the oso easy line. you don’t really need to know anything about roses at all, other than that roses grow in sun, not shade, and appreciate water. you also don’t need to spray knockout roses with powerful fungicides to prevent blackspot and mildew. regardless of the public’s reaction to easy-to-grow roses, the rose world’s reception of the knockout rose was mixed, to use an understatement. though the knockout rose was the ars members’ choice rose, rumblings abounded, and knockout was even blamed in popular literature as a vector for the rose rosette virus (rrv), though this was later debunked. fifty years ago rrv was observed in a number of rose varieties, long before the knockout rose appeared. (this mite-spread virus was promulgated in the united states to control a pest rose, rosa multiflora, which was itself introduced without anyone realizing what havoc it would wreak.) again, i’m no scientist, but i would think the appearance of rrv in “domesticated” roses was inevitable, regardless of which rose variety was first identified by name as carrying this disease. rose hybridizing is now catching up with the public’s interests and the wider need for roses with strong disease resistance. rose companies prominently tout disease resistance, and many new varieties can be grown toxin-free. i selected princesse charlene de monaco in part because it medaled as best hybrid tea in the biltmore international rose trials, for which roses must perform well in terms of vigor and disease resistance as well as aesthetic qualities.
there were companies, such as kordes, that walked this walk before it was fashionable, but in typical change-adoption fashion, other vendors are adapting their own practices, because the market is demanding it. but association leadership is driven by different goals than that of for-profit companies. a colleague of mine, after sharing his support for my successful run for ala executive board, commented that it takes expertise to run a $ million organization--skills not everyone has in equal abundance. my further reflection is that the kind of leadership we need at any one time is also unique to that moment, though--with absolutely no aspersions on our current crop of excellent leaders in ala--historically, we have not always selected leadership for either general expertise or current needs, an issue hardly unique to ars or ala. so i watch the ars seesaw. as just one more example, recently i read an article in the same ars email newsletter touting the value of lacewings for insect management, followed by an article about the value of chemical interventions that i know are toxic to beneficial insects. these aren’t just contradictory ideas; they are contradictory values, contradictory messages, and contradictory branding. and these conflicting messages are evident even before we look at the relationship between the national association and local societies (organized differently than ala chapters but with similar intent). if i could deduce the current priorities for ars from its magazine, website, and email newsletters, it would be the renovation of the ars garden in shreveport. the plan to update the -year-old “national rosarium” makes sense, if you like rose gardens, but it sounds more like a call to the passionate few than to the general public. it’s hard to infer other priorities when website sections such as “cyber rosarian” invite members to ask questions that then go unanswered for over a year.
the section called “endorsed products” is its own conflicted mix of chemical interventions, artificial fertilizers, and organic rose food. the website section on rose preservation--a goal embedded in the ars mission statement, “the american rose society exists to promote the culture, preservation and appreciation of the rose”--is a blank page with a note that it is under construction. a section with videos by paul zimmerman is useful, but the rose recommendations by district are incomplete, and they also raise the issue that ars districts are organized geopolitically, not by climate. a rose suited for the long dry summers of sonoma county may not do as well in maui. the ars “modern roses” database has value, listing over , cultivars. but if i want insight into a specific rose, i use helpmefind.com, which despite its generic name and rustic interface is the de facto go-to site for rose information, questions, and discussion, often in the context of region, climate, and approaches to sustainability. i pay a small annual fee for premium access, in part to get hmf’s extra goodies (advanced search and access to lineage information) but primarily because this site gives me value and i want to support their work. though i couldn’t find data on the ars website for membership numbers in national, district, or local societies, i intuit that membership overall is declining. it is in our local society, where, despite great programming in a region where many people grow roses, i am one of the younger members. again, there are larger forces at work with association membership, but pointing to those forces and then doing business as usual is a recipe for slow death. interestingly, the local rose society is aware of its challenges and interested in what it might mean to reposition itself for survival. most recently, we founded a facebook group that anyone could join (look for redwood empire rose society). but the society doesn’t have very much time, and a facebook group isn’t the magic bullet.
to loop back to ala for a moment: i can remember when the response to concerns about membership decline was that the library field was contracting as a whole and association membership was also less popular in general. but these days, ala is invested in moving past these facts and asking, what then? ala is willing to change to survive. and i believe that is why ala will be around years from now, assuming we continue to support human life on this continent. as i ponder all this, deep in my association geekiness, i’m left with these questions: if the ars can’t save itself, who will be there for the roses? will the ad hoc, de facto green-garden rosarians form a new society, will they simply soldier on as a loose federation, or will the vendors determine the future of roses? have rose societies begun talking about strategic redirection, consolidation, and other new approaches? does the ars see itself as a change leader? where does the ars see itself in years? am i just a naive member in the field, totally missing the point, or is there something to what i’m observing, outside the palace walls? i’ve been writing this off and on for months. it’s memorial day and it’s now light enough outside to wander into our front yard, pruners and deadheading bucket in hand, iphone in my pocket so i can share what bloomed while i slept. over time i changed how i grow roses, but not why i grow roses. somewhere in there is an insight, but it’s time to garden.

filed in uncategorized | comments off on (dis)association

i have measured out my life in doodle polls
wednesday, april ,

you know that song? the one you really liked the first time you heard it? and even the fifth or fifteenth? but now your skin crawls when you hear it? that’s me and doodle. in the last three months i have filled out at least a dozen doodle polls for various meetings outside my organization.
i complete these polls at work, where my two-monitor setup means i can review my outlook calendar while scrolling through a doodle poll with dozens of date and time options. i don’t like to inflict doodle polls on our library admin because she has her hands full enough, including managing my real calendar. i have largely given up on earmarking dates on my calendar for these polls, and i just wait for the inevitable scheduling conflicts that come up. some of these polls have so many options that i would have absolutely no time left on my calendar for work meetings, many of which need to be scheduled on fairly short notice. not only that, i gird my loins for the inevitable “we can’t find a date, we’re doodling again” messages that mean once again, i’m going to spend minutes checking my calendar against a doodle poll. i understand the allure of doodle; when i first “met” doodle, i was in love. at last, a way to pick meeting dates without long, painful email threads! but we’re now deep into the tragedy of the doodle commons, with no relief in sight. here are some doodle ideas--you may have your own to toss in. first, when possible, before doodling, i ask for blackout dates. that narrows the available date/time combos and helps reduce the “we gotta doodle again” scenarios. second, if your poll requires more than a little right-scrolling, reconsider how many options you’re providing. a poll with options might as well be asking me to block out april. and i can’t do that. third, i have taken exactly one poll where the pollster chose to suppress other people’s responses, and i hope to never see that again. there is a whole gaming side to doodling in which early respondents get to drive the dates that are selected, and suppressing others’ responses eliminates that capability. plus i want to know who has and hasn’t responded, and yes, i may further game things when i have that information. also, if you don’t have to doodle, just say no.
filed in uncategorized | comments ( )

memento dmv
saturday, march ,

this morning i spent minutes in the appointment line at the santa rosa dmv to get my license renewed and converted to real id, but was told i was “too early” to renew my license, which expires in september, so i have to return after i receive my renewal notice. i could have converted to real id today, but i would still need to return to renew my license, at least as it was explained to me, and i do hope that was correct.

cc by . , https://wellcomecollection.org/works/m wh kmc

but--speaking as a librarian, and therefore from a profession steeped in resource management--i predict chaos in if dmv doesn’t rethink their workflow. we’re months out from october , the point at which people will not be able to board domestic flights if they don’t have a real id or a valid passport, or another (and far less common) substitute. then again, california dmv is already in chaos. their longtime leader retired, the replacement lasted days, and their new leader has been there ca. days. last year featured the license renewal debacle, which i suspect impacted the man standing behind me. he said he was there to apply for his license again because he never received the one he applied for last fall. and california dmv is one of states that still need a real id extension because they didn’t have it together on time. indeed, i was on the appointment line, and nearly everyone in that line was on their second visit to dmv for the task they were trying to accomplish, and not for lack of preparation on their part. some of that was due to various dmv crises, and some of it is baked into dmv processes. based on how their current policies were explained to me today at window , i should never have been on that line in the first place; somewhere in the online appointment process, the dmv should have prevented me from completing that task. i needlessly took up staff time at dmv.
but the bigger problem is a system that gets in its own way, like libraries that lock book drops during the day to force users to enter the libraries to return books. with me standing there at window with my online appointment, my license, and my four types of id, the smart thing to do would have been to complete the process and get me out of the pipeline of real id applicants--or any other dmv activity. but that didn’t happen. and i suspect i’m just one drop in a big, and overflowing, bucket. i suppose an adroit side move is to ensure your passport is current, but i hope we don’t reach the point where we need a passport to travel in our own country.

filed in uncategorized | comments off on memento dmv

an old-skool blog post
friday, march ,

i get up early these days and get stuff done--banking and other elder-care tasks for my mother, leftover work from the previous day, association or service work. a lot of this is writing, but it’s not writing. i have a half-dozen unfinished blog posts in wordpress, and even more in my mind. i map them out and they are huge topics, so then i don’t write them. but looking back at the early days of this blog-- years ago!--i didn’t write long posts. i still wrote long-form for other media, but my blog posts were very much in the moment. so this is an old-skool post designed to ease me back into the writing habit. i’ll strive for twice a week, which is double the output of the original blogger, samuel johnson. i’ll post for minutes and move on to other things. i am an association nerd, and i spend a lot of time thinking about associations of all kinds, particularly the american library association, the american homebrewers association, the american rose society, the redwood empire rose society, the local library advisory boards, my church, and our neighborhood association. serving on the ala steering committee on organizational effectiveness, i’m reminded of a few indelible truths.
one is that during the change management process you need to continuously monitor the temperature of the association you’re trying to change and, in the words of one management pundit, keep fiddling with the thermostat. an association didn’t get that big or bureaucratic overnight, and it’s not going to get agile overnight, either. another is that the same people show up in each association, and--more interesting to me--stereotypes are not at play in determining who the change agents are. i had a great reminder of that years ago, when i served as the library director for one of those tiny barbie dream libraries in upstate new york, and i led the migration from a card catalog to a shared system in a consortium. too many people assumed that the library staff--like so many employees in these libraries, all female, and nearly all older women married to retired spouses--would be resistant to this change. in fact, they loved this change. they were fully on board with the relearning process, and they were delighted and proud to be part of a larger system where they could not only request books from other libraries but sometimes even lend books from our wee collection. there were changes they and the trustees resisted, and that was a good lesson too, but the truism of older women resisting technology was dashed against the rocks of reality. my minutes are up. i am going in early today because i need to print things, not because i am an older woman who fears technology but because our home printer isn’t working and i can’t trust that i’ll have seatback room on my flight to chicago to open my laptop and read the ala executive board manual electronically, let alone annotate it or mark it up.
i still remember the time i was on a flight, using my rpod (red pen of death, a fine-point red-ink sharpie) to revise an essay, and the passenger next to me turned toward me wide-eyed and whispered, “are you a teacher?” such is the power of rpod, an objective correlative that can immediately evoke the fear of correction from decades ago.

filed in american liberry ass'n, association nerd | comments ( )

keeping council
saturday, january ,

editorial note: over half of this post was composed in july . at the time, this post could have been seen as politically neutral (where ala is the political landscape i’m referring to) but tilted toward change and reform. since then, events have transpired. i revised this post in november, but at the time hesitated to post it because events were still transpiring. today, in january , i believe even more strongly in what i write here, but take note that the post didn’t have a hidden agenda when i wrote it, and, except where noted, it still reflects my thoughts from last july, regardless of ensuing events. my agendas tend to be fairly straightforward. -- kgs

original post, in which councilors are urged to council. edits noted with bolding.

as of july , i am back on ala council for my fifth (non-consecutive) term since joining the american library association in . in june i attended council orientation, and though it was excellent--the idea that councilors would benefit from an introduction to the process is a good one that emerged over the last two decades--it did make me reflect on what i would add if there had been a follow-on conversation with sitting councilors called “sharing the wisdom.” i was particularly alerted to that by comments during orientation which pointed up a traditional view of the council process where ala’s largest governing body is largely inactive for over days a year, only rousing when we prepare to meet face to face.
take or leave what i say here, or boldly contradict me, but it does come from an abundance of experience.

you are a councilor year-round

most newly-elected councilors “take their seats” immediately after the annual conference following their election--a factoid with significance. council, as a body, struggles with being a year-round entity that takes action twice a year during highly-condensed meetings during a conference with many other things happening. i have written about this before, in a dryly wonky post from that also addresses council’s composition and the role of chapters. i proposed that council meet four times a year, in a solstice-and-equinox model. two of those meetings (the “solstice” meetings) could be online. (as far back as i was hinting around about the overhead and carbon footprint of midwinter.) i doubt midwinter will go to an online format even within the next decade--it’s a moneymaker for ala, if less so than before, and ala’s change cycle is glacial--but the proposal was intended to get people thinking about how council does, and doesn’t, operate. in lieu of any serious reconsideration of council, here are some thoughts. first, think of yourself as a year-round councilor, even if you do not represent a constituency such as a state chapter or a division that meets and takes action outside of ala. have at least a passing familiarity with the ala policy manual. bookmark it and be prepared to reference it. get familiar with ala’s financial model through the videos that explain things such as the operating agreement. read and learn about ala. share news. read the reports shared on the list, and post your thoughts and your questions. think critically about what you’re reading.
it’s possible to love your association, believe with your heart that it has a bright future, and still raise your eyebrows about pat responses to budget questions, reassurances that membership figures and publishing revenue will rebound, and glib responses about the value of units such as the planning and budget assembly.

come to council prepared. read everything you can in advance, speak with other councilors, and apply solid reflection, and research if needed, before you finish packing for your trip. preparation requires an awareness that you will be deluged with reading just as you are struggling to button up work at your library and preparing to be away for nearly a week, so skimming is essential. i focus on issues where i know i can share expertise, and provide input when i can. also, i am proud we do memorial resolutions and other commemorations but i don’t dwell on them in advance unless i have helped write them or had close familiarity with the people involved.

fee, fie, foe, forum

coming prepared to council is one of those values council has struggled with. looking at the council list for the week prior to annual , the only conversation was a discussion about the relocation of the council forum meeting room from one hotel to another, complete with an inquiry asking if ala could rent a special bus to tote councilors to and from the forum hotel. council forum is an informal convening that has taken place for decades to enable council to discuss resolutions and other actions outside of the strictures of parliamentary procedure. it meets three times during ala, in the evening, and though it is optional, i agree with the councilor who noted that important work happens at this informal gathering.

i am conflicted about forum. it allows substantive discussion about key resolutions to happen outside of the constrictive frameworks of parliamentary procedure. forum is also well-run, with volunteer councilors managing the conversation.
but forum also appears to have morphed into a substitute for reading and conversation in advance. it also means that councilors have to block out yet more time to do “the work of the association,” which in turn takes us away from other opportunities during the few days we are together as an association. i don’t say this to whine about the sacrifice of giving up dinners and networking with ala colleagues, though those experiences are important to me, but rather to point out that forum as a necessary-but-optional council activity takes a silo–that brobdingnagian body that is ala council–and further silos it. that can’t be good for ala. as councilors, we benefit from cross-pollination with the work of the association.

resolved: to tread lightly with resolutions

new councilors, and i was one of them once, are eager to solve ala’s problems by submitting resolutions. indeed, there are new councilors who see resolutions as the work of council, and there have been round tables and other units that clearly saw their work as generating reams of lightly-edited, poorly-written resolutions just prior to and during the conference. there are at least three questions to ask before submitting a resolution (other than memorial and other commemorative resolutions): can the resolution itself help solve a problem? has it been coordinated with the units and people involved in the issue it addresses? is it clear and well-written? there are other questions worth considering, such as, if the issue this resolution proposed to address cropped up a month after council met, would you still push it online with your council colleagues, or ask the ala executive board to address it? which is another way to ask, is it important?

tread lightly with twitter

overall, since coming through the stress of living through the santa rosa fires, i’m feeling weary, and perhaps wary, of social media.
though i appreciate the occasional microbursts taking on idiots insulting libraries and so on, right now much of social media feels at once small and overwrought. if i seem quieter on social media, that’s true. (but i have had more conversations with neighbors and area residents during and after the fires than i have since we moved to santa rosa in early , and those convos are the real thing.) more problematically, as useful as twitter can be for following real-world issues–including ala–twitter also serves as a place where people go to avoid the heavy lifting involved with crucial conversations. i find i like #alacouncil twitter best when it is gently riffing on itself or amplifying action that the larger ala body would benefit from hearing about.

[the following, to the end of this post, is all new content]

i like #alacouncil twitter least when it is used as a substitute for authentic conversation, used to insult other councilors, or otherwise undermining the discourse taking place in the meatware world. twitter is also particularly good at the unthinking pile-on, and many people have vulnerabilities in this area that are easily exploited. sometimes those pile-ons hit me close to home, as happened a little over a year ago. other times these pile-ons serve only to amuse the minx in me, such as when a famous author (™) recently scolded me for “trafficking in respectability politics” because i was recommending a list of books written by writers from what our fearless leader calls “s–thole countries.” guilty as charged! indeed, i have conducted two studies where a major theme was “do i look too gay?” i basically have a ph.d. in respectability politics. and like all writers–including famous author (™)–i traffic in them. i chuckled and walked on by.

walking on by, on twitter, takes different forms. as an administrator, i practice a certain pleasant-but-not-sugary facial expression that stays on my face regardless of what’s going on in my head.
i’m not denying my emotions, which would be the sugary face; i’m managing them. it’s a kind of discipline that also helps me ford difficult conversations, in which the discipline of managing my face also helps me manage my brain. the equivalent of my admin face for #alacouncil twitter is to exercise the mute button. i have found it invaluable. people don’t know they are muted (or unmuted). if only real life had mute buttons–can you imagine how much better some meetings would be if you could click a button and the person speaking would be silenced, unaware that you couldn’t hear them? everyone wins. but that aside, i have yet to encounter a situation on twitter when–for me–muting was the wrong call. it’s as if you stepped off the elevator and got away from that person smacking gum. another car will be along momentarily.

my last thought on this post has to do with adding the term “sitting” before councilors in the first part of this post. when i was not on council i tried very hard not to be “that” former councilor who is always kibitzing behind the scenes, sending councilors messages about how things should be and how, in the s, ala did something bad and therefore we can never vote online because nobody knows how to find ala connect and it’s all a nefarious plot hatched by the ala president, his dimwitted sycophants, and the executive board, and why can’t my division have more representation because after all we’re the -pound gorilla (ok, i just got political, but you’ll note i left out anything about what should or should not be required for a very special job). yes, once in a while i sent a note if i thought it was helpful, the way some of my very ala-astute friends will whisper in my ear about policy and process i may be unfamiliar with. michael golrick, a very connected ala friend of mine, must have a third brain hemisphere devoted to the ala policy manual and bylaws.
and during a time when i was asking a lot of questions about the ala budget (boiling down to one question: who do you think you’re fooling?), i was humbled by the pantheon of ala luminaries whispering in my ear, providing encouragement as well as crucial guidance and information. but when i am no longer part of something, i am mindful that things can and should change and move on, and that i may not have enough information to inform that change. we don’t go to ala in horse-and-buggies any more, but we conduct business as if we do, and when we try to change that, the fainting couches are rolled out and the smelling salts waved around as if we had, say, attempted to change the ala motto, which is, i regret to inform you, “the best reading, for the largest number, at the least cost”–and yes, attempts to change that have been defeated.

my perennial question is, if you were starting an association today, how would it function? if the answer is “as it did in ” (when that motto was adopted), perhaps your advice on a current situation is less salient than you fancy. you may succeed at what you’re doing, but that doesn’t make you right. and with that, i go off to courthouse square today to make exactly that point about events writ much, much larger, and of greater significance, than our fair association. but i believe how we govern makes a difference, and i believe in libraries and library workers, and i believe in ala. especially today.

what burns away thursday, november ,

we are among the lucky ones. we did not lose our home. we did not spend day after day evacuated, waiting to learn the fate of where we live. we never lost power or internet.
we had three or four days where we were mildly inconvenienced because pg&e wisely turned off gas to many neighborhoods, but we showered at the ymca and cooked on an electric range we had been planning to upgrade to gas later this fall (and just did, but thank you, humble frigidaire electric range, for being there to let me cook out my anxiety). we kept our go-bags near the car, and then we kept our go-bags in the car, and then, when it seemed safe, we took them out again. that, and ten days of indoor living and wearing masks when we went out, was all we went through. but we all bear witness.

the foreshadowing

it began with a five-year drought that crippled forests and baked plains, followed by a soaking-wet winter and a lush spring that crowded the hillsides with greenery. summer temperatures hit records several times, and the hills dried out as they always do right before autumn, but this time unusually crowded with parched foliage and growth. the air in santa rosa was hot and dry that weekend, an absence of humidity you could snap between your fingers. in the southwest section of the city, where we live, nothing seemed unusual. like many homes in santa rosa our home does not have air conditioning, so for comfort’s sake i grilled our dinner, our -foot backyard fence buffering any hint of the winds gathering speed northeast of us. we watched tv and went to bed early. less than an hour later one of several major fires would be born just miles east of where we slept.

reports vary, but accounts agree it was windy that sunday night, with windspeeds ranging between and miles per hour, and a gust northwest of santa rosa reaching nearly miles per hour. if the diablo winds were not consistently hurricane-strength, they were exceptionally fast, hot, and dry, and they meant business. a time-lapse map of calls shows the first reports of downed power lines and transformers coming in around pm.
the tubbs fire was named for a road that is named for a th-century winemaker who lived in a house in calistoga that burned to the ground in an eerily similar fire in . in three hours this fire sped miles southwest, growing in size and intent as it gorged on hundreds and then thousands of homes in its way, breaching city limits and expeditiously laying waste to homes in the fountaingrove district before it tore through the journey’s end mobile home park, then reared back on its haunches and leapt across a six-lane divided section of highway , whereupon it gobbled up big-box stores and fast food restaurants flanking cleveland avenue, a business road parallel to the highway. its swollen belly, fat with miles of fuel, dragged over the area and took out buildings in the random manner of fires. kohl’s and kmart were totaled and trader joe’s was badly damaged, while across the street from kmart, joann fabrics was untouched. the fire demolished one mexican restaurant, hopscotched over another, and feasted on a gun shop before turning its ravenous maw toward the quiet middle-class neighborhood of coffey park, making short work of thousands more homes.

santa rosa proper is itself only square miles, approximately miles north-south and miles east-west, including the long tail of homes flanking the annadel mountains. by the time kohl’s was collapsing, the “wildfire” was less than miles from our home. i woke up around am, which i tend to do a lot anyway. i walked outside and smelled smoke, saw people outside their homes looking around, and went on twitter and facebook. there i learned of a local fire, forgotten by most in the larger conflagration, but duly noted in brief by the press democrat: a large historic home at th and pierson burned to the ground, possibly from a downed transformer, and the fire licked the edge of the santa rosa creek trail for another feet.
others in the west end have reported the same experience of reading about the th street house fire on social media and struggling to reconcile the reports of this fire with reports of panic and flight from areas north of us and videos of walls of flame. at am i received a call that the university had activated its emergency operations center and i asked if i should report in. i showered and dressed, packed a change of clothes in a tote bag, threw my bag of important documents in my purse, and drove south on my usual route to work, petaluma hill road. the hills east of the road flickered with fire, the road itself was packed with fleeing drivers, and halfway to campus i braked at mph when a massive buck sprang inches in front of my car, not running in that “oops, is this a road?” way deer usually cross lanes of traffic but yawing to and fro, its eyes wide. i still wonder: was it hurt or dying?

as i drove onto campus i thought, the cleaning crew. i parked at the library and walked through the building, already permeated with smoky air. i walked as quietly as i could, so that if they were anywhere in the building i would hear them. as i walked through the silent building i wondered, is this the last time i will see these books? these computers? the new chairs i’m so proud of? i then went to the eoc and found the cleaning crew had been accounted for, which was a relief.

at least there was food and beer

a few hours later i went home. we had a good amount of food in the house, but like many of us who were part of this disaster but not immediately affected by it, i decided to stock up. the entire santa rosa marketplace–costco, trader joe’s, and target–on santa rosa avenue was closed, and oliver’s had a line outside of people waiting to get in.
i went to the “g&g safeway”–the one that took over a down-at-the-heels family market known as g&g and turned it into a spiffy market with a wine bar, no less–and it was without power, but open for business and, thanks to a backup system, able to take atm cards. i had emergency cash on me but was loath to use it until i had to. sweating through an n mask i donned to protect my lungs, i wheeled my cart through the dark store, selecting items that would provide protein and carbs if we had to stuff them in our go-bags, but also fresh fruit and vegetables, dairy and eggs–things i thought we might not see for a while, depending on how the disaster panned out. (note, we do already have emergency food, water, and other supplies.) the cold case for beer was off-limits–safeway was trying to retain the cold in its freezer and fridge cases in case it could save the food–but there was a pile of cases of lagunitas lil sumpin sumpin on sale, so that, with a couple of bottles of local wine, went home with me too.

and with one wild interlude, for most of the rest of the time we stayed indoors with the windows closed. i sent out email updates and made phone calls, kept my phone charged and read every nixle alert, and people at work checked in with one another. my little green library emergency contact card stayed in my back pocket the entire time. we watched tv and listened to the radio, including extraordinary local coverage by ksro, the little station that could; patrolled newspapers and social media; and rooted for sheriff rob, particularly after his swift smack-down of a bogus, breitbart-fueled report that an undocumented person had started the fires.

our home was unoccupied for a long time before we moved in this september, possibly up to a decade, while it was slowly but carefully upgraded. the electric range was apparently an early purchase; it was a line long discontinued by frigidaire, with humble electric coils.
but it had been unused until we arrived, and was in perfect condition. if an electric range could express gratitude for finally being useful, this one did. i used it to cook homey meals: pork loin crusted with smithfield bacon; green chili cornbread; and my sui generis meatloaf, so named because every time i make it, i grind and add meat scraps from the freezer for a portion of the meat mixture. (it would be several weeks before i felt comfortable grilling again.) we cooked. we stirred. we sauteed. we waited.

on wednesday, we had to run an errand. to be truthful, it was an amazon delivery purchased that saturday, when the world was normal, and sent to an amazon locker at the capacious whole foods at coddington mall, a good place to send a package until the mall closes down because the northeast section of the city is out of power and threatened by a massive wildfire. by wednesday, whole foods had reopened, and after picking up my silly little order–a gadget that holds soda cans in the fridge–we drove past russian river brewing company and saw it was doing business, so we had salad and beer for lunch, because it’s a luxury to have beer at lunch and the fires were raging and it’s so hard to get seating there nights and weekends, when i have time to go there, but there we were. we asked our waiter how he was doing, and he said he was fine but he motioned to the table across from ours, where a family was enjoying pizza and beer, and he said they had lost their homes.

there were many people striving for routine during the fires, and to my surprise, even the city planning office returned correspondence regarding some work we have planned for our new home, offering helpful advice on the permitting process required for minor improvements for homes in historic districts.
because it turns out developers and engineers could serenely ignore local codes and build entire neighborhoods in santa rosa in areas known to be vulnerable to wildfire; but to replace bare dirt with a little white wooden picket fence, or to restore front windows from s-style plate glass to double-hung wooden windows with mullions–projects intended to reinstate our house to its historic accuracy, and to make it more welcoming–requires a written justification of the project, accompanying photos, “proposed elevations (with landscape plan if you are significantly altering landscape) ( copies),” five copies of a paper form, a neighborhood context and vicinity map provided by the city, and a check for $ , followed by “ - weeks” before a decision is issued. the net result of this process is like the codes about not building on ridges, though much less dangerous; most people ignore the permitting process, so that the historic set piece that is presumably the goal is instead rife with anachronisms. and of course, first i had to bone up on the residential building code and the historic district guidelines, which contradict one another on key points, and because the permitting process is poorly documented i have an email traffic thread rivaling in word count byron’s letters to his lovers. but the planning people are very pleasant, and we all seemed to take comfort in plodding through the administrivia of city bureaucracy as if we were not all sheltering in place, masks over our noses and mouths, go-bags in our cars, while fires raged just miles from their office and our home.

the wild interlude, or, i have waited my entire career for this moment

regarding the wild interlude, the first thing to know about my library career is that nearly everywhere i have gone where i have had the say-so to make things happen, i have implemented key management. that mishmosh of keys in a drawer, the source of so much strife and arguments, becomes an orderly key locker with numbered labels.
it doesn’t happen overnight, because keys are control and control is political and politics are what we tussle about in libraries because we don’t have that much money, but it happens. sometimes i even succeed in convincing people to sign keys out so we know who has them. other times i convince people to buy a locker with a keypad so we sidestep the question of where the key to the key locker is kept. but mostly, i leave behind the lockers, and, i hope, an appreciation for lockers. i realize it’s not quite as impressive as founding the library of alexandria, and it’s not what people bring up when i am introduced as a keynote speaker, and i have never had anyone ask for a tour of my key lockers nor have i ever been solicited to write a peer-reviewed article on key lockers. however unheralded, it’s a skill.

my memory insists it was tuesday, but the calendar says it was late monday night when i received a call that the police could not access a door to an area of the library where we had high-value items. it would turn out that this was a rogue lock, installed sometime soon after the library opened in , that unlike others did not have a master registered with the campus, an issue we have since rectified. but in any event, the powers that be had the tremendous good fortune to contact the person who has been waiting her entire working life to prove beyond doubt that key lockers are important.

after a brief internal conversation with myself, i silently nixed the idea of offering to walk someone through finding the key. i said i knew where the key was, and i could be there in twenty minutes to find it. i wasn’t entirely sure this was the case, because as obsessed as i am with key lockers, this year i have been preoccupied with things such as my deanly duties, my doctoral degree completion, national association work, our home purchase and household move, and the selection of geegaws like our new gas range (double oven! center griddle!).
this means i had not spent a lot of time perusing this key locker’s manifest. so there was an outside chance i would have to find the other key, located somewhere in another department, which would require a few more phone calls. i was also in that liminal state between sleep and waking; i had been asleep for two hours after being up since am, and i would have agreed to do just about anything. within minutes i was dressed and again driving down petaluma hill road, still busy with fleeing cars. the mountain ridges to the east of the road roiled with flames, and i gripped the steering wheel, watching for more animals bolting from fire. once in the library, now sour with smoke, i ran up the stairs into my office suite and to the key locker, praying hard that the key i sought was in it. my hands shook. there it was, its location neatly labeled by the key czarina who with exquisite care had overseen the organization of the key locker. the me who lives in the here-and-now profusely thanked past me for my legacy of key management, with a grateful nod to the key czarina as well. what a joy it is to be able to count on people! items were packed up, and off they rolled. after a brief check-in at the eoc, home i went, to a night of “fire sleep”–waking every minutes to sniff the air and ask, is fire approaching?–a type of sleep i would have for the next ten days, and occasionally even now.

how we speak to one another in the here and now

every time sandy and i interact with people, we ask, how are you. not, hey, how are ya, where the expected answer is “fine, thanks” even if you were just turned down for a mortgage or your mother died. but no, really, how are you. like, fire-how-are-you. and people usually tell you, because everyone has a story.
answers range from: i’m ok, i live in petaluma or sebastopol or bodega bay (in soco terms, far from the fire), to i’m ok but i opened my home to family/friends/people who evacuated or lost their homes; or, i’m ok but we evacuated for a week; or, as the guy from home depot said, i’m ok and so is my wife, my daughter, and our cats, but we lost our home. sometimes they tell you and they change the subject, and sometimes they stop and tell you the whole story: when they first smelled smoke, how they evacuated, how they learned they did or did not lose their home. sometimes they have before-and-after photos they show you. sometimes they slip it in between other things, like our cat sitter, who mentioned that she lost her apartment in fountaingrove and her cat died in the fire but in a couple of weeks she would have a home and she’d be happy to cat-sit for us.

now, post-fire, we live in that tritest of phrases, a new normal. the library opened that first half-day back, because i work with people who like me believe that during disasters libraries should be the first buildings open and the last to close. i am proud to report the library also housed nomacares, a resource center for those at our university affected by the fire. that first friday back we held our library operations meeting, and we shared our stories, and that was hard but good. but we also resumed regular activity, and soon the study tables and study rooms were full of students, meetings were convened, work was resumed, and the gears of life turned. but the gears turned forward, not back. because there is no way back.

i am a city mouse, and part of moving to santa rosa was our decision to live in a highly citified section, which turned out to be a lucky call. but my mental model of city life has been forever twisted by this fire. i drive on just four miles north of our home, and there is the unavoidable evidence of a fire boldly leaping into an unsuspecting city.
i go to the fabric store, and i pass twisted blackened trees and a gun store totaled that first night. i drive to and from work with denuded hills to my east a constant reminder. but that’s as it should be. even if we sometimes need respite from those reminders–people talk about taking new routes so they won’t see scorched hills and devastated neighborhoods–we cannot afford to forget. sandy and i have moved around the country in our years together, and we have seen clues everywhere that things are changing and we need to take heed. people like to lapse into the old normal, but it is not in our best interests to do so. all of our stories are different. but we share a collective loss of innocence, and we can never return to where we were. we can only move forward, changed by the fire, changed forever.

neutrality is anything but saturday, august ,

“we watch people dragged away and sucker-punched at rallies as they clumsily try to be an early-warning system for what they fear lies ahead.” — unwittingly prophetic me, march, .

sheet cake photo by flickr user glane . cc by .

sometime after last november, i realized something very strange was happening with my clothes. my slacks had suddenly shrunk, even if i hadn’t washed them. after months of struggling to keep myself buttoned into my clothes, i gave up and purchased slacks and jeans one size larger. i call them my t***p pants.

this post is about two things. it is about the lessons librarians are learning in this frightening era about the nuances and qualifications shadowing our deepest core values–an era so scary that quite a few of us, as tina fey observed, have acquired t***p pants. and it’s also some advice, take it or leave it, on how to “be” in this era. i suspect many librarians have had the same thoughts i have been sharing with a close circle of colleagues. most librarians take pride in our commitment to free speech.
we see ourselves as open to all viewpoints. but in today’s new normal, we have seen that even we have limits. this week, the acrl board of directors put out a statement condemning the violence in charlottesville. that was the easy part. the board then stated, “acrl is unwavering in its long-standing commitment to free exchange of different viewpoints, but what happened in charlottesville was not that; instead, it was terrorism masquerading as free expression.”

you can look at what happened in charlottesville and say there was violence “from many sides,” some of it committed by “very fine people” who just happen to be nazis surrounded by their own private militia of heavily-armed white nationalists. or you can look at charlottesville and see terrorism masquerading as free expression, where triumphant hordes descended upon a small university town under the guise of protecting some lame-ass statue of an american traitor, erected sixty years after the end of the civil war, not coincidentally during a very busy era for the klan. decent people know the real reason the nazis were in charlottesville: to tell us they are empowered and emboldened by our highest elected leader. there is no middle ground. you can’t look at charlottesville and see everyday people innocently exercising first amendment rights.

as i and many others have argued for some time now, libraries are not neutral. barbara fister argues, “we stand for both intellectual freedom and against bigotry and hate, which means some freedoms are not countenanced.” she goes on to observe, “we don’t have all the answers, but some answers are wrong.” it stands to reason that if some answers are wrong, so are some actions. in these extraordinary times, i found myself for the first time ever thinking the aclu had gone too far; that there is a difference between an unpopular stand, and a stand that is morally unjustifiable.
so i was relieved when the national aclu concurred with its three northern california chapters that “if white supremacists march into our towns armed to the teeth and with the intent to harm people, they are not engaging in activity protected by the united states constitution. the first amendment should never be used as a shield or sword to justify violence.” but i was also sad, because once again, our innocence has been punctured and our values qualified. every asterisk we put after “free speech” is painful. it may be necessary and important pain, but it is painful all the same.

many librarians are big-hearted people who like to think that our doors are open to everyone and that all viewpoints are welcome, and that enough good ideas, applied frequently, will change people. and that is actually very true, in many cases, and if i didn’t think it was true i would conclude i was in the wrong profession. but we can’t change people who don’t want to be changed. listen to this edition of the daily, a podcast from the new york times, where american fascists plan their activities. these are not people who are open to reason. as david lankes wrote, “there are times when a community must face the fact that parts of that community are simply antithetical to the ultimate mission of a library.”

we urgently need to speak with one voice as a profession on these issues. i was around for–was part of–the “filtering wars” of the s, when libraries grappled with the implications of the internet bringing all kinds of content into libraries, which also challenged our core values. when you’re hand-selecting the materials you share with your users, you can pretend you’re open to all points of view. the internet challenged that pretense, and we struggled and fought, and were sometimes divided by opportunistic outsiders. we are fortunate to have strong ala leadership this year.
the ala board and president came up swinging on tuesday with an excellent presser that stated unequivocally that “the vile and racist actions and messages of the white supremacist and neo-nazi groups in charlottesville are in stark opposition to the ala’s core values,” a statement that (in the tradition of ensuring chapters speak first) followed a strong statement from our virginia state association.  arl also chimed in with a stemwinder of a statement.  i’m sure we’ll see more. but ala’s statement also describes the mammoth horns of the library dilemma. as i wrote colleagues, “my problem is i want to say i believe in free speech and yet every cell in my body resists the idea that we publicly support white supremacy by giving it space in our meeting rooms.” if you are in a library institution that has very little likelihood of exposure to this or similar crises, the answers can seem easy, and our work appears done. but for more vulnerable libraries, it is crucial that we are ready to speak with one voice, and that we be there for those libraries when they need us. how we get there is the big question. i opened this post with an anecdote about my t***p pants, and i’ll wrap it up with a concern. it is so easy on social media to leap in to condemn, criticize, and pick apart ideas. take this white guy, in an internet rag, the week after the election, chastising people for not doing enough.  you know what’s not enough? sitting on twitter bitching about other people not doing enough. this week, siva vaidhyanathan posted a spirited defense of a tina fey skit where she addressed the stress and anxiety of these political times.  siva is in the center of the storm, which gives him the authority to state an opinion about a sketch about charlottesville. i thought fey’s skit was insightful on many fronts. it addressed the humming anxiety women have felt since last november (if not earlier). 
it was–repeatedly–slyly critical of inaction: “love is love, colin.” it even had a rupaul joke. a lot of people thought it was funny, but then the usual critics came out to call it naive, racist, un-funny, un-woke, advocating passivity, whatever.

we are in volatile times, and there are provocateurs from outside, but also from inside. think. breathe. step away from the keyboard. take a walk. get to know the mute button in twitter and the unfollow feature in facebook. pull yourself together and think about what you’re reading, and what you’re planning to say. interrogate your thinking, your motives, your reactions.

i’ve read posts by librarians deriding their peers for creating subject guides on charlottesville, saying instead we should be punching nazis. get a grip. first off, in real life, that scenario is unlikely to transpire. you, buried in that back cubicle in that library department, behind three layers of doors, are not encountering a nazi any time soon, and if you did, i recommend fleeing, because that wackdoodle is likely accompanied by a trigger-happy militiaman carrying a loaded gun. (there is an entire discussion to be had about whether meeting violence with violence is the politically astute response, but that’s for another day.) second, most librarians understand that their everyday responses to what is going on in the world are not in and of themselves going to defeat the rise of fascism in america. but we are information specialists, and it’s totally wonderful and cool to respond to our modern crisis with information, and we need to be supportive and not go immediately into how we are all failing the world. give people a positive framework for more action, not scoldings for not doing enough. in any volatile situation, we need to slow the eff down and ask how we’re being manipulated and to what end; that is a lesson the aclu just learned the hard way.
my colleague michael stephens is known for saying, “speak with a human voice.” i love his advice, and i would add, make it the best human voice you have. we need one another, more than we know.   bookmark to: filed in intellectual freedom, librarianship | | comments ( ) mpow in the here and now sunday, april , sometimes we have monsters and ufos, but for the most part it’s a great place to work i have coined a few biblioneologisms in my day, but the one that has had the longest legs is mpow (my place of work), a convenient, mildly-masking shorthand for one’s institution. for the last four years i haven’t had the bandwidth to coin neologisms, let alone write about mpow*. this silence could be misconstrued. i love what i do, and i love where i am. i work with a great team on a beautiful campus for a university that is undergoing a lot of good change. we are just wrapping up the first phase of a visioning project to help our large, well-lit building serve its communities well for the decades to come. we’re getting ready to join the other csu libraries on onesearch, our first-ever unified library management system. we have brought on some great hires, thrown some great events (the last one featured four black panthers talking about their life work — wow!). with a new dean (me) and a changing workforce, we are developing our own personality. it’s all good… and getting better the library was doing well when i arrived, so my job was to revitalize and switch it up. as noted in one of the few posts about mpow, the libraries in my system were undergoing their own reassessment, and that has absorbed a fair amount of our attention, but we continue to move forward. sometimes it’s the little things. 
you may recall i am unreasonably proud of the automated table of contents i generated for my dissertation, and i also feel that way about mpow’s slatwall book displays, which in ten areas beautifully market new materials in spaces once occupied by prison-industry bookcases or ugly carpet and unused phones (what were the phones for? perhaps we will never know). the slatwall was a small project that was a combination of expertise i brought from other libraries, good teamwork at mpow, and knowing folks. the central problem was answered quickly by an email to a colleague in my doctoral program (hi, cindy!) who manages public libraries where i saw the displays i thought would be a good fit. the team selected the locations, a staff member with an eye for design recommended the color, everyone loves it, and the books fly off the shelves. if there is any complaining, it is that we need more slatwall. installed slatwall needs to wait until we know if we are moving/removing walls as part of our building improvements. a bigger holdup is that we need to hire an access services manager, and really, anything related to collections needs the insight of a collections librarian. people… who need people… but we had failed searches for both these positions… in the case of collections, twice. *cue mournful music* we have filled other positions with great people now doing great things, and are on track to fill more positions, but these two, replacing people who have retired, are frustrating us. the access services position is a managerial role, and the collections librarian is a tenure-track position. both offer a lot of opportunity. we are relaunching both searches very soon (i’ll post a brief update when that happens), and here’s my pitch. if you think you might qualify for either position, please apply. give yourself the benefit of the doubt. if you know someone who would be a good fit for either position, ask them to apply. 
i recently mentored someone who was worried about applying to a position. “will that library hold it against me if i am not qualified?” the answer is of course not! (and if they do, well, you dodged that bullet!) i have watched far too many people self-select out of positions they were qualified for (hrrrrmmmm particularly one gender…). qualification means expertise + capacity + potential. we expect this to be a bit of a stretch for you. if a job is really good, most days will have a “fake it til you make it” quality.

this is also not a “sink or swim” institution. if it ever was, those days are in the dim past, long before i arrived. the climate is positive. people do great things and we do our best to support them. i see our collective responsibility as an organization as helping one another succeed. never mind me and my preoccupation with slatwall (think of it as something to keep the dean busy and happy, like a baby with a binky). we are a great team, a great library, on a great campus, and we’re a change-friendly group with a minimum of organizational issues, and i mean it. i have worked enough places to put my hand on a bible and swear to that. it has typical organizational challenges, and it’s a work in progress… as are we all.

the area is crazily expensive, but it’s also really beautiful and so convenient for any lifestyle. you like city? we got city. you like suburb, or ocean, or mountain, or lake? we got that! anyway, that’s where i am with mpow: i’m happy enough, and confident enough, to use this blog post to beg you oh please help us fill these positions. the people who join us will be glad you did. ###

* sidebar: the real hilarity of coining neologisms is that quite often someone, generally of a gender i do not identify with, will heatedly object to the term, as happened in when i coined the term biblioblogosphere. then, as i noted in that post from , others will defend it.
that leads me to believe that creating new words is the linguistic version of lifting one’s hind leg on a tree. bookmark to: filed in uncategorized | | comments ( ) questions i have been asked about doctoral programs wednesday, march , about six months ago i was visiting another institution when someone said to me, “oh, i used to read your blog, back in the day.” ah yes, back in the day, that pleistocene era when i wasn’t working on a phd while holding down a big job and dealing with the rest of life’s shenanigans. so now the phd is done–i watched my committee sign the signature page, two copies of it, even, before we broke out the champers and celebrated–and here i am again. not blogging every day, as i did once upon a time, but still freer to put virtual pen to electronic paper as the spirit moves me. i have a lot to catch up on–for example, i understand there was an election last fall, and i hear it may not have gone my way–but the first order of business is to address the questions i have had from library folk interested in doctoral programs. note that my advice is not directed at librarians whose goal is to become faculty in lis programs. dropping back in one popular question comes from people who had dropped out of doctoral programs. could they ever be accepted into a program again? i’m proof there is a patron saint for second chances. i spent one semester in a doctoral program in and dropped out for a variety of reasons–wrong time, wrong place, too many life events happening. at the time, i felt that dropping out was the academic equivalent of you’ll never eat lunch in this town again, but part of higher education is a series of head games, and that was one of them. the second time around, i had a much clearer idea of what i wanted from a program and what kind of program would work for me, and i had the confluence of good timing and good luck. 
the advice tom galvin gave me in , when sandy and i were living in albany and when tom–a longtime ala activist and former ala exec director–was teaching at suny albany, still seems sound: you can drop out of one program and still find your path back to a doctorate, just don’t drop out of two programs. i also have friends who suffered through a semester or two, then decided it wasn’t for them. when i started the program, i remember thinking “i need this ph.d. because i could never get a job at, for example, x without it.” then i watched as someone quite accomplished, with no interest in ever pursuing even a second masters, was hired at x. there is no shame in deciding the cost/benefit analysis isn’t there for you–though i learned, through this experience, that i was in the program for other, more sustainable reasons. selecting your program i am also asked what program to attend. to that my answer is, unless you are very young and can afford to go into, and hopefully out of, significant amounts of debt, pick the program that is most affordable and allows you to continue working as a professional (though if you are at a point in life when you can afford to take a couple years off and get ‘er done, more power to you). that could be a degree offered by your institution or in cooperation with another institution, or otherwise at least partially subsidized. i remember pointing out to an astonished colleague that the ed.d. he earned for free (plus many saturdays of sweat equity) was easily worth $ , , based on the tuition rate at his institution. speaking of which, i get asked about ph.d. versus ed.d. this can be a touchy question. my take: follow the most practical and affordable path available to you that gets you the degree you will be satisfied with and that will be the most useful to you in your career. but whether ed.d. or ph.d., it’s still more letters after your name than you had before you started. where does it hurt? what’s the hardest part of a doctoral program? 
for me, that was a two-way tie between the semester coursework and the comprehensive exams. the semester work was challenging because it couldn’t be set aside or compartmentalized. the five-day intensives were really seven days for me as i had to fly from the left coast to boston. the coursework had deadlines that couldn’t be put aside during inevitable crises. the second semester was the hardest, for so many reasons, not the least of which is that once i had burned off the initial adrenaline, the finish line seemed impossibly far away; meanwhile, the tedium of balancing school and work was settling in, and i was floundering in alien subjects i was struggling to learn long-distance. don’t get me wrong, the coursework was often excellent: managing in a political environment, strategic finance, human resources, and other very practical and interesting topics. but it was a bucket o’ work, and when i called a colleague with a question about chair manufacturers (as one does) and heard she was mired in her second semester, i immediately informed her this too shall pass. ah, the comprehensive exams. i would say i shall remember them always, except they destroyed so much of my frontal lobe, that will not be possible. the comps required memorizing piles of citations–authors and years, with salient points–to regurgitate during two four-hour closed-book tests.  i told myself afterwards that the comps helped me synthesize major concepts in grand theory, which is a dubious claim but at least made me feel better about the ordeal. a number of students in my program helped me with comps. my favorite memory is of colleague gary shaffer, who called me from what sounded like a windswept city corner to offer his advice. i kept hearing this crinkling sound. the crinkling became louder. “always have your cards with you,” gary said. he had brought a sound prop: the bag of index cards he used to constantly drill himself. 
i committed myself to continuous study until done, helped by partnering with my colleague chuck in long-distance comps prep. we didn’t study together, but we compared timelines and kept one another apprised of our progress. you can survive a doctoral program without a study buddy, but whew, is it easier if you have one. comps were an area where i started with old tech–good old paper index cards–and then asked myself, is this how it’s done these days? after research, i moved on to electronic flashcards through quizlet. when i wasn’t flipping through text cards on my phone, ipad, or computer, i was listening to the cards on my phone during my run or while driving around running errands. writing != not writing so about that dissertation. it was a humongous amount of work, but the qualifying paper that preceded it and the coursework and instruction in producing dissertation-quality research gave me the research design skills i needed to pull it off. once i had the data gathered, it was just a lot of writing. this, i can do. not everyone can. writing is two things (well, writing is many things, but we’ll stick with two for now): it is a skill, and it is a discipline. if you do not have those two things, writing will be a third thing: impossible. here is my method. it’s simple. you schedule yourself, you show up, and you write. you do not talk about how you are going to write, unless you are actually going to write. you do not tweet that you are writing (because then you are tweeting, not writing). you do not do other things and feel guilty because you are not writing. (if you do other things, embrace them fully.) i would write write write write write, at the same chair at the same desk (really, a costco folding table) facing the same wall with the same prompts secured to the wall with painter’s tape that on warm days would loosen, requiring me to crawl under my “desk” to retrieve the scattered papers, which on many days was pretty much my only form of exercise. 
then i would write write write write write some more, on weekends, holiday breaks, and the occasional “dissercation day,” as i referred to vacation days set aside for this purpose. dissercation days had the added value that  i was very conscious i was using vacation time to write, so i didn’t procrastinate–though in general i find procrastinating at my desk a poor use of time; if i’m going to procrastinate, let me at least get some fresh air. people will advise you when and how to write. a couple weekends ago i was rereading stephen king’s on writing–now that i can read real books again–in which king recommends writing every day. if that works for you, great. what worked for me was using weekends, holidays, or vacation days; writing early in the day, often starting as early as am; taking a short exercise break or powering through until mid-afternoon; and then stopping no later than pm, many times more like pm if i hadn’t stopped by then. when i tried to write on weekday mornings, work would distract me. not actual tasks, but the thought of work. it would creep into my brain and then i would feel the urgent need to see if the building consultant had replied to my email or if i had the agenda ready for the program and marketing meeting. it also takes me about an hour to get into a writing groove, so by the time the words were flowing it was time to get ready for work. as for evenings, a friend of mine observed that i’m a lark, not an owl. the muse flees me by mid-afternoon. (this also meant i saved the more chore-like tasks of writing for the afternoon.) the key is to find your own groove and stick to it. if your groove isn’t working, maybe it’s not your groove after all. do not take off too much time between writing sessions. 
i had to do that a couple of times for six to eight weeks each time, during life events such as household moves and so on, and it took some revisiting to reacquaint myself with my writing (which was stephen king’s main, and excellent, point in his recommendation to write daily). even when i was writing on a regular basis i often spent at least an hour at the start of the weekend rereading my writing from page to ensure that my most recent writing had a coherent flow of reasoning and narrative and that the writing for that day would be its logical descendant.

another universal piece of advice is to turn off the technology. i see people tweeting “i’m writing my dissertation right now” and i think, no you aren’t. i used a mac app called howler timer to give me writing sieges of , , , or minutes, depending on my degree of focus for that day, during which all interruptions–email, facebook, twitter, etc.–were turned off. twitter and facebook became snack breaks, though i timed those snacks as well. i had favorite pandora stations to keep me company and drown out ambient noise, and many, many cups of herbal tea.

technology will save us all

a few technical notes about technology and doctoral programs. with the exception of the constant allure of social networks and work email, it’s a good thing. i used khan academy and online flash cards to study for the math portion of the gre. as noted earlier, i used quizlet for my comps, in part because this very inexpensive program not only allowed me to create digital flashcards but also read them aloud to me on my iphone while i exercised or ran errands. i conducted interviews using facetime with an inexpensive plug-in, call recorder, that effortlessly produced digital recordings, from which the audio files could be easily split out. i then emailed the audio files to valerie, my transcriptionist, who lives several thousand miles away but always felt as if she were in the next room, swiftly and flawlessly producing transcripts.
i used dedoose, a cloud-based analytical product, to mark up the narratives, and with the justifiable paranoia of any doctoral student, exported the output to multiple secure online locations. i dimly recall life before such technology, but cannot fathom operating in such a world again, or how much longer some of the tasks would have taken.  i spent some solid coin on things like paying a transcriptionist, but when i watch friends struggling to transcribe their own recordings, i have no regrets. there are parts of my dissertation i am exceptionally proud of, but i admit particular pride for my automatically-generated table of contents, just one of many skills i learned through youtube (spoiler alert: the challenge is not marking up the text, it’s changing the styles to match your requirements. word could really use a style set called just times roman please). and of course, there were various library catalogs and databases, and hundreds of e-journals to plumb, activity i accomplished as far away from your typical “library discovery layer” as possible. you can take google scholar away from me when you pry it from my cold, dead hands. i also plowed through a lot of print books, and many times had to do backflips to get the book in that format. journal articles work great in e-format (though i do have a leaning paper pillar of printed journal articles left over from comps review and classes). books, not so much. i needed to have five to fifteen books simultaneously open during a writing session, something ebooks are lame at.  i don’t get romantic about the smell of paper blah blah blah, but when i’m writing, i need my tools in the most immediately accessible format possible, and for me that is digital for articles and paper for books. nothing succeeds like success your cohort can be very important,  and indeed i remember all of them with fondness but one with particular gratitude. nevertheless, you alone will cross the finish line. 
i was unnerved when one member of our cohort dropped out after the first semester, but i shouldn’t have been. doctoral student attrition happens throughout the academy, no less so in libraryland. like the military, or marriage, you really have no idea what it’s like until you’re in it, and it’s not for everyone. it should be noted that the program i graduated from has graduated, or will graduate, nearly all of the students who made it past the first two semesters, which in turn is most of the people who entered the program in its short but glorious life–another question you should investigate while looking at programs. it turned out that for a variety of reasons that made sense, the cohort i was in was the last for this particular doctoral program. that added a certain pressure since each class was the last one to ever be offered, but it also encouraged me to keep my eyes on the prize. i also, very significantly, had a very supportive committee, and most critically, i fully believed they wanted me to succeed. i also had a very supportive spouse, with whom i racked up an infinity of backlogged honey-dos and i-owe-you-for-this promises. regarding success and failure, at the beginning of the program, i asked if anyone had ever failed out of the program. the answer was no, everyone who left self-selected. i later asked the same question regarding comps: had anyone failed comps? the answer was that a student or two had retaken a section of comps in order to pass, but no one had completely failed (and you got one do-over if that happened). these were crucial questions for me. it also helped me to reflect on students who had bigger jobs, or were also raising kids, or otherwise were generally worse off than me in the distraction department. if so-and-so, with the big ivy league job, or so-and-so, with the tiny infant, could do it, couldn’t i? 
(there is a fallacy inherent here that more prestigious schools are harder to administer, but it is a fallacy that comforted me many a day.)

onward

i am asked what i will “do” with my ph.d. in higher education, a doctorate is the expected degree for administrators, and indeed, the news of my successful doctoral defense was met with comments such as “welcome to the club.” so, mission accomplished. also, i have a job i love, but having better marketability is never a bad idea, particularly in a political moment that can best be described as volatile and unpredictable. i can consult. i can teach (yes, i already could teach, but now more fancy-pants). i could make a reservation at a swanky bistro under the name dr. oatmeal and only half of that would be a fabrication. the world is my oyster!

frankly, i did not enter the program with the idea that i would gain skills and develop the ability to conduct doctoral-quality research (i was really shooting for the fancy six-sided tam), but that happened and i am pondering what to do with this expertise. i already have the joy of being pedantic, if only quietly to myself. don’t tell me you are writing a “case study” unless it has the elements of a case study, not to mention the components of any true research design. otherwise it’s just anecdata. and of course, when it comes to owning the area of lgbtq leadership in higher education, i am totally m.c. hammer: u can’t touch this!

i would not mind being part of the solution for addressing the dubious quality of so much lis “research.” libraryland needs more programs such as the institute for research design in librarianship to address the sorry fact that basic knowledge of the fundamentals of producing industry-appropriate research is in most cases not required for a masters degree in library science, which, at least for academic librarianship, given the student learning objectives we claim to support, is absurd.
i also want to write a book, probably continuing the work i have been doing with documenting the working experiences of lgbtq librarians. but first i need to sort and purge my home office, revisit places such as hogwarts and narnia, and catch up on some of those honey-dos and i-owe-you-for-this promises. and buy a six-sided tam. bookmark to: filed in uncategorized | | comments ( ) a scholar’s pool of tears, part : the pre in preprint means not done yet tuesday, january , note, for two more days, january and , you (as in all of you) have free access to my article, to be real: antecedents and consequences of sexual identity disclosure by academic library directors. then it drops behind a paywall and sits there for a year. when i wrote part of this blog post in late september, i had keen ambitions of concluding this two-part series by discussing “the intricacies of navigating the liminal world of oa that is not born oa; the oa advocacy happening in my world; and the implications of the publishing environment scholars now work in.” since then, the world, and my priorities have changed. my goals are to prevent nuclear winter and lead our library to its first significant building upgrades since it opened close to years ago. but at some point i said on twitter, in response to a conversation about posting preprints, that i would explain why i won’t post a preprint of to be real. and the answer is very simple: because what qualifies as a preprint for elsevier is a draft of the final product that presents my writing before i incorporated significant stylistic guidance from the second reviewer, and that’s not a version of the article i want people to read. 
in the pre-elsevier draft, as noted before, my research is present, but it is overshadowed by clumsy style decisions that the reviewer presented far more politely than the following summary suggests: quotations that were too brief; rushing into the next thought without adequately closing out the previous thought; failure to loop back to link the literature review to the discussion; overlooking a chance to address the underlying meaning of this research; and a boggy conclusion. a crucial piece of advice from the reviewer was to use pseudonyms or labels to make the participants more real. all of this advice led to a final product, the one i have chosen to show the world. that’s really all there is to it. it would be better for the world if my article were in an open access publication, but regardless of where it is published, i as the author choose to share what i know is my best work, not my work in progress.

the oa world–all sides of it, including those arguing against oa–has some loud, confident voices with plenty of “shoulds,” such as the guy (and so many loud oa voices are male) who on a discussion list excoriated an author who was selling self-published books on amazon by saying “people who value open access should praise those scholars who do and scorn those scholars who don’t.” there’s an encouraging approach! then there are the loud voices announcing the death of oa when a journal’s submissions drop, followed by the people who declare all repositories are potemkin villages, and let’s not forget the fellow who curates a directory of predatory oa journals that is routinely cited as an example of what’s wrong with scholarly publishing.

i keep saying, the scholarly-industrial complex is broken. i’m beyond proud that the council of library deans for the california state university–my peers–voted to encourage and advocate for open access publishing in the csu system.
i’m also excited that my library has its first scholarly communications librarian who is going to bat on open access and open educational resources and all other things open–a position that in consultation with the library faculty i prioritized as our first hire in a series of retirement/moving-on faculty hires. but none of that translates to sharing work i consider unfinished. we need to fix things in scholarly publishing and there is no easy, or single, path. and there are many other things happening in the world right now. i respect every author’s decision about what they will share with the world and when and how they will share it. as for my decision–you have it here. bookmark to: filed in uncategorized | | comments off on a scholar’s pool of tears, part : the pre in preprint means not done yet
© K.G. Schneider
The Justice Programme, a project of Open Knowledge Foundation, working to ensure public impact algorithms do no harm. Contact us at justice@okfn.org. Calling all legal practitioners: you are invited to an interactive workshop on immigration and automation.

Community

We believe that the best way to achieve change is for people to work together. That is why building community is at the heart of our mission. As the number of lawyers, policy makers and activists who work with public impact algorithms grows, we need to share our experiences. It is only by doing this that we can understand what mechanisms work. We must work together to build the ecosystem of regulation, practices and actors that we need to ensure that public impact algorithms do no harm. Join the Justice Programme community meetups. Do you want to learn more about PIAs, what they are, how to spot them, and how they may affect your clients? Join us for a series of free, monthly community meetups to talk about public impact algorithms. When? Lunchtime, every second Thursday of the month. How? Register your interest here and sign up for our newsletter here. The Justice Programme is a project of Open Knowledge Foundation.

jodischneider.com/blog: reading, technology, stray thoughts

Paid graduate hourly research position at UIUC for spring

Jodi Schneider's Information Quality Lab (http://infoqualitylab.org) seeks a graduate hourly student for a research project on bias in citation networks. Biased citation benefits authors in the short term by bolstering grants and papers, making them more easily accepted.
However, it can have severe negative consequences for scientific inquiry. Our goal is to find quantitative measures of […]

Avoiding long-haul air travel during the COVID- pandemic

I would not recommend long-haul air travel at this time. An epidemiological study of a . hour flight from the Middle East to Ireland concluded that groups ( people), traveling from continents in four groups, who used separate airport lounges, were likely infected in flight. The flight had % occupancy ( passengers/ seats; […]

Paid undergraduate research position at UIUC for fall & spring

University of Illinois undergraduates are encouraged to apply for a position in my lab. I particularly welcome applications from students in the new iSchool BS/IS degree or in the university-wide Informatics minor. While I only have paid position open, I also supervise unpaid independent study projects. Dr. Jodi Schneider and the Information Quality Lab (https://infoqualitylab.org) seek […]

#ShutDownSTEM #Strike BlackLives #ShutDownAcademia

I greatly appreciated receiving messages from senior people about their participation in the June th #ShutDownSTEM #Strike BlackLives #ShutDownAcademia. In that spirit, I am sharing my email bounce message for tomorrow, and the message I sent to my research lab. Email bounce: I am not available by email today: this June th is a day of action […]

QOTD: storytelling in protest and politics

I recently read Francesca Polletta's book It Was Like a Fever: Storytelling in Protest and Politics ( , University of Chicago Press). I recommend it! It will appeal to researchers interested in topics such as narrative, strategic communication, (narrative) argumentation, or epistemology (here, of narrative). Parts may also interest activists. The book's case studies are drawn from the […]

Knowledge graphs: an aggregation of definitions

I am not aware of a consensus definition of knowledge graph.
I've been discussing this for a while with Liliana Giusti Serra, and the topic came up again with my fellow organizers of the knowledge graph session at US TS as we prepare for a panel. I've proposed the following main features: RDF-compatible, has a defined schema (usually an […]

QOTD: doing more requires thinking less

"By the aid of symbolism, we can make transitions in reasoning almost mechanically by the eye which would otherwise call into play the higher faculties of the brain. … Civilization advances by extending the number of important operations that we can perform without thinking about them. Operations of thought are like cavalry charges in a battle" […]

QOTD: Sally Jackson on how disagreement makes arguments more explicit

Sally Jackson explicates the notion of the "disagreement space" in a new Topoi article: "A position that remains in doubt remains in need of defense" … "The most important theoretical consequence of seeing argumentation as a system for management of disagreement is a reversal of perspective on what arguments accomplish. Are arguments the means by […]

QOTD: working out scientific insights on paper, Lavoisier case study

"Language does do much of our thinking for us, even in the sciences, and rather than being an unfortunate contamination, its influence has been productive historically, helping individual thinkers generate concepts and theories that can then be put to the test. The case made here for the constitutive power of figures [of speech] per se […]

David Liebovitz: achieving care transformation by infusing electronic health records with wisdom

Today I am at the Health Data Analytics Summit. The title of the keynote talk is Achieving Care Transformation by Infusing Electronic Health Records with Wisdom. It's a delight to hear from a medical informaticist: David M. Liebovitz (publications in Google Scholar), MD, FACP, Chief Medical Information Officer, the University of Chicago.
He graduated from […]

Access: Not Just Wires

By Karen Coyle, kcoyle.net. Date posted: March. Copyright Karen Coyle. University of California, Library Automation, and Computer Professionals for Social Responsibility/Berkeley Chapter. This is the written version of a talk given at the CPSR annual meeting in San Diego, CA, in October.

I have to admit that I'm really sick and tired of the information highway. I feel like I've already heard so much about it that it must have come and gone already, yet there is no sign of it. This is truly a piece of federal vaporware. I am a librarian, and it's especially strange to have dedicated much of your life to the careful tending of our current information infrastructure, our libraries, only to wake up one morning to find that the entire economy of the nation depends on making information commercially viable. There's an element of the Twilight Zone about this, because libraries are probably our most underfunded and underappreciated of institutions, with the possible exception of day care centers. It's clear to me that the information highway isn't much about information. It's about trying to find a new basis for our economy. I'm pretty sure I'm not going to like the way information is treated in that economy. We know what kind of information sells, and what doesn't. So I see our future as being a mix of highly expensive economic reports and cheap online versions of the National Enquirer. Not a pretty picture. This is a panel on "access." But I am not going to talk about access from the usual point of view of physical or electronic access to the FutureNet. Instead I am going to talk about intellectual access to materials and the quality of our information infrastructure, with the emphasis on "information."
Information is a social good, and part of our "social responsibility" is that we must take this resource seriously. From the early days of our being a species with consciousness of its own history, some part of society has had the role of preserving this history: priests, learned scholars, archivists. Information was valued; valued enough to be denied to some members of society; to be part of the ritual of belonging to an elite. So I find it particularly puzzling that as we move into this new "information age," our efforts are focused on the machinery of the information system, while the electronic information itself is being treated like just so much more flotsam and jetsam; this is not a democratization of information, but a devaluation of information. On the Internet, many electronic information sources that we are declaring worthy of "universal access" are administered by part-time volunteers: graduate students who do eventually graduate, or network hobbyists. Resources come and go without notice, or languish after an initial effort and rapidly become out of date. Few network information resources have specific and reliable funding for the future. As a telecommunications system the Internet is both modern and mature; as an information system the Internet is an amateur operation. Commercial information resources, of course, are only interested in information that provides revenue. This immediately eliminates the entire cultural heritage of poetry, playwriting, and theological thought, among others. If we value our intellectual heritage, and if we truly believe that access to information (and that broader concept, knowledge) is a valid social goal, we have to take our information resources seriously. Now I know that libraries aren't perfect institutions. They tend to be somewhat slow-moving and conservative in their embrace of new technologies, and some seem more bent on hoarding than disseminating information.
But what we call "modern librarianship" has over a century of experience in being the tender of this society's information resources. And in the process of developing and managing that resource, the library profession has understood its responsibilities in both a social and historical context. Drawing on that experience, I am going to give you a short lesson on social responsibilities in an information society. Here are some of our social responsibilities in relation to information: collection, selection, preservation, organization, dissemination.

Collection

It is not enough to passively gather in whatever information comes your way, like a spider waiting on its web. Information collection is an activity, and an intelligent activity. It is important to collect and collocate information units that support, complement, and even contradict each other. A collection has a purpose and a context; it says something about the information and it says something about the gatherer of that information. It is not random, because information itself is not random, and humans do not produce information in a random fashion. Too many Internet sites today are a terrible hodge-podge, with little intellectual purpose behind their holdings. It isn't surprising that visitors to these sites have a hard time seeing the value of the information contained therein. Commercial systems, on the other hand, have no incentive to provide an intellectual balance that might "confuse" their users. In all of the many papers that have come out of discussion of the national information infrastructure, it is interesting that there is no mention of collecting information: there is no Library of Congress or national archive of the electronic information world. So in the whole elaborate scheme, no one is responsible for the collection of information.

Selection

Not all information is equal. This doesn't mean that some of it should be thrown away, though inevitably there is some waste in the information world.
And this is not in support of censorship. But there's a difference between a piece on nuclear physics by a Nobel laureate and a physics diorama entered into a science fair by an -year-old. And there's a difference between alpha release . and beta . of a software package. If we can't differentiate between these, our intellectual future looks grim indeed. Certain sources become known for their general reliability, their timeliness, etc. We have to make these judgments because the sheer quantity of information is too large for us to spend our time with lesser works when we haven't yet encountered the greats. This kind of selection needs to be done with an understanding of a discipline and an understanding of the users of a body of knowledge. The process of selection overlaps with our concept of education, where members of our society are directed to a particular body of knowledge that we hold to be key to our understanding of the world.

Preservation

How much of what is on the net today will exist in any form ten years from now? And can we put any measure to what we lose if we do not preserve things systematically? If we can't preserve it all, at least in one safely archived copy, are we going to make decisions about preservation, or will we leave it up to a kind of information Darwinism? As we know, the true value of some information may not be immediately known, and some ideas gain in value over time. The commercial world, of course, will preserve only that which sells best.

Organization

This is an area where the current net has some of its most visible problems, as we have all struggled through myriad Gopher menus, FTP sites, and web pages looking for something that we know is there but cannot find. There is no ideal organization of information, but no organization at all is not ideal either. The organization that exists today in terms of finding tools is an attempt to impose order over an unorganized body.
The human mind in its information-seeking behavior is a much more complex question than can be answered with a keyword search in an unorganized information universe. When we were limited to card catalogs and the placement of physical items on shelves, we essentially had to choose only one way to organize our information. Computer systems should allow us to create a multiplicity of organization schemes for the same information, from traditional classification, which relies on hierarchies and categories, to faceted schemes, relevance ranking and feedback, etc. Unfortunately, documents do not define themselves. The idea of doing WAIS-type keyword searching on the vast store of textual documents on the Internet is a folly. Years of study of term frequency, co-occurrence, and other statistical techniques have proven that keyword searching is a passable solution for some disciplines with highly specific vocabularies and nearly useless in all others. And, of course, the real trick is to match the vocabulary of the seeker of information with that of the information resource. Keyword searching not only doesn't take into account different terms for the same concepts, it doesn't take into account materials in other languages or different user levels (i.e., searching by children will probably need to be different from searching done by adults, and libraries actually use different subject access schemes for children's materials). And non-textual items (software, graphics, sound) do not respond at all to keyword searching. There is no magical, effortless way to create an organization for information; today the best tools are a clearly defined classification scheme and a human indexer. At least a classification scheme or indexing scheme gives the searcher a chance to develop a rational strategy for searching. The importance of organizational tools cannot be overstated. What it all comes down to is that if we can't find the information we need, it doesn't matter if it exists or not.
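The vocabulary-matching problem above can be made concrete with a toy sketch (the documents and thesaurus mappings here are hypothetical, not from the talk): a bare keyword match fails when the searcher and the author use different terms for the same concept, while even a tiny controlled vocabulary lets both terms resolve to the same subject.

```python
# Toy illustration: exact keyword matching vs. a small controlled vocabulary.
documents = {
    "doc1": "treatment of myocardial infarction in older adults",
    "doc2": "childrens picture book about the heart",
}

def keyword_search(query, docs):
    """Return ids of documents containing every query word verbatim."""
    words = query.lower().split()
    return [doc_id for doc_id, text in docs.items()
            if all(w in text.lower().split() for w in words)]

# "heart attack" never matches doc1, though doc1 is exactly on topic.
print(keyword_search("heart attack", documents))  # -> []

# A controlled vocabulary maps variant terms to one preferred concept.
thesaurus = {
    "heart attack": "myocardial infarction",
    "myocardial infarction": "myocardial infarction",
}

def vocabulary_search(query, docs):
    """Resolve the query to its preferred term before matching."""
    concept = thesaurus.get(query.lower(), query.lower())
    return [doc_id for doc_id, text in docs.items() if concept in text.lower()]

print(vocabulary_search("heart attack", documents))  # -> ['doc1']
```

This is, in miniature, the work a human indexer and a classification scheme do: deciding in advance which variant terms belong to which concept, so the searcher's vocabulary need not match the author's.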
If we don't find it, if we don't encounter it, then it isn't information. There are undoubtedly millions of bytes of files on the net that for all practical purposes are non-existent. My biggest fear in relation to the information highway is that intellectual organization and access will be provided by the commercial world as a value-added service. So the materials will exist, even at an affordable price, but it will cost real money to make use of the tools that will make it possible for you to find the information you need. If we don't provide these finding tools as part of the public resource, then we aren't providing the information to the public.

Dissemination

There's a lot of talk about the "electronic library." Actually, there's a lot written about the electronic library, and probably much of it ends up on paper. Most of us agree that for anything longer than a one-screen email message, we'd much rather read documents off a paper page than off a screen. While we can hope that screen technologies will eventually produce something that truly substitutes for paper, this isn't true today. So what happens with all of those electronic works that we're so eager to store and make available? Do we reverse the industrial revolution and return the printing of documents to a cottage industry taking place in homes, offices and libraries? Many people talk about their concerns for the "last mile," for the delivery of information into every home. I'm concerned about the last yard. We can easily move information from one computer to another, but how do we get it from the computer to the human being in the proper format? Not all information is suited to electronic use. Think of the auto repair manuals that you drag under the car and drip oil on. Think of children's books, with their drool-proof pages. Even the Library of Congress has announced that it is undertaking a huge project to digitize million items from its collection. Then what?
How do they think we are going to make use of those materials? There are times when I can only conclude that we have been gripped by some strange madness. I have fantasies of kidnapping the entire membership of the administration's IITF committees and tying them down in front of " screens with really bad flicker and forcing them to read the whole of Project Gutenberg's electronic copy of Moby Dick. Maybe then we'd get some concern about the last yard.

In conclusion: no amount of wiring will give us universal access; just adding more files and computers to Gopherspace, webspace and FTPspace will not give us better access; and commercial information systems can be expected to be... commercial.

Thanks to Meng Weng Wong, whose HTML formatting I mostly stole.

Copyright Karen Coyle. This document may be circulated freely on the net with this statement included. For any commercial use, or publication (including electronic journals), you must obtain the permission of the author: kcoyle@kcoyle.net

Thesaurus Islamicus Foundation

The Thesaurus Islamicus Foundation is a non-profit academic organisation that was founded to support and advance the protection, preservation and study of the Islamic intellectual and artistic heritage. It specialises in scholarly publishing, fine book design, and the care and management of manuscript collections. The foundation has offices in Cairo, Egypt; the University of Cambridge, England; and Stuttgart, Germany. The principal projects of the foundation are as follows. Through the Sunna Project, the foundation seeks to assemble the entirety of hadith literature, that is, the literature comprising narrations of the sayings and actions of the Prophet Muhammad, and to prepare and publish definitive critical editions of every hadith collection.
In the foundation's Cairo offices, nearly one hundred scholars of hadith are engaged in the study and comparison of manuscript and printed copies of hadith collections from libraries and museums around the world. To date, the Sunna Project has produced the nineteen-volume Hadith Encyclopaedia, which includes the six canonical hadith collections as well as the Muwatta of Imam Malik ibn Anas. The printed edition of the Hadith Encyclopaedia is supplemented by a CD-ROM and the International Hadith Studies Association Network website (IHSAN), www.ihsanetwork.org. These allow for the full text of the printed edition to be searched according to various criteria. The foundation has recently published a new fourteen-volume edition of the Musnad of Imam Ahmad ibn Hanbal. This edition contains over hadith that have never before appeared in any printed edition of the Musnad, but which are present in the earliest and most reliable manuscripts. The Sunna Project is affiliated with the Prince Alwaleed bin Talal Centre of Islamic Studies, University of Cambridge.

The Islamic Manuscript Association is an international non-profit organisation dedicated to protecting Islamic manuscript collections and supporting those who work with them. It was formed in response to the urgent need to address the poor preservation and inaccessibility of many Islamic manuscript collections around the world. The association articulates standards and guidelines for best practice in cataloguing, conservation, digitisation and academic publishing so that Islamic manuscript collections may be made more accessible and preserved for posterity. It promotes excellence in scholarship on Islamic manuscripts, particularly Islamic codicology and disciplines related to the care and management of Islamic manuscript collections, and provides a platform for sharing this scholarship at its annual conference at the University of Cambridge.
The association awards grants to support the care of Islamic manuscript collections and advance scholarship on Islamic manuscripts. It also organises short courses in cataloguing, conservation, digitisation and academic publishing, as well as an annual workshop on Islamic codicology in cooperation with Cambridge University Library. The Islamic Manuscript Association is affiliated with the Prince Alwaleed bin Talal Centre of Islamic Studies, University of Cambridge.

The foundation has signed an agreement with the National Library of Egypt (Dar al-Kutub) and the Egyptian Ministry of Culture to assist with the preservation, conservation and curation of the national library's manuscript collection and to work with the national library to establish it as a regional leader in collection care and management. The national library possesses around , manuscript titles. It is the largest manuscript collection in the Arab world and one of the most important collections of Islamic manuscripts worldwide. The goals of the project include re-designing and re-equipping the national library's two existing preservation and conservation laboratories, including imaging facilities; designing and equipping a new conservation laboratory; redesigning and reequipping the manuscript storage and exhibition areas; the continued professional development of the national library's preservation, conservation and exhibition staff; cataloguing selected areas of the manuscript collection; and preparing publications and promotional materials for and about the national library.

Editio Electrum is the foundation's design studio. The studio carries out the foundation's efforts to revive the traditional arts of the Islamic book. The studio combines traditional workshop techniques with the latest advances in digital design and printing technology to develop the visual language of medieval illumination in a new and exciting medium.
Fine art prints of Qur'anic illumination produced by Editio Electrum have been exhibited at the Museum of Applied Arts in Frankfurt, Germany; Beit Al Qur'an in Manama, Bahrain; the Dubai Chamber of Commerce in Dubai, United Arab Emirates; the King Faisal Foundation in Riyadh, Saudi Arabia; the National Library and Archives of Egypt in Cairo, Egypt; and the American Institute of Graphic Arts in New York, USA. In , the foundation's Shama'il al-Nabi, designed by Editio Electrum, was the first Arabic book to win a place in the American Institute of Graphic Arts' / annual book design competition.

The Islamic Art Network provides a number of important resources for Islamic art and architectural historians. Its online digital photo archive, which was prepared with the support and permission of Egypt's Supreme Council of Antiquities, contains over , images of the Islamic architectural monuments of Cairo. In cooperation with the Rare Books and Special Collections Library of the American University in Cairo and Dr Alaa El-Habashi, the network has digitised and made available online the French- and Arabic-language issues of the Bulletin of the Comité de Conservation des Monuments de l'Art Arabe, the standard resource for the study of the Islamic architecture of Cairo. The bulletin describes the restoration work carried out on the Islamic monuments of Cairo from - . It includes full descriptions of the monuments and their histories as well as photographs, drawings, and plans. Some of the monuments recorded in the bulletin have disappeared and others were disfigured by later restoration work. For this reason, it is invaluable. Only the Islamic Art Network and the University of Pennsylvania have complete collections of the bulletin. Additionally, the network has digitised important out-of-print books and rare articles on Islamic art and architecture and made them available online.
Among these publications are Max Herz Pasha's monograph entitled Mosquee du Sultan au Caire and the Memoires of the Institut d'Egypte.

OneTradition is the imprint dedicated to the foundation's publications of spiritual and metaphysical writings from all major religious traditions. The OneTradition project rests on the belief that the best way to understand and learn from the great religious traditions is through their most profound exponents. Current projects in production include the translation of the Tao Teh Ching into Arabic and a complete critical edition of Ibn 'Arabi's al-Futuhat al-Makkiya. Works that the foundation hopes to publish in the future under its OneTradition imprint include the serial Elucidations of D.A. Freher and a Korean translation of the Hikam of Ibn 'Ata'illah al-Iskandari.

Qirab™ is dedicated to supporting the preservation of manuscripts within the Islamic cultural sphere through knowledge sharing and open source technology. We publish Arabic translations of technical texts and standards as well as produce innovative hardware and software designs.

© Thesaurus Islamicus Foundation. All rights reserved.

Internet Alchemy, the blog of Ian Davis

Mon, Oct: Serverless: Why Microfunctions > Microservices

This post follows on from a post I wrote a couple of years back called Why Service Architectures Should Focus on Workflows. In that post I attempted to describe the fragility of microservice systems that were simply translating object-oriented patterns to the new paradigm. These systems were migrating domain models and their interactions from in-memory objects to separate networked processes. They were replacing in-process function calls with cross-network RPC calls, adding latency and infrastructure complexity. The goal was scalability and flexibility but, I argued, the entity modelling approach introduced new failure modes.
I suggested a solution: instead of carving up the domain by entity, focus on the workflows. If I was writing that post today I would say "focus on the functions," because the future is serverless functions, not microservices. Or, more brashly: microfunctions > microservices. The industry has moved apace in the last years, with a focus on solving the infrastructure challenges caused by running hundreds of intercommunicating microservices. Containers have matured and become the de facto standard for the unit of microservice deployment, with management platforms such as Kubernetes to orchestrate them and frameworks like gRPC for robust interservice communication. The focus still tends to be on interacting entities, though: when placing an order, the "order service" talks to the "customer service," which reserves items by talking to the "stock service" and the "payment service," which talks to the "payment gateway" after first checking with the "fraud service." When the order needs to be shipped, the "shipping service" asks the "order service" for orders that need to be fulfilled, tells the "stock service" to remove the reservation, then goes to the "customer service" to locate the customer, etc. All of these services are likely to be persisting state in various backend databases. Microservices are organized as vertical slices through the domain. The same problems still exist: if the customer service is overwhelmed by the shipping service, then the order service can't take new orders. The container manager will, of course, scale up the number of customer service instances and register them with the appropriate load balancers, discovery servers, monitoring and logging. However, it cannot easily cope with a critical failure in this service, perhaps caused by a repeated bad request that panics the service and prevents multiple dependent services from operating properly.
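The entity-sliced style described above can be sketched in a few lines (all service names and the per-hop cost are hypothetical, chosen only to illustrate the shape of the problem): placing one order fans out into one networked call per entity service, so every hop pays the network tax, and a single failed dependency fails the whole workflow.

```python
# Toy model of an entity-sliced microservice workflow.
NETWORK_HOP_MS = 20  # assumed cost of one cross-network RPC

class ServiceDown(Exception):
    pass

class Service:
    """A stand-in for a remote entity service reachable only via RPC."""
    def __init__(self, name, up=True):
        self.name, self.up = name, up

    def call(self, method):
        if not self.up:
            raise ServiceDown(self.name)
        return NETWORK_HOP_MS  # every call pays the network tax

def place_order(customer_svc, stock_svc, payment_svc, fraud_svc):
    """Orchestrate one order across four services, totalling hop cost."""
    latency = 0
    latency += customer_svc.call("get_customer")
    latency += stock_svc.call("reserve_items")
    latency += fraud_svc.call("check")
    latency += payment_svc.call("charge")
    return latency

services = [Service(n) for n in ("customer", "stock", "payment", "fraud")]
print(place_order(*services))  # 4 hops -> 80 ms of pure network overhead

# A panic in the customer service stops every dependent workflow:
services[0].up = False
try:
    place_order(*services)
except ServiceDown as e:
    print("order failed, dependency down:", e)
```

The orchestration code is simple, but its availability is the product of the availability of every service it touches, which is the fragility the post is pointing at.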
failures and slowdowns in response times are handled within client services through backoff strategies, circuit breakers and retries. the system as a whole increases in complexity but remains fragile. by contrast, in a serverless architecture, the emphasis is on the functions of the system. for this reason serverless is sometimes called faas – functions as a service. systems are decomposed into functions that encapsulate a single task in a single process. instead of each request involving the orchestration of multiple services, the request uses an instance of the appropriate function. rather than the domain model being exploded into separate networked processes, its entities are provided in code libraries compiled into the function at build time. calls to entity methods are in-process, so they don’t pay the network latency or reliability taxes. in this paradigm the “place order” function simply calls methods on customer, stock and payment objects, which may then interact with the various backend databases directly. instead of a dozen networked rpc calls, the function relies on a few direct database calls. additionally, if a function is particularly hot it can be scaled directly without affecting the operation of other functions and, crucially, it can fail completely without taking down other functions. (modulo the reliability of databases, which affects both styles of architecture identically.) microfunctions are horizontal slices through the domain. the advantages i wrote about last time still hold up when translated to serverless terminology: deploying or retiring a function becomes as simple as switching it on or off, which leads to greater freedom to experiment. scaling a function is limited to scaling a single type of process horizontally, and the costs of doing this can be cleanly evaluated. the system as a whole becomes much more robust. when a function encounters problems it is limited to a single workflow, such as issuing invoices. other functions can continue to operate independently.
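the “place order” microfunction can be sketched the same way. everything here is illustrative (the class and function names are invented, not from any real faas sdk): the entities are plain in-process objects compiled into the function, and only the stubbed database write would cross the network:

```python
# sketch of "place order" as a single microfunction: entity methods are
# plain in-process calls; db_calls stands in for the few direct database
# round trips. all names are illustrative assumptions.

class Customer:
    def __init__(self, customer_id):
        self.customer_id = customer_id

class Stock:
    def __init__(self):
        self.reserved = []
    def reserve(self, item):
        self.reserved.append(item)  # in-process method call, no rpc

class Payment:
    def charge(self, customer, amount):
        return {"customer": customer.customer_id, "amount": amount, "ok": True}

def place_order(customer_id, items, amount, db_calls):
    """single-task function: in-process entities, few db round trips."""
    customer = Customer(customer_id)
    stock = Stock()
    for item in items:
        stock.reserve(item)
    receipt = Payment().charge(customer, amount)
    db_calls.append("INSERT order")  # stand-in for a direct database write
    return receipt

db_calls = []
receipt = place_order("c42", ["book", "lamp"], 31.50, db_calls)
print(receipt["ok"], len(db_calls))
```

the contrast with the microservice version is the point: one process, one deployable unit, and a failure here is confined to the ordering workflow.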
latency, bandwidth use and reliability are all improved because there are fewer network calls. the function still relies on the database and other support systems such as lock servers, but most of the data flow is controlled in-process. the unit of testing and deployment is a single function, which reduces the complexity and cost of maintenance. one major advantage that i missed is the potential for extreme cost savings through scale, particularly the scale attainable by running on public shared infrastructure. since all the variability of microservice deployment configurations is abstracted away into a simple request/response interface, the microfunctions can be run as isolated shared-nothing processes, billed only for the resources they use in their short lifetime. anyone who has costed redundant microservices simply for basic resilience will appreciate the potential here. although there are a number of cloud providers in this space (aws lambda, google cloud functions, azure functions), serverless is still an emerging paradigm with the problems that come with immaturity. adrian colyer recently summarized an excellent paper and presentation dealing with the challenges of building serverless systems which highlights many of these, including the lack of service level agreements and loose performance guarantees. it seems almost certain, though, that these will improve as the space matures and overtakes the microservice paradigm. other posts tagged as architecture, distributed-systems, technology, serverless, faas. earlier posts: gorecipes: fin · another blog refresh · why service architectures should focus on workflows · help me crowdfund my game amberfell. open source exile, an open sourcer in exile. #christchurchmosqueshootings how would we know when it was time to move from tei/xml to tei/json? whither tei?
the next thirty years thoughts on the ndfnz wikipedia panel feedback on nlnz ‘digitalnz concepts api‘ bibframe a wikipedia strategy for the royal society of new zealand prep notes for ndf demonstration metadata vocabularies lodlam nz cares about unexpected advice goodbye 'social-media' world recreational authority control thoughts on "letter about the tei" from martin mueller unit testing framework for xsl transformations? is there a place for readers' collectives in the bright new world of ebooks? howto: deep linking into the nzetc site epubs and quality what librarything metadata can the nzetc reasonable stuff inside it's cc'd epubs? interlinking of collections: the quest continues ebook readers need openurl resolvers thoughts on koha data and data modelling and underlying assumptions learning xslt . part ; finding names legal māori archive why card-based records aren't good enough robert benchley it happened in history! (go to it happened in history archives) robert benchley one of america's greatest humorists just happened to have been an accomplished actor, drama critic, and author, as well.  born on september , , in worcester, massachusetts, robert benchley seemed destined for success.  in school, he built a reputation for his creative interpretation of essay assignments.  when asked to write an essay about something practical, he penned a theme entitled "how to embalm a corpse."  for an assignment concerning the dispute between the united states and canada over newfoundland fishing rights, he wrote an essay from the point of view of the fish. upon leaving harvard in , benchley joined the staff of the new york tribune.  he wasn't a very good reporter, however, and his editors soon switched him to feature writing, which served his talents better.  he wrote stories and humorous essays such as "did prehistoric man walk on his head?"  
after serving in world war i, he returned to new york and accepted a position as managing editor at vanity fair magazine, where he met fellow writer and wit dorothy parker, who soon became his closest  friend.  the two developed a reputation as office pranksters.  once, after management asked its staff not to discuss their salaries, benchley and parker had them printed on placards that they wore around their necks. the two literary cut-ups formed the nucleus of a group of writers, actors, and artists who met for lunch at new york's algonquin hotel to share sparkling conversation, juicy gossip, and scathing insults.  together with alexander woollcott, george s. kaufman, marc connelly, harpo marx, and others, they became known as the algonquin round table.  benchley thought so highly of parker that he resigned from vanity fair when she was fired.  he took a position as drama critic for life magazine and the new yorker.  but, since he knew absolutely nothing about the theater, he quickly turned his reviews into humorous essays. he once wrote a review of the new york city telephone directory.  he said it had no plot.  he was also a notorious prevaricator.  when asked to provide a brief biography of himself for an encyclopedia, he wrote that he was born on the isle of wight, wrote a tale of two cities, married a princess in portugal, and was buried in westminster abbey. through his work in life magazine, as well as in books such as pluck and luck ( ) and early worm ( ), benchley emerged as one of america's most popular and well-regarded writers.  he had an uncanny knack for dissecting the comic futility of society during the roaring twenties.  his subtle, whimsical brand of humor played well against the struggles of the common man.  often his treatises spun off on whimsical, nonsensical tangents.  one of benchley's friends, donald ogden stewart (the philadelphia story), described his sense of humor as "crazy."  
nevertheless, it found a receptive audience among his pre-depression era readers. benchley began working in movies in , with a reprise of the treasurer's report in one of the earliest short films to feature sound. in , he began writing for feature films, marking his debut with the sport parade, in which he also co-starred as a broadcaster. he continued to play comedic supporting roles in the years to come, typically cast as a bumbling yet lovable sophisticate, a cocktail glass or cigarette-and-holder clenched firmly in hand. in , he appeared in the alfred hitchcock thriller foreign correspondent, a film to which he also contributed dialogue. his work was collected in many books, including from bed to worse ( ), why does nobody collect me? ( ), and my ten years in a quandary, and how it grew ( ). robert benchley, who once said, "it took me fifteen years to discover that i had no talent for writing, but i couldn't give it up because by that time i was too famous," died on november , , at the peak of his fame. benchley's son, nathaniel, was a well-regarded novelist and children's book author, while his grandson, peter, later became famous as author of the book that inspired the film jaws.
sorry, pal… – the thrilling detective web site. come on down these mean streets… sorry, pal… kevinburtonsmith. most likely it’s not your fault. you haven’t broken the internet. chances are, the page you’re looking for doesn’t exist. at least not yet. or at least not here. maybe i changed the title. maybe i simply zigged when i should have zagged. maybe the page isn’t quite ready for prime time. but most likely, you’re clicking on a link to the old site that simply needs to be updated and hasn’t migrated here yet. it’s been a long process, transferring (and updating) close to pages to our new location. even now there are still some strays waiting to be rounded up. but rest assured, we’re working on it. and hopefully, the search engines and other people’s sites will begin to catch up as well. in the meantime, if you still haven’t found what you’re looking for, don’t be a bono. you could try using our links at the top of every page.
you tried and you tried, and still can’t get no satisfaction? drop me a line or fill out the comment section below, to grease the wheels. let me know what page you were on, and which link you were following, and i’ll move it to the top of my to-do list. usually takes a few hours. or a few days. please be patient. and hold on to your ticket stub, just in case. same goes for errors, typos and other mistakes. let us know, and we’ll take care of it. eventually… bribes also cheerfully taken. the photo is from the film detektiv down. tagged error. thoughts on “sorry, pal…”: claire taylor says: followed a link to film of carvalho – tatuaje, got the error message. would be great to get more info about this. kevinburtonsmith says: ah, one of the strays from the other site… here ya go. https://thrillingdetective.wpcomstaging.com/ / / /pepe-carvalho/ michael d jeter says: i saw a post that said you had a good overview of a man called hawk. i really like the series, and would be interested. tana cochran says: hi. i was trying to get to your page on the falcon with this link: http://www.thrillingdetective.com/falcon.html via a google search on the falcon. it took me to your “bum steer” page: https://thrillingdetective.wpcomstaging.com/ / / /sorry-pal/ kevinburtonsmith says: it’s now at https://thrillingdetective.com/ / / /the-falcon-aka-michael-waring-gay-stanhope-falcon-gay-lawrence-tom-lawrence/, but the entire alphabetical index of eyes can be found at https://thrillingdetective.com/ / / /private-eyes-alphabetical/
cullenbgallagher says: love ed and am! way overdue to re-read fredric brown. hermajesty says: hmmmmm…. looking for philip marlowe….. tailed him here but he must have seen me and slipped out the back. i’ll be at the bar if you find him. pedanther says: philip marlowe is currently residing at https://thrillingdetective.com/ / / /philip-marlowe/ tom tattershall says: link to the carrie cashin page isn’t working. get “bum steer” message. kevinburtonsmith says: i’m on it! https://thrillingdetective.com/ / / /carrie-cashin/ grokenstein says: bookmarked “keri krane” some time ago; now it’s lost in the shuffle. tried the menus–no dice. can we get a fixed link, please? thanks! kevinburtonsmith says: i’ll get on it today. dshr's blog: hardware i/o virtualization. i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation. tuesday, december , hardware i/o virtualization. at enterprisetech.com, timothy prickett morgan has an interesting post entitled a rare peek into the massive scale of aws. it is based on a talk by amazon's james hamilton at the re:invent conference. morgan's post provides a hierarchical, network-centric view of the aws infrastructure: regions, of them around the world, contain availability zones (azs). the azs are arranged so that each region contains at least and up to datacenters.
morgan estimates that there are close to datacenters in total, each with racks, burning - mw. each rack holds to servers. azs are no more than ms apart measured in network latency, allowing for synchronous replication. this means the azs in a region are only a couple of kilometres apart, which is less geographic diversity than one might want, but a disaster still has to have a pretty big radius to take out more than one az. the datacenters in an az are not more than us apart in latency terms, close enough that a disaster might take all the datacenters in one az out. below the fold, some details and the connection between what amazon is doing now, and what we did in the early days of nvidia. amazon uses custom-built hardware, including network hardware, and their own network software. doing so is simpler and more efficient than generic hardware and software because they only need to support a very restricted set of configurations and services. in particular they build their own network interface cards (nics). the reason is particularly interesting to me, as it is to solve exactly the same problem that we faced as we started nvidia more than two decades ago. the state-of-the-art of pc games, and thus pc graphics, were based on windows, at that stage little more than a library on top of ms-dos. the game was the only application running on the hardware. it didn't have to share the hardware with, and thus need the operating system (os) to protect it from, any other application. coming from the unix world we knew how the os shared access to physical hardware devices, such as the graphics chip, among multiple processes while protecting them (and the operating system) from each other. processes didn't access the devices directly, they made system calls which invoked device driver code in the os kernel that accessed the physical hardware on their behalf. we understood that windows would have to evolve into a multi-process os with real inter-process protection. 
our problem, like amazon's, was two-fold: latency and the variance of latency. if the games were to provide arcade performance on mid- s pcs, there was no way the game software could take the overhead of calling into the os to perform graphics operations on its behalf. it had to talk directly to the graphics chip, not via a driver in the os kernel. if there had been only a single process doing graphics, such as the x server, this would not have been a problem. using the memory management unit (mmu), the hardware provided to mediate access of multiple processes to memory, the os could have mapped the graphics chip's i/o registers into that process's address space. that process could access the graphics chip with no os overhead. other processes would have to use inter-process communications to request graphics operations, as x clients do. [image: sega's virtua fighter on nv ] because we expected there to be many applications simultaneously doing graphics, and they all needed low, stable latency, we needed to make it possible for the os to safely map the chip's registers into multiple processes at one time. we devoted a lot of the first nvidia chip to implementing what looked to the application like independent sets of i/o registers. the os could map one of the sets into a process's address space, allowing it to do graphics by writing directly to these hardware registers. the technical name for this is hardware i/o virtualization; we pioneered this technology in the pc space. it provided the very low latency that permitted arcade performance on the pc, despite other processes doing graphics at the same time. and because the competition between the multiple processes' accesses to their virtual i/o resources was mediated on-chip as it mapped the accesses to the real underlying resources, it provided very stable latency without the disruptive long tail that degrades the user experience.
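the scheme described above can be sketched as a toy model: a chip exposes several independent register sets, the os hands one set to each process, and the chip mediates access to the single real device. this is purely illustrative; no resemblance to the actual nv1 register layout is claimed.

```python
# toy model of hardware i/o virtualization: per-process virtual register
# sets, mediated on-chip onto one real command stream. entirely illustrative.

class VirtualizedChip:
    def __init__(self, num_register_sets):
        self.register_sets = [dict() for _ in range(num_register_sets)]
        self.free = list(range(num_register_sets))
        self.command_log = []  # the one real underlying resource

    def map_for_process(self, pid):
        """os maps a free register set into a process's address space."""
        slot = self.free.pop(0)
        return RegisterSet(self, slot, pid)

class RegisterSet:
    def __init__(self, chip, slot, pid):
        self.chip, self.slot, self.pid = chip, slot, pid

    def write(self, register, value):
        """direct register write: no system call; mediation happens on-chip."""
        self.chip.register_sets[self.slot][register] = value
        self.chip.command_log.append((self.pid, register, value))

chip = VirtualizedChip(num_register_sets=4)
game = chip.map_for_process(pid=100)
gui = chip.map_for_process(pid=200)
game.write("DRAW", "triangle")
gui.write("DRAW", "window")
# each process sees only its own register set; the chip serialises the work
print(chip.register_sets[0], chip.register_sets[1])
```

the point of the model is that neither process can see or disturb the other's registers, and no os call sits on the hot path, which is where both the low latency and the stable latency come from.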
amazon's problem was that, like pcs running multiple graphics applications on one real graphics card, they run many virtual machines (vms) on each real server. these vms have to share access to the physical network interface card (nic). mediating this in software in the hypervisor imposes both overhead and variance. their answer was enhanced nics: the network interface cards support single root i/o virtualization (sr-iov), which is an extension to the pci-express protocol that allows the resources on a physical network device to be virtualized. sr-iov gets around the normal software stack running in the operating system and its network drivers and the hypervisor layer that they sit on. it takes milliseconds to wade down through this software from the application to the network card. it only takes microseconds to get through the network card itself, and it takes nanoseconds to traverse the light pipes out to another network interface in another server. “this is another way of saying that the only thing that matters is the software latency at either end,” explained hamilton. sr-iov is much lighter weight and gives each guest partition on a virtual machine its own virtual network interface card, which rides on the physical card. this, as shown on hamilton's graph, provides much less variance in latency: the new network, after it was virtualized and pumped up, showed about a x drop in latency compared to the old network at the th percentile for latency on data transmissions, and at the . th percentile the latency dropped by about a factor of x. the importance of reducing the variance of latency for web services at amazon scale is detailed in a fascinating, must-read paper, the tail at scale by dean and barroso. amazon had essentially the same problem we had, and came up with the same basic hardware solution - hardware i/o virtualization. posted by david. at : am labels: amazon, networking comments: david. said... 
in the first of a series, rich miller at data center frontier adds a little to the timothy prickett morgan post. david. said... pettyofficer 's video history of nvidia gpus is worth watching. andromeda yelton. i haven’t failed, i’ve tried an ml approach that *might* work! when last we met i was turning a perfectly innocent neural net into a terribly ineffective one, in an attempt to get it to be better at face recognition in archival photos. i was also (what cultural heritage technology experience would be complete without this?) being foiled by metadata. so, uh, i stopped using metadata. … continue reading. i haven’t failed, i’ve just tried a lot of ml approaches that don’t work. “let’s blog every friday,” i thought. “it’ll be great.
people can see what i’m doing with ml, and it will be a useful practice for me!” and then i went through weeks on end of feeling like i had nothing to report because i was trying approach after approach to this one problem that simply … continue reading. this time: speaking about machine learning. no tech blogging this week because most of my time was taken up with telling people about ml instead! one talk for an internal harvard audience, “alice in dataland”, where i explained some of the basics of neural nets and walked people through the stories i found through visualizing hamlet data. one talk for the … continue reading. archival face recognition for fun and nonprofit. in , dominique luster gave a super good code lib talk about applying ai to metadata for the charles “teenie” harris collection at the carnegie museum of art – more than , photographs of black life in pittsburgh. they experimented with solutions to various metadata problems, but the one that’s stuck in my head since … continue reading. sequence models of language: slightly irksome. not much ai blogging this week because i have been buried in adulting all week, which hasn’t left much time for machine learning. sadface. however, i’m in the last week of the last deeplearning.ai course! (well. of the deeplearning.ai sequence that existed when i started, anyway. they’ve since added an nlp course and a gans … continue reading. adapting coursera’s neural style transfer code to localhost. last time, when making cats from the void, i promised that i’d discuss how i adapted the neural style transfer code from coursera’s convolutional neural networks course to run on localhost. here you go!
step : first, of course, download (as python) the script. you’ll also need the nst_utils.py file, which you can access via … continue reading. dear internet, merry christmas; my robot made you cats from the void. recently i learned how neural style transfer works. i wanted to be able to play with it more and gain some insights, so i adapted the coursera notebook code to something that works on localhost (more on that in a later post), found myself a nice historical cat image via dpla, and started mashing it … continue reading. this week in my ai. after visualizing a whole bunch of theses and learning about neural style transfer and flinging myself at t-sne i feel like i should have something meaty this week but they can’t all be those weeks, i guess. still, i’m trying to hold myself to friday ai blogging, so here are some work notes: finished course … continue reading. though these be matrices, yet there is method in them. when i first trained a neural net on , theses to make hamlet, one of the things i most wanted to do is be able to visualize them. if word vec places documents “near” each other in some kind of inferred conceptual space, we should be able to see some kind of map of them, yes? … continue reading. of such stuff are (deep)dreams made: convolutional networks and neural style transfer. skipped fridai blogging last week because of thanksgiving, but let’s get back on it! top-of-mind today are the firing of ai queen timnit gebru (letter of support here) and a couple of grant applications that i’m actually eligible for (this is rare for me!
i typically need things for which i can apply in my … continue reading. zbw labs. data donation to wikidata, part : country/subject dossiers of the th century press archives. the world's largest public newspaper clippings archive comprises lots of material of great interest, particularly for authors and readers in the wikiverse. zbw has digitized the material from the first half of the last century, and has put all available metadata under a cc license. more than that, we are donating the data to wikidata, by adding or enhancing items and providing ways to access the dossiers (called "folders") and clippings easily from there. challenges of modelling a complex faceted classification in wikidata: that had been done for the persons' archive in - see our prior blog post. for persons, we could just link from existing or a few newly created person items to the biographical folders of the archive. the countries/subjects archive provided a different challenge: the folders there were organized by countries (or continents, or cities in a few cases, or other geopolitical categories), and within each country by an extended subject category system (available also as skos). to put it differently: each folder was defined by a geo facet and a subject facet - a method widely used in general-purpose press archives, because it allowed a comprehensible and, supported by a signature system, unambiguous sequential shelf order, indispensable for quick access to the printed material. folders specifically about one significant topic (like the treaty of sèvres) are rare in the press archives, whereas country/subject combinations are rare among wikidata items - so direct linking between existing items and pm folders was hardly achievable. the folders themselves had to be represented as wikidata items, just like other sources used there.
here however we did not have works or scientific articles, but thematic mini-collections of press clippings, often not notable in themselves and normally without further formal bibliographic data. so a class of pm country/subject folder was created (as a subclass of dossier, a collection of documents). aiming at items for each folder - and having them linked via pm folder id (p ) to the actual press archive folders - was yet only part of the solution. in order to represent the faceted structure of the archive, we needed anchor points for both facets. that was easy for the geographical categories: the vast majority of them already existed as items in wikidata; a few historical ones, such as russian peripheral countries, had to be created. for the subject categories, the situation was much different. categories such as the country and its people, politics and economy, general or postal services, telegraphy and telephony were constructed as baskets for collecting articles on certain broader topics. they do not have an equivalent in wikidata, which tries to describe real-world entities or clear-cut concepts. we decided therefore to represent the categories of the subject category system with their own items of type pm subject category. each of the about categories is connected to the one above it via a "part of" (p ) property, thus forming a five-level hierarchy. more implementation subtleties: for both facets, corresponding wikidata properties were created as "pm geo code" (p ) and "pm subject code" (p ). as external identifiers, they link directly to lists of subjects (e.g., for japan) or geographical entities (e.g., for the country ..., general). for all countries where the press archives material has been processed - this includes the tedious task of clarifying the intellectual property rights status of each article - the wikidata item for the country now includes a link to a list of all press archives dossiers about this country, covering the first half of the th century.
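the "part of" hierarchy described above can be sketched as a toy model. the category labels below are invented examples (only "n economy" appears in the post), and the real property numbers are elided in the text, so the "part of" relation is shown symbolically rather than by its property id:

```python
# toy model of the pm subject category system: each category item points to
# its parent via a "part of" claim, forming a five-level hierarchy.
# category names other than "n economy" are invented for illustration.

categories = {
    # child: parent ("part of"); None marks the top level
    "n economy": None,
    "trade": "n economy",
    "foreign trade": "trade",
    "trade agreements": "foreign trade",
    "trade agreements, bilateral": "trade agreements",
}

def depth(category: str) -> int:
    """levels from the top of the hierarchy, following "part of" upwards."""
    d = 1
    while categories[category] is not None:
        category = categories[category]
        d += 1
    return d

print(depth("trade agreements, bilateral"))  # walks up to "n economy"
```

modelling the categories as items with parent links is what makes the faceted shelf order queryable: any folder tagged with a leaf category can be rolled up to its section of the category system.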
The folders represented in Wikidata (e.g., "Japan : the country ..., general") use "facet of" and "main subject" properties to connect to the items for the country and the subject category. Thus each of the accessible folders of the PM20 country/subject archive is reachable via Wikidata. And since the structural metadata of PM20 is available too, it can be queried in its various dimensions: see for example the list of top-level subject categories with the number of folders and documents, or a list of folders per country, ordered by signature (with subtleties covered by a "series ordinal" qualifier). The interactive map of subject folders shown above is also created by a SPARQL query, and gives a first impression of the geographical areas covered in depth, or as yet only sparsely, in the online archive.

Core areas: worldwide economy, worldwide colonialism

The online data reveals the core areas of attention during the years of press clippings collection. Economy, of course, was the focus of the former HWWA (Hamburg Archive for the International Economy), in Germany and namely Hamburg, as well as in every other country. More than half of all subject categories are part of the "n Economy" section of the category system and give, in their folders, very detailed access to the field. A large share of the online documents of the archive belongs to this section, followed by history and general politics, foreign policy, and public finance, down to more peripheral topics like settling and migration, minorities, justice, or literature. Originating in the history of the institution (which was founded as "Zentralstelle des Hamburgischen Kolonialinstituts", the central office of the Hamburg Colonial Institute), colonial efforts all over the world were monitored closely.
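Queries like the folders-per-country list mentioned above run against the Wikidata Query Service. The following is a minimal sketch of how such a query could be assembled; it uses the standard Wikidata properties P31 ("instance of"), P1269 ("facet of") and P921 ("main subject"), while the item ID for the "PM20 country/subject folder" class is a hypothetical placeholder:

```python
# Sketch: build a SPARQL query listing PM20 folder items for one country.
# FOLDER_CLASS is a placeholder, not the real class item ID.
FOLDER_CLASS = "Q000000"  # hypothetical "PM20 country/subject folder" class

def folders_for_country(country_qid: str) -> str:
    """Return a SPARQL query string for all folders faceted by a country."""
    return f"""
SELECT ?folder ?folderLabel ?subject ?subjectLabel WHERE {{
  ?folder wdt:P31 wd:{FOLDER_CLASS} ;    # instance of: PM20 folder class
          wdt:P1269 wd:{country_qid} ;   # facet of: the country item
          wdt:P921 ?subject .            # main subject: subject category item
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
"""

query = folders_for_country("Q17")  # Q17 = Japan
```

The string could then be posted to the WDQS endpoint; the label service clause resolves English labels for display.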
We published with priority the material about the former German colonies, listed in the Archivführer Deutsche Kolonialgeschichte (archive guide to the German colonial past, also interconnected with Wikidata). Originally collected to support the aggressive and inhuman policy of the German Empire, it is now available to serve as research material for critical analysis in the emerging field of colonial and postcolonial studies.

Enabling future community efforts

While all material about the German colonies (and some about the Italian ones) is online and accessible via Wikidata, this is not true for the former British, French, Dutch and Belgian colonies. While Japan and Argentina are accessible completely, China, India and the US are missing, as are most of the European countries. And while the folders about Hamburg cover its contemporary history quite well, the vast majority of the material about Germany as a whole is only accessible "on premises" within ZBW's locations. It is, however, available as digital images, and can be accessed through finding aids (in German), which in the reading rooms link directly to a document viewer. The metadata for this material is now open data and can be changed and enhanced in Wikidata. A very selective example of how that could work is a topic in German-Danish history, the Schleswig plebiscites. The PM20 folder about these events was not part of the published material, but attracted some interest with last year's centenary. The PM20 metadata on Wikidata made it possible to create a corresponding folder completely in Wikidata, "Nordslesvig : historical events", with a (provisional) link to a stretch of images on a digitized film. While the checking and activation of these images for the public was a one-time effort in the context of an open science event, the creation of a new PM20 folder on Wikidata may demonstrate how open metadata can be used by a dedicated community of knowledge to enable access to not-yet-open knowledge.
Current intellectual property law in the EU forbids open access to digitized clippings from more recently published newspapers, and to all clippings where the death date of a named author is not known to lie far enough in the past. Of course, we hope for a change in that obstructive legislation in a not-so-far future. We are confident that the metadata about the material, now in Wikidata, will help bridge the gap until it finally becomes possible to use all digitized press archives contents as open scientific and educational resources, within and outside of the Wikimedia projects. More information can be found at the WikiProject 20th Century Press Archives, which also links to the code for creating this data donation.

Pressemappe 20. Jahrhundert, Wikidata

Building the SWIB participants map

Here we describe the process of building the interactive SWIB participants map, created by a query to Wikidata. The map was intended to support participants of SWIB in making contacts in the virtual conference space. However, in compliance with the GDPR we wanted to avoid publishing personal details, so we chose to publish a map of the institutions to which the participants are affiliated. (An obvious downside: un-affiliated participants could not be represented on the map.) We suppose that the method can be applied to other conferences and other use cases, e.g., the downloaders of scientific software or the institutions subscribed to an academic journal. Therefore, we describe the process in some detail. We started with a list of institution names (with country code and city, but without person IDs), extracted and transformed from our ConfTool registration system, and saved it in CSV format. Country names were normalized; cities were not (and were only used for context information). We created an OpenRefine project and reconciled the institution name column with Wikidata items of type "organization" (and all its subtypes).
We included the country column (mapped to "country") as a relevant other detail, and let OpenRefine "auto-match candidates with high confidence". Of our original set of country/institution entries, a large share was automatically matched via the Wikidata reconciliation service. At the end of the conference, the identified institutions were put on the map (data set). We went through all unmatched entries and either
a) selected one of the suggested items, or
b) looked up and tweaked the name string in Wikidata, or in Google, until we found a matching Wikipedia page, opened the linked Wikidata object from there, and inserted the QID in OpenRefine, or
c) created a new Wikidata item (if the institution seemed notable), or
d) attached "not yet determined" where no Wikidata item exists (yet), or
e) attached "undefined value" where no institution had been given.
The results were exported from OpenRefine into a .tsv file (settings). Again via a script, we loaded the ConfTool participants data, built a lookup table from all available OpenRefine results (country/name string to Wikidata item QID), aggregated participant counts per QID, and loaded that data into a custom SPARQL endpoint, which is accessible from the Wikidata Query Service. As before, a .csv file was produced for all (new) institution name strings not yet mapped to Wikidata. (An additional remark: if no approved custom SPARQL endpoint is available, it is feasible to generate a static query with all data in its VALUES clause.) During the preparation of the conference, more and more participants registered, which required multiple loops: take the csv file of the previous step and re-iterate, starting at the reconciliation step. (Since I found no straightforward way to update an existing OpenRefine project with extended data, I created a new project with new input and output files for every iteration.) Finally, to display the map, we could run a federated query on WDQS.
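The lookup-and-aggregate step of the script can be sketched as follows. This is a hedged illustration, not the actual script: the file layout and column names (`country`, `name`, `qid`) are assumptions.

```python
import csv
from collections import Counter

def build_lookup(openrefine_tsv_paths):
    """Build a (country, institution name) -> Wikidata QID table from all
    exported OpenRefine result files (columns assumed: country, name, qid)."""
    lookup = {}
    for path in openrefine_tsv_paths:
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f, delimiter="\t"):
                lookup[(row["country"], row["name"])] = row["qid"]
    return lookup

def participant_counts(participants, lookup):
    """Aggregate participant counts per matched QID, and collect the
    still-unmatched (country, name) pairs for the next iteration."""
    counts, unmatched = Counter(), []
    for country, name in participants:
        qid = lookup.get((country, name))
        if qid:
            counts[qid] += 1
        else:
            unmatched.append((country, name))
    return counts, unmatched
```

The unmatched pairs would then be written back to the .csv file that feeds the next OpenRefine iteration, while the counts are loaded into the custom SPARQL endpoint.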
It fetches the institution items from the custom endpoint and enriches them from Wikidata with the name, logo and image of the institution (if present), as well as with geographic coordinates, obtained directly or indirectly as follows:
a) the item has a "coordinate location" itself, or
b) the item has a "headquarters location" item with coordinates, or
c) the item has a "located in administrative entity" item with coordinates, or
d) the item has a "country" item with coordinates.
Applying this method, only one institution item could not be located on the map.

Data improvements

The way to improve the map was to improve the data about the items in Wikidata, which also helps all future Wikidata users.

New items

For a few institutions, new items were created: Burundi Association of Librarians, Archivists and Documentalists; FAO Representation in Kenya; Aurora Information Technology; Istituto di Informatica Giuridica e Sistemi Giudiziari. For a few other institutions, mostly private companies, no items were created due to notability concerns. Everything else already had an item in Wikidata!

Improvement of existing items

In order to improve the display on the map, we enhanced selected items in Wikidata in various ways: adding an English label, adding a type (instance of), adding the headquarters location, adding an image and/or logo. And we hope that participants of the conference also took the opportunity to make their institution "look better", for example by adding an image of it to the Wikidata knowledge base. Putting Wikidata to use for a completely custom purpose thus created incentives for improving "the sum of all human knowledge", step by tiny step.
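The coordinate fallback chain above can be sketched as a small function. This is an illustrative sketch, not the actual federated query: it assumes the per-item coordinate candidates (own P625, headquarters P159, administrative entity P131, country P17) have already been resolved into a dict.

```python
def pick_coordinates(item):
    """Choose coordinates for an institution item using the fallback chain
    described in the text. `item` is assumed to be a dict mapping the four
    candidate sources to (lat, lon) pairs, where available."""
    for key in ("coordinate_location",      # a) P625 on the item itself
                "headquarters_location",    # b) P159 item's coordinates
                "admin_entity_location",    # c) P131 item's coordinates
                "country_location"):        # d) P17 item's coordinates
        coords = item.get(key)
        if coords is not None:
            return coords
    return None  # the item cannot be placed on the map
```

In the real federated SPARQL query, the same effect is typically achieved with OPTIONAL clauses and COALESCE over the property paths.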
Journal Map: developing an open environment for accessing and analyzing performance indicators from journals in economics

By Franz Osorio, Timo Borst

Introduction

Bibliometrics, scientometrics, informetrics and webometrics have been both research topics and practical guidelines for publishing, reading, citing, measuring and acquiring published research for a while (Hood). Citation databases and measures became benchmarks both for the publishing industry and for academic libraries managing their holdings and journal acquisitions, which tend to be more selective given a growing number of journals on the one side and budget cuts on the other. Due to the open access movement triggering a transformation of traditional publishing models (Schimmer), and in the light of global and distributed information infrastructures for publishing and communicating on the web that have yielded more diverse practices and communities, this situation has changed dramatically: while bibliometrics of research output in its core understanding is still highly relevant to stakeholders and the scientific community, the visibility, influence and impact of scientific results has shifted to locations on the World Wide Web that are commonly shared and quickly accessible not only by peers, but by the general public (Thelwall). This has several implications for the different stakeholders who refer to metrics in dealing with scientific results. With the rise of social networks and platforms and their use by academics and research communities, the term "metrics" itself has gained a broader meaning: while traditional citation indexes only track citations of literature published in (other) journals, "mentions", "reads" and "tweets", albeit less formal, have become indicators and measures of (scientific) impact.
Altmetrics has influenced research performance, evaluation and measurement, which formerly had been exclusively associated with traditional bibliometrics. Scientists are becoming aware of alternative publishing channels and of both the option and the need of 'self-advertising' their output. Academic libraries in particular are forced to manage their journal subscriptions and holdings in the light of increasing scientific output on the one hand, and stagnating budgets on the other. While editorial products from the publishing industry are exposed to a globally competing market requiring a 'brand' strategy, altmetrics may serve as additional, scattered indicators of scientific awareness and value. Against this background, we took the opportunity to collect, process and display some impact or signal data with respect to literature in economics from different sources, such as 'traditional' citation databases, journal rankings and community platforms or altmetrics indicators:

- CitEc. The long-standing citation service maintained by the RePEc community provided a dump of both working papers (as part of series) and journal articles, the latter with significant information on classic measures such as the impact factor and the h-index.
- Rankings of journals in economics, including the SCImago Journal Rank (SJR) and two German journal rankings that are regularly released and updated (VHB-JOURQUAL, Handelsblatt ranking).
- Usage data from Altmetric.com, collected for those articles that could be identified via their Digital Object Identifier.
- Usage data from the scientific community platform and reference manager Mendeley.com, in particular the number of saves or bookmarks of an individual paper.

Requirements

A major consideration for this project was finding an open environment in which to implement it. Finding an open platform to use served a few purposes.
As a member of the Leibniz Association, ZBW has a commitment to open science, and in part that means making use of open technologies to as great an extent as possible (The ZBW - Open Science Future). This open system should allow direct access to the underlying data, so that users are able to use it for their own investigations and purposes. Additionally, if possible, the user should be able to manipulate the data within the system. The first instance of the project was created in Tableau, which offers a variety of means to express data and to create interfaces that let the user filter and manipulate data. It also provides a way to work with the data and create visualizations without programming skills or knowledge. Tableau is one of the most popular tools to create and deliver data visualizations, in particular within academic libraries (Murphy). However, the software is proprietary, carries a monthly fee for use and maintenance, and closes off the data, making only the final visualization available to users. It was able to provide a starting point for how we wanted the data to appear to the user, but it is in no way open.

Challenges

The first technical challenge was to consolidate the data from the different sources, which had varying formats and organizations. Broadly speaking, the bibliometric data (CitEc and the journal rankings) existed as a spreadsheet with multiple pages, while the altmetrics and Mendeley data came from database dumps with multiple tables that were presented as several CSV files. In addition to these different formats, the data needed to be cleaned and gaps filled in. The sources also had very different scopes: the altmetrics and Mendeley data covered only a subset of journals, while the bibliometric data covered many more. Transitioning from Tableau to an open platform was a big challenge.
While there are many ways to create data visualizations and present them to users, the decision was made to use R to work with the data and Shiny to present it. R is widely used to work with data and to present it (Kläre). The language has lots of support for these kinds of tasks across many libraries. The primary libraries used were Plotly and Shiny. Plotly is a popular library for creating interactive visualizations: without too much work, it can provide features including information popups while hovering over a chart and on-the-fly filtering. Shiny provides a framework to create a web application to present the data without requiring a lot of work to create HTML and CSS. The transition required time spent getting to know R and its libraries, in order to learn how to create the kinds of charts and filters that would be useful for users. While Shiny alleviates the need to write HTML and CSS, it does have a specific set of requirements and structures in order to function. The final challenge was in making this project accessible to users such that they would be able to see what we had done, have access to the data, and have an environment in which they could explore the data without needing anything other than what we were providing. In order to achieve this, we used Binder as the platform. At its most basic, Binder makes it possible to share a Jupyter notebook stored in a GitHub repository via a URL, by running the notebook remotely and providing access through the browser, with no requirements placed on the user. Additionally, Binder is able to run a web application using R and Shiny. To move from a locally running instance of R Shiny to one that can run in Binder, instructions for the runtime environment need to be created and added to the repository. These include information on which version of the language to use, which packages and libraries to install, and any additional requirements there might be to run everything.
Solutions

Given the disparate sources and formats of the data, some work needed to be done to prepare it for visualization. The largest dataset, the bibliographic data, had several identifiers for each journal but no journal names. Having the journal names is important because, in general, the names are how users will know the journals. Adding the names to the data would allow users to filter on specific journals or pull up two journals for a comparison. Providing the names of the journals is also a benefit for anyone who may repurpose the data, and saves them from having to look the names up. In order to fill this gap, we used metadata available through Research Papers in Economics (RePEc). RePEc is an organization that seeks to "enhance the dissemination of research in economics and related sciences". It contains metadata for millions of papers, available in different formats. The bibliographic data contained RePEc handles, which we used to look up the journal information as XML and then parse the XML to find the title of the journal. After writing a small Python script to go through the RePEc data and find the missing names, only a few journals' names were still missing. For the data that originated in a MySQL database, the major work that needed to be done was to correct the formatting. The data was provided as CSV files, but it was not formatted such that it could be used right away. Some of the fields contained double quotation marks, and when the CSV file was created those quotes were wrapped in additional quotation marks, resulting in doubled quotation marks that made machine parsing difficult without intervention directly on the files. The work was to go through the files and remove the doubled quotation marks. In addition to that, it was useful for some visualizations to provide a condensed version of the data. The data from the database was at the article level, which is useful for some things but could be time-consuming for other actions.
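The title lookup described above can be sketched in a few lines. This is a hedged illustration only: it assumes the journal record was available as simple XML with a `title` element somewhere in it; the real RePEc/ReDIF metadata format may require different element paths.

```python
import xml.etree.ElementTree as ET

def journal_title(xml_text):
    """Extract a journal title from an XML metadata record.
    Assumes a <title> element exists somewhere in the record; the actual
    RePEc response format may need an adjusted search path."""
    root = ET.fromstring(xml_text)
    node = root.find(".//title")  # first <title> anywhere in the tree
    return node.text.strip() if node is not None and node.text else None
```

A script along these lines would iterate over the RePEc handles in the bibliographic data, fetch each record, and fill in the missing name column.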
For example, the altmetrics data covered only a subset of journals but had a very large number of article-level rows. We could use the Python library pandas to go through all those rows and condense the data down to one row per journal, with each column being the sum over that journal's rows. In this way, there is a dataset that can be used to easily and quickly generate summaries at the journal level. Shiny applications require a specific structure and set of files in order to do the work of creating HTML without the need to write full HTML and CSS. At its most basic, there are two main parts to a Shiny application. The first defines the user interface (UI) of the page: it says what goes where, what kinds of elements to include, and how things are labeled. This section defines what the user interacts with, by creating inputs and also defining the layout of the output. The second part acts as a server that handles the computations and processing of the data that will be passed on to the UI for display. The two pieces work in tandem, passing information back and forth to create a visualization based on user input. Using Shiny allowed almost all of the time spent on the project to be concentrated on processing the data and creating the visualizations. The only difficulty in creating the frontend was making sure all the pieces of the UI and server were connected correctly. Binder provided a solution for hosting the application, making the data available to users, and making it shareable, all in an open environment. Notebooks and applications hosted with Binder are shareable in part because the source is often a repository like GitHub. By passing a GitHub repository to Binder, say one that has a Jupyter notebook in it, Binder will build a Docker image to run the notebook and then serve the result to the user without them needing to do anything. Out of the box, the Docker image will contain only the most basic functions.
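The project used pandas for the journal-level condensing; the same aggregation can be sketched with the standard library alone. Column names here are illustrative, not the actual schema:

```python
from collections import defaultdict

def condense_by_journal(rows, key="journal"):
    """Turn article-level rows into one summary row per journal, with each
    numeric column being the sum over that journal's articles."""
    totals = defaultdict(lambda: defaultdict(float))
    for row in rows:
        for col, value in row.items():
            if col != key:
                totals[row[key]][col] += value
    return {journal: dict(cols) for journal, cols in totals.items()}
```

With pandas itself this collapses to a one-liner along the lines of `df.groupby("journal").sum()`.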
The result is that if a notebook requires a library that isn't standard, it won't be possible to run all of the code in the notebook. In order to address this, Binder allows the repository to include certain files that define what extra elements should be added when building the Docker image. This can be very specific, such as which version of the language to use, and can list the various libraries to include so that the notebook runs smoothly. Binder also has support for more advanced functionality in the Docker images, such as creating a PostgreSQL database and loading it with data. These kinds of activities require using different hooks that Binder looks for during the creation of the Docker image in order to run scripts.

Results and evaluation

The final product has three main sections that divide the data categorically into altmetrics, bibliometrics, and data from Mendeley. There are additionally some sections that exist as areas where something new can be tried out and refined without potentially causing issues in the three previously mentioned areas. Each section has visualizations based on the data available. Considering the requirements for the project, the result goes a long way towards meeting them. The most apparent area in which the Journal Map succeeds is its goal of presenting the data we have collected. The application serves as a dashboard for the data that can be explored by changing filters and journal selections. By presenting the data as a dashboard, the barrier to entry for users to explore the data is low. However, there also exists a way to access the data directly and perform new calculations or create new visualizations. This can be done through the application's access to an RStudio environment. Access to RStudio provides two major features. First, it gives direct access to all the underlying code that creates the dashboard and to the data used by it.
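For an R/Shiny repository, Binder's configuration conventionally consists of a `runtime.txt` that pins R to a dated package snapshot (a line of the form `r-<YYYY-MM-DD>`) and an `install.R` that lists the packages to install. A minimal sketch, with the package list assumed from the libraries named above:

```r
# install.R - executed by Binder while building the Docker image.
# Package list assumed from the text (Shiny and Plotly); the real
# repository may install more.
install.packages(c("shiny", "plotly"))
```

More advanced hooks such as `postBuild` scripts cover the database-loading case mentioned in the text.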
Second, it provides an R terminal so that users can work with the data directly. In RStudio, the user can also modify the existing files and then run them to see the results. Using Binder and R as the backend of the application allows us to provide users with different ways to access and work with the data without any extra requirements on the part of the user. However, anything changed in RStudio won't affect the dashboard view and won't persist between sessions; changes exist only in the current session. All the major pieces of this project could be done using open technologies: Binder to serve the application, R to write the code, and GitHub to host all the code. Using these technologies and leveraging their capabilities allows the project to support the open science paradigm that was part of the impetus for the project. The biggest drawback of the current implementation is that Binder is a third-party host, so certain things are out of our control. For example, Binder can be slow to load: it takes several minutes for the Docker image to load, and there's not much, if anything, we can do to speed that up. The other issue is that if there is an update to the Binder source code that breaks something, then the application will be inaccessible until the issue is resolved.

Outlook and future work

The application, in its current state, has parts that are not finalized. As we receive feedback, we will make changes to the application to add or change visualizations. As mentioned previously, a few sections were created to test different visualizations independently of the more complete sections; those can be finalized. In the future it may be possible to move from BinderHub to a locally created and administered instance of Binder. There is support and documentation for creating local, self-hosted instances of Binder.
Going that direction would give us more control, and it may make it possible to get the Docker image to load more quickly. While the application runs stand-alone, the data that is visualized may also be integrated into other contexts. One option we are already prototyping is integrating the data into our subject portal EconBiz, so that users would be able to judge the scientific impact of an article in terms of both bibliometric and altmetric indicators.

References

- William W. Hood, Concepcion S. Wilson. The literature of bibliometrics, scientometrics, and informetrics. Scientometrics, Springer Science and Business Media LLC. Link
- R. Schimmer. Disrupting the subscription journals' business model for the necessary large-scale transformation to open access. Link
- Mike Thelwall, Stefanie Haustein, Vincent Larivière, Cassidy R. Sugimoto. Do altmetrics work? Twitter and ten other social web services. PLOS ONE, Public Library of Science (PLOS). Link
- The ZBW - Open Science Future. Link
- Sarah Anne Murphy. Data visualization and rapid analytics: applying Tableau Desktop to support library decision-making. Journal of Web Librarianship, Informa UK Limited. Link
- Christina Kläre, Timo Borst. Statistic packages and their use in research in economics. EDaWaX - blog of the project 'European Data Watch Extended'. Link

Journal Map - Binder application for displaying and analyzing metrics data about scientific journals

Integrating altmetrics into a subject repository - EconStor as a use case

Some years back, the ZBW Leibniz Information Centre for Economics (ZBW) teamed up with the Göttingen State and University Library (SUB), the service center of the Göttingen Library Federation (VZG) and GESIS Leibniz Institute for the Social Sciences in the *metrics project funded by the German Research Foundation (DFG).
The aim of the project was "… to develop a deeper understanding of *metrics, especially in terms of their general significance and their perception amongst stakeholders." (*metrics project about). In the practical part of the project, the following DSpace-based repositories of the project partners participated as data sources for online publications and, in the case of EconStor, also as implementer of the presentation of the social media signals:

- EconStor - a subject repository for economics and business studies run by the ZBW, containing a large number of downloadable files,
- GoeScholar - the publication server of the Georg-August-Universität Göttingen, run by the SUB Göttingen, offering publicly browsable items,
- SSOAR - the "Social Science Open Access Repository" maintained by GESIS, containing publicly available items.

In the project's work package "Technology analysis for the collection and provision of *metrics", an analysis of currently available *metrics technologies and services was performed. As stated by [Wilsdon], current suppliers of altmetrics "remain too narrow (mainly considering research products with DOIs)", which makes it difficult to acquire *metrics data for repositories like EconStor, whose main content is working papers: up to now it is unusual, at least in the social sciences and economics, to create DOIs for these kinds of documents; only the resulting final article published in a journal will receive a DOI. Based on the findings in this work package, a test implementation of the *metrics crawler was built. The crawler was actively deployed at the VZG from early in the project until the following spring. For the aggregation of the *metrics data, the crawler was fed with persistent identifiers and metadata from the aforementioned repositories. At this stage of the project, the project partners still had the expectation that the persistent identifiers (e.g.
Handle, URNs, …), or their local URL counterparts as used by the repositories, could be harnessed to easily identify social media mentions of their documents, e.g. for EconStor:

- handle: "hdl: /…"
- Handle.net resolver URL: "http(s)://hdl.handle.net/ /…"
- EconStor landing page URL with handle: "http(s)://www.econstor.eu/handle/ /…"
- EconStor bitstream (PDF) URL with handle: "http(s)://www.econstor.eu/bitstream/ /…"

This resulted in two datasets: one for publications identified by DOIs (doi: .xxxx/yyyyy) or the respective metadata from Crossref, and one for documents identified by the repository URLs (https://www.econstor.eu/handle/ /xxxx) or the item's metadata stored in the repository. During the first part of the project, several social media platforms were identified as possible data sources for the implementation phase. This was done via interviews and online surveys; for the resulting ranking see the social media registry. Additional research examined which social media platforms are relevant to researchers at different stages of their careers, and if and how they use them (see [Lemke], [Lemke] and [Mehrazar]). This list of possible sources for social media citations or mentions was then further reduced to the following six social media platforms offering free and openly available online APIs: Facebook, Mendeley, Reddit, Twitter, Wikipedia, YouTube. Of particular interest to the EconStor team were the social media services Mendeley and Twitter, as those had been found to be among the "top most used altmetric sources" for economics and business studies (EBS) journals, "with Mendeley being the most complete platform for EBS journals" [Nuredini].

*metrics integration in EconStor

Eventually the EconStor team received a MySQL dump of the data compiled by the *metrics crawler.
In consultations between the project partners, and based on the aforementioned research, it became clear that only the collected data from Mendeley, Twitter and Wikipedia were suitable for embedding into EconStor. The VZG also made clear that it had been nearly impossible to use handles, or the respective local URLs, to extract social media mentions from the free-of-charge APIs of the different social media services. Instead, in the case of Wikipedia, ISBNs were used, and for Mendeley the title and author(s) as provided in the repository's metadata; only for the search via the Twitter API were the handle URLs used. The datasets used by the *metrics crawler to identify works from EconStor included a dataset of DOIs (a fraction of the EconStor content back then), sometimes representing other manifestations of the documents stored in EconStor (e.g. pre- or postprint versions of an article), with their respective metadata from the Crossref DOI registry, and also a dataset of EconStor documents identified by handle/URL and the metadata stored in the repository itself. This second dataset also included the documents related to the publications identified by the DOI set. The following table (Table 1) shows the results of the *metrics crawler for items in EconStor. It displays one row for each service and identifier set; each row also shows the time period during which the crawler harvested the service and how many unique items per identifier set were found during that period.
Social media service (set) | Harvested from | Harvested until | Unique EconStor items mentioned
Mendeley (DOI) | … | … | …
Mendeley (URL) | … | … | …
Twitter (DOI) | … (date of first captured tweet) | … (date of last captured tweet) | …
Twitter (URL) | … (date of first captured tweet) | … (date of last captured tweet) | …
Wikipedia (DOI) | … | … | …
Wikipedia (URL) | … | … | …

Table 1: Unique EconStor items found per identifier set and social media service

The following table (Table 2) shows how many of the EconStor items were found with identifiers from both sets. As can be seen, only for the service Mendeley do the sets have a significant overlap. This shows that it is desirable for a service such as EconStor to expand the captured coverage of its items in social media by the use of identifiers other than just DOIs.

Social media site | Unique items identified by both DOI and URL
Mendeley | …
Twitter | …
Wikipedia | …

Table 2: Overlap in found identifiers

As a result of the project, the landing pages of EconStor items which were mentioned on Mendeley, Twitter or Wikipedia during the time of data gathering now have, for the time being, a listing of "social media mentions". This is in addition to the already existing cites and citations based on the RePEc CitEc service, and the download statistics, which are displayed on separate pages.

Image: "EconStor item landing page"

The back end on the EconStor server is realized as a small RESTful web service programmed in Java that returns JSON-formatted data. Given a list of identifiers (DOIs/handles), it returns the sum of mentions for Mendeley, Twitter and Wikipedia in the database per specified EconStor item, as well as the links to the counted tweets and Wikipedia articles. In the case of Wikipedia, this is also grouped by the language of the Wikipedia in which the mention was found.
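A client consuming this web service's JSON response might summarize it as follows. This is a hedged sketch: the field names follow the example response shown in the text, but the shape of "wikipediaquerys" (assumed here to map language codes to lists of article references) is an assumption, since it is empty in the example.

```python
import json
from collections import defaultdict

def summarize_mentions(payload_text):
    """Summarize one item's mention counts from the web service's JSON.
    Field names follow the example response in the text; the structure of
    'wikipediaquerys' is assumed, not confirmed by the source."""
    data = json.loads(payload_text)
    metrics = data.get("_metrics", {})
    by_language = defaultdict(int)
    for lang, articles in data.get("wikipediaquerys", {}).items():
        by_language[lang] += len(articles)
    return {
        "mendeley": metrics.get("sum_mendeley", 0),
        "twitter": metrics.get("sum_twitter", 0),
        "wikipedia_by_language": dict(by_language),
    }
```

On the landing page, the servlet performs essentially this aggregation before rendering the per-platform sums and drop-down backlinks.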
{
  "_metrics": {
    "sum_mendeley": ,
    "sum_twitter": ,
    "sum_wikipedia":
  },
  "identifier": " / ",
  "identifiertype": "handle",
  "repository": "econstor",
  "tweetdata": {
    " ": {
      "created_at": "wed dec : : + ",
      "description": "economist wettbewerb regulierung monopole economics @dicehhu @hhu_de vwl antitrust düsseldorf quakenbrück berlin fc st. pauli",
      "id_str": " ",
      "name": "justus haucap",
      "screen_name": "haucap"
    },
    " ": {
      "created_at": "wed dec : : + ",
      "description": "twitterkanal des wirtschaftsdienst - zeitschrift für wirtschaftspolitik, hrsg. von @zbw_news; rt ≠ zustimmung; impressum: https://t.co/x gevzb lr",
      "id_str": " ",
      "name": "wirtschaftsdienst",
      "screen_name": "zeitschrift_wd"
    },
    " ": {
      "created_at": "wed dec : : + ",
      "description": "professor for international economics at htw berlin - university of applied sciences; senior policy fellow at the european council on foreign relations",
      "id_str": " ",
      "name": "sebastian dullien",
      "screen_name": "sdullien"
    }
  },
  "twitterids": [ " ", " ", " " ],
  "wikipediaquerys": {}
}

figure : "example json returned by webservice - twitter mentions"

image : "mendeley and twitter mentions"

during the creation of the landing page of an econstor item (see image ), a java servlet queries the web service and, if any social media mentions are detected, renders the result into the web page. for each of the three social media platforms, the sum of the mentions is displayed, and for twitter and wikipedia backlinks to the mentioning tweets/articles are additionally provided as a drop-down list below the number of mentions (see image ). in the case of wikipedia, the list is also grouped by the languages of the wikipedia articles in which the isbn of the corresponding work has been found.

conclusion

while they are an interesting addition to the download statistics and the repec/citec citations already integrated into econstor, the gathered "social media mentions" currently offer only limited additional value to the econstor landing pages.
one reason might be that only a fraction of all the documents in econstor are covered. another reason might be, according to [lemke ], that there is currently a great reluctance to use social media services among economists and social scientists, as such use is perceived as: "unsuitable for academic discourse; … to cost much time; … separating personal from professional matters is bothersome; … increases the efforts necessary to handle information overload." theoretically, the prospect of a tool for measuring scientific uptake with a quicker response time than classical bibliometrics could be very rewarding, especially for a repository like econstor with its many preprints (e.g. working papers) provided in open access. as [thelwall ] has stated: "in response, some publishers have turned to altmetrics, which are counts of citations or mentions in specific social web services because they can appear more rapidly than citations. for example, it would be reasonable to expect a typical article to be most tweeted on its publication day and most blogged within a month of publication." and "social media mentions, being available immediately after publication—and even before publication in the case of preprints…". but especially these preprints, which come without a doi, remain a challenge to identify correctly, and therefore to count as social media mentions. this is something the *metrics crawler has not changed, since it uses title and author metadata to search in mendeley - which does not give a % sure identification - and isbns to search in wikipedia. a quick check revealed, however, that at the time of writing this article (aug. ) wikipedia at least offers a handle search. a quick search for econstor handles in the english wikipedia now returns a list of pages with mentions of "hdl: /", as does the german wikipedia - but these are still very small numbers (aug. nd, : currently , full texts are available in econstor).
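The fallback identification by title and author described above can be illustrated with a small sketch. The function, normalization steps and similarity threshold are assumptions for illustration, not the *metrics crawler's actual matching logic; as the text notes, this kind of matching can never be 100% sure.

```python
# Minimal sketch of fuzzy title matching for preprints without a DOI.
# Normalization and threshold are illustrative assumptions.
import difflib
import re

def normalize(text):
    """Lowercase, strip punctuation, collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", " ", text.lower())).strip()

def probable_match(title_a, title_b, threshold=0.9):
    """True if two titles are similar enough to treat as the same work.
    This gives only a probable, never a certain, identification."""
    ratio = difflib.SequenceMatcher(
        None, normalize(title_a), normalize(title_b)).ratio()
    return ratio >= threshold
```

In practice such a comparison would be combined with author names and further metadata to reduce false positives.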
https://en.wikipedia.org/w/api.php?action=query&list=search&srlimit= &srsearch=% hdl: % f% &srwhat=text&srprop&srinfo=totalhits&srenablerewrites= &format=json (search via api in the english wikipedia)

another problem is that, at the time of this writing, the *metrics crawler is not continuously operated; our analysis is therefore based on a data dump of social media mentions from spring to early . since one of the major benefits of altmetrics is that they can be obtained much faster and are more recent than classical citation-based metrics, the value of continuing to integrate this static and steadily aging dataset into econstor landing pages is reduced. hence, we are looking for more recent and regularly updated social media data that could serve as a 'real-time' basis for monitoring social media usage in economics. as a consequence, we are currently looking for: a) an institution to commit itself to running the *metrics crawler, and b) a more active social media usage in the sciences of economics and business studies.

references

[lemke ] lemke, steffen; mehrazar, maryam; mazarakis, athanasios; peters, isabella ( ): are there different types of online research impact?, in: building & sustaining an ethical future with emerging technology. proceedings of the st annual meeting, vancouver, canada, – november , isbn - - - - , association for information science and technology (asis&t), silver spring, pp. - , http://hdl.handle.net/ /

[lemke ] lemke, steffen; mehrazar, maryam; mazarakis, athanasios; peters, isabella ( ): "when you use social media you are not working": barriers for the use of metrics in social sciences, frontiers in research metrics and analytics, issn - , vol. , iss. [article] , pp. - , http://dx.doi.org/ . /frma. .

[mehrazar ] mehrazar, maryam; kling, christoph carl; lemke, steffen; mazarakis, athanasios; peters, isabella ( ): can we count on social media metrics?
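The MediaWiki search request shown above can be built programmatically. The sketch below constructs such an API URL; the handle prefix argument is a placeholder, since the concrete econstor prefix is not reproduced here.

```python
# Build a MediaWiki full-text search URL for pages mentioning "hdl:<prefix>",
# analogous to the English-Wikipedia handle search linked in the text.
from urllib.parse import urlencode

def wikipedia_handle_search_url(handle_prefix, lang="en", limit=50):
    """Return an api.php search URL for the given Wikipedia language edition."""
    params = {
        "action": "query",
        "list": "search",
        "srsearch": '"hdl:%s"' % handle_prefix,  # quoted phrase search
        "srwhat": "text",
        "srlimit": limit,
        "format": "json",
    }
    return "https://%s.wikipedia.org/w/api.php?%s" % (lang, urlencode(params))
```

The returned JSON contains a `query.search` list of matching pages plus a total hit count, which is what the quick check above counted.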
first insights into the active scholarly use of social media, websci ' : th acm conference on web science, may – , , amsterdam, netherlands. acm, new york, ny, usa, article , pages, https://doi.org/ . / .

[metrics ] einbindung von *metrics in econstor (integration of *metrics into econstor), https://metrics-project.net/downloads/ - - -econstor-metrics-abschluss-ws-sub-g%c %b .pptx

[nuredini ] nuredini, kaltrina; peters, isabella ( ): enriching the knowledge of altmetrics studies by exploring social media metrics for economic and business studies journals, proceedings of the st international conference on science and technology indicators (sti conference ), valència (spain), september - , , http://hdl.handle.net/ /

[or ] relevance and challenges of altmetrics for repositories - answers from the *metrics project, https://www.conftool.net/or /index.php/paper-p a- orth% cweiland_b.pdf?page=downloadpaper&filename=paper-p a- orth% cweiland_b.pdf&form_id= &form_index= &form_version=final

[social media registry] social media registry - current status of social media platforms and *metrics, https://docs.google.com/spreadsheets/d/ oals kxtmml naf shxh ctmone q efhtzmgpinv /edit?usp=sharing

[thelwall ] thelwall, m.; haustein, s.; larivière, v.; sugimoto, c.r. ( ): do altmetrics work? twitter and ten other social web services, plos one ( ): e , http://dx.doi.org/ . /journal.pone.

[wilsdon ] wilsdon, james et al. ( ): next-generation metrics: responsible metrics and evaluation for open science. report of the european commission expert group on altmetrics, isbn - - - - , http://dx.doi.org/ . /

integrating altmetrics data into econstor

th century press archives: data donation to wikidata

zbw is donating a large open dataset from the th century press archives to wikidata, in order to make it better accessible to various scientific disciplines such as contemporary, economic and business history, media and information science, as well as to journalists, teachers, students, and the general public.
the th century press archives (pm ) is a large public newspaper clippings archive, extracted from more than different sources published in germany and all over the world, covering roughly a full century ( - ). the clippings are organized in thematic folders about persons, companies and institutions, general subjects, and wares. during a project originally funded by the german research foundation (dfg), the material up to has been digitized. , folders with more than two million pages up to are freely accessible online. the fine-grained thematic access and the public nature of the archives make it, to the best of our knowledge, unique across the world (more information on wikipedia) and an essential research data resource for some of the disciplines mentioned above. the data donation does not only mean that zbw has assigned a cc license to all pm metadata, which makes it compatible with wikidata. (due to intellectual property rights, only the metadata can be licensed by zbw - all legal rights to the press articles themselves remain with their original creators.) the donation also includes investing a substantial amount of working time (planned for two years) devoted to the integration of this data into wikidata. here we want to share our experiences regarding the integration of the persons archive metadata.

folders from the persons archive, in (credit: max-michael wannags)

linking our folders to wikidata

the essential bit for linking the digitized folders was in place before the project even started: an external identifier property (pm folder id, p ), proposed by an administrator of the german wikipedia in order to link to pm person and company folders. we participated in the property proposal discussion and made sure that the links did not have to reference our legacy coldfusion application.
instead, we created a "partial redirect" on the purl.org service (maintained formerly by oclc, now by the internet archive) for persistent urls, which may redirect to another application on another server in the future. secondly, the identifier and url format was extended to include subject and ware folders, which are defined by a combination of two keys, one for the country and another for the topic. the format of the links in wikidata is controlled by a regular expression which covers all four archives mentioned above. that works pretty well - very few format errors have occurred so far - and it relieved us from creating four different archive-specific properties. shortly after the property creation, magnus manske, the author of the original mediawiki software and lots of related tools, scraped our web site and created a mix-n-match catalog from it. during the following two years, more than wikidata users contributed to matching wikidata items for humans to pm folder ids.

deriving links from gnd, for a start

many of the pm person and company folders were already identified by an identifier from the german integrated authority file (gnd). so, our first step was creating pm links for all wikidata items which had matching gnd ids. for all these items and folders, disambiguation had already taken place, and we could safely add all these links automatically.

infrastructure: pm endpoint, federated queries and quickstatements

to make this work, we relied heavily on linked data technologies. a pm sparql endpoint had already been set up for our contribution to coding da vinci (a "kultur-hackathon" in germany). almost all automated changes we made to wikidata are based on federated queries on our own endpoint, reaching out to the wikidata endpoint, or vice versa, from wikidata to pm . in the latter case, the external endpoint has to be registered at wikidata. wikidata maintains a help page for this type of query.
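The general shape of such a federated query can be sketched as follows. The property identifiers and variable names are placeholders, not the project's actual query files; what matters is the `SERVICE` clause that lets one endpoint reach out to the public Wikidata endpoint.

```python
# Illustrative shape of a federated SPARQL query: local triples joined
# against the public Wikidata endpoint via a SERVICE clause.
def federated_query(local_id_property, wikidata_property):
    """Build a query string; both property arguments are placeholders."""
    return """
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
SELECT ?item ?localId WHERE {
  # local endpoint: entries carrying our identifier
  ?entry <%s> ?localId .
  # reach out to the public Wikidata endpoint for matching items
  SERVICE <https://query.wikidata.org/sparql> {
    ?item wdt:%s ?localId .
  }
}
""" % (local_id_property, wikidata_property)
```

Run against the local endpoint, such a query returns only identifiers present on both sides, which is exactly what makes skipping already-linked items (as described below) straightforward.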
for our purposes, federated queries allow extracting current data from both endpoints. in the case of the above-mentioned missing_pm _id_via_gnd.rq query, this way we can skip all items where a link to pm already exists. within the query itself, we create a statement string which we can feed into the quickstatements tool. that includes, for every single statement, a reference to pm with a link to the actual folder, so that the provenance of these statements is always clear and traceable. via script, a statement file is extracted and saved with a timestamp. data imports via quickstatements are executed in batch mode, and an activity log keeps track of all data imports and other activities related to pm .

creating missing items

after about % of the person folders which include free documents had been matched in mix-n-match, and after some efforts to discover more pre-existing wikidata items, we decided to create the missing person items, again via quickstatements input. for better disambiguation of the newly created items, we used the description field in wikidata by importing the content of the free-text "occupation" field in pm . (here a rather minimal example of such an item created from pm metadata.) thus, all pm person folders which have digitized content were linked to wikidata in june .

supplementing wikidata with pm metadata

a second part of the integration of pm metadata into wikidata was the import of missing property values to the according items. this comprised simple facts like "date of birth/death" and occupations such as "economist", "business economist", "social scientist" or "earth scientist", which we could derive from the "field of activity" in pm , up to relations between existing items, e.g. a family member to the according family, or a board member to the according company. a few other source properties have been postponed, because alternative solutions exist, and the best one may depend on the intended use in future applications.
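The occupation derivation just described amounts to a lookup table from "field of activity" strings to Wikidata items. The sketch below illustrates that idea; the item and property names are symbolic placeholders, not the real Wikidata identifiers or the project's actual mapping.

```python
# Sketch of a lookup table from PM "field of activity" strings to
# Wikidata occupation items. Q/P names are placeholders for illustration.
OCCUPATION_LOOKUP = {
    "economist": "Q_ECONOMIST",
    "business economist": "Q_BUSINESS_ECONOMIST",
    "social scientist": "Q_SOCIAL_SCIENTIST",
    "earth scientist": "Q_EARTH_SCIENTIST",
}

def occupation_statements(qid, fields_of_activity):
    """Yield (item, property, value) rows for every mappable field of
    activity; unmapped fields are skipped rather than guessed."""
    for field in fields_of_activity:
        target = OCCUPATION_LOOKUP.get(field.strip().lower())
        if target:
            # symbolic property name; the real occupation property would go here
            yield (qid, "P_OCCUPATION", target)
```

Rows like these, extended with a source reference, are what gets serialized into QuickStatements input.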
the steps of this enrichment process, and links to the code used - including the automatic generation of references - are online, too.

complex statement added to wikidata item for friedrich krupp ag

again, we used federated queries. often the target of a wikidata property is an item in itself. sometimes we could get this directly via the target item's pm folder id (families, companies); sometimes we had to create lookup tables. for the latter, we used "values" clauses in the query (in the case of "occupation"), or (in the case of "country of citizenship") we had to match countries from our internal classification in advance - a process for which we use openrefine. other than pm folder ids, which we avoided adding when folders do not contain digitized content, we added the metadata to all items which were linked to pm , and we intend to repeat this process periodically when more items (e.g., companies) are identified by pm folder ids. in some housekeeping activity, we also periodically add the numbers of documents (online and total) and the exact folder names as qualifiers to newly emerging pm links in items.

results of the data donation so far

with all person folders with digitized documents linked to wikidata, the data donation of the person folders metadata is completed. besides the folder links, which have already been used heavily to create links in wikipedia articles, we have got:

- more than statements which are sourced in pm (from "date of birth" to the track gauge of a brazilian railway line)
- more than items for which the pm id is the only external identifier

the data donation will be presented at wikidatacon in berlin ( .- . . ) as a "birthday present" on the occasion of wikidata's seventh birthday. zbw will further keep the digital content available, amended with a static landing page for every folder, which will also serve as the source link for the metadata we have integrated into wikidata.
but in future, wikidata will be the primary access path to our data, providing further metadata in multiple languages and links to a plethora of other external sources. and best of all, unlike with our current application, everybody will be able to enhance this open data through the interactive tools and data interfaces provided by wikidata.

participate in wikiproject th century press archives

for the topics, wares and companies archives, there is still a long way to go. the best structure for representing these archives and their folders - often defined by the combination of a country within a geographical hierarchy with a subject heading in a deeply nested topic classification - has yet to be figured out. existing items have to be matched, and lots of other work is to be done. therefore, we have created the wikiproject th century press archives in wikidata to keep track of discussions and decisions, and to create a focal point for participation. everybody on wikidata is invited to participate - or just kibitz. it could be challenging, particularly for information scientists and people interested in historic systems for the organization of knowledge about the whole world, to take part in the mapping of one of these systems to the emerging wikidata knowledge graph.

linked data | open data

zbw's contribution to "coding da vinci": dossiers about persons and companies from th century press archives

on the th and th of october, the kick-off for the "kultur-hackathon" coding da vinci is held in mainz, germany, organized this time by glam institutions from the rhein-main area: "for five weeks, devoted fans of culture and hacking alike will prototype, code and design to make open cultural data come alive." new software applications are enabled by free and open data. for the first time, zbw is among the data providers. it contributes the person and company dossiers of the th century press archive.
for about a hundred years, the predecessor organizations of zbw in kiel and hamburg collected press clippings, business reports and other material about a wide range of political, economic and social topics, about persons, organizations, wares, events and general subjects. during a project funded by the german research foundation (dfg), the documents published up to (about , million pages) were digitized and made publicly accessible with the according metadata, until recently solely in the "pressemappe . jahrhundert" (pm ) web application. additionally, the dossiers - for example about mahatma gandhi or the hamburg-bremer afrika linie - can be loaded into a web viewer. as a first step to open up this unique source of data for various communities, zbw has decided to put the complete pm metadata* under a cc-zero license, which allows free reuse in all contexts. for our coding da vinci contribution, we have prepared all person and company dossiers which already contain documents. the dossiers are interlinked among each other. controlled vocabularies (for, e.g., "country" or "field of activity") provide multi-dimensional access to the data. most of the persons and a good share of the organizations were linked to gnd identifiers. as a starter, we had mapped dossiers to wikidata according to existing gnd ids. that allows queries for pm dossiers to run completely on wikidata, making use of all the good stuff there. an example query shows the birth places of pm economists on a map, enriched with images from wikimedia commons. the initial mapping was much extended by fantastic semi-automatic and manual mapping efforts of the wikidata community. so currently more than % of the dossiers about - often rather prominent - pm persons are not only linked to wikidata, but also connected to wikipedia pages. that offers great opportunities for mash-ups with further data sources, and we are looking forward to what the "coding da vinci" crowd may make out of these opportunities.
technically, the data has been converted from an internal intermediate format to still quite experimental rdf and loaded into a sparql endpoint. there it was enriched with data from wikidata and extracted with a construct query. we have decided to transform it to json-ld for publication (following practices recommended by our hbz colleagues). so developers can use the data as "plain old json", with the plethora of web tools available for this, while linked data enthusiasts can utilize sophisticated semantic web tools by applying the provided json-ld context. in order to make the dataset discoverable and reusable for future research, we published it persistently at zenodo.org. with it, we provide examples and data documentation. a github repository gives you additional code examples and a way to address issues and suggestions.

* for the scanned documents, the legal regulations apply - zbw cannot assign licenses here.

pressemappe . jahrhundert | linked data

wikidata as authority linking hub: connecting repec and gnd researcher identifiers

in the econbiz portal for publications in economics, we have data from different sources. in some of these sources, most notably zbw's "econis" bibliographical database, authors are disambiguated by identifiers of the integrated authority file (gnd) - in total more than , . data stemming from "research papers in economics" (repec) contains another identifier: repec authors can register themselves in the repec author service (ras) and claim their papers. this data is used for various rankings of authors and, indirectly, of institutions in economics, which provides a big incentive for authors - about , have signed into ras - to keep both their article claims and personal data up to date. while gnd is well known and linked to many other authorities, ras had no links to any other researcher identifier system.
thus, until recently, the author identifiers were disconnected, which precluded the possibility of displaying all publications of an author on a portal page. to overcome that limitation, colleagues at zbw have matched a good , authors with ras and gnd ids by their publications (see details here). making that pre-existing mapping maintainable and extensible, however, would have meant setting up some custom editing interface, would have required storage and operating resources, and wouldn't easily have been made publicly accessible. in a previous article, we described the opportunities offered by wikidata. now we made use of it.
initial situation in wikidata

economists were, at the start of this small project in april , already well represented among the . million persons in wikidata - though the precise extent is difficult to estimate. furthermore, properties for linking gnd and repec author identifiers to wikidata items were already in place:

- p "gnd id", in ~ , items
- p "repec short-id" (further-on: ras id), in ~ , items
- both properties in ~ items

for both properties, "single value" and "distinct values" constraints are defined, so that (with rare exceptions) a : relation between the authority entry and the wikidata item should exist. that, in turn, means that a : relation between both authority entries can be assumed. the relative amounts of ids in econbiz and wikidata are illustrated by the following image.
person identifiers in wikidata and econbiz, with unknown overlap at the beginning of the project (the number of . million persons in econbiz is a very rough estimate, because most names – outside gnd and ras – are not disambiguated)

since many economists have wikipedia pages, from which wikidata items have been created routinely, the first task was finding these items and adding gnd and/or ras identifiers to them. the second task was adding items for persons which did not already exist in wikidata.

adding mapping-derived identifiers to wikidata items

for items already identified by either gnd or ras, the reciprocal identifiers were added automatically: a federated sparql query on the mapping and the public wikidata endpoint retrieved the items and the missing ids. a script transformed that into input for wikidata's quickstatements tool, which allows adding statements (as well as new items) to wikidata. the tool takes csv-formatted input via a web form and applies it in batch to the live dataset.

import statements for quickstatements. the first input line adds the ras id "pan " to the item for the economist james andreoni. the rest of the input line creates a reference to zbw's mapping for this statement and so allows tracking its provenance in wikidata.

that step resulted in added gnd ids to items identified by ras id and, in the reverse direction, added ras ids to items identified by gnd id. for the future, it is expected that tools like wdmapper will facilitate such operations.

identifying more wikidata items

obviously, the previous step left out the already existing economists in wikidata which up to then had neither a gnd nor a ras id. therefore, these items had to be identified by adding one of the identifiers. a semi-automatic approach was applied to that end, starting with the "most important" persons from the repec and econbiz datasets.
that was extended in an automatic step, taking advantage of existing viaf identifiers (a step which could also have been the first one). for repec, the "top economists" ranking page (~ , authors) was scraped and cross-linked to a custom-created basic rdf dataset of the repec authors. the result was transformed to an input file for wikidata's mix'n'match tool, which had been developed for the alignment of external catalogs with wikidata. the tool takes a simple csv file, consisting of a name, a description and an identifier, and tries to match automatically against wikidata labels. in a subsequent interactive step, it allows confirming or removing every match. if confirmed, the identifier is automatically added as a value to the according property of the matched wikidata item. for gnd, all authors with more than publications in econbiz were selected in a custom sparql endpoint. just like the "repec top" matchset, a "gnd economists (de)" matchset with ~ , gnd ids, names and descriptions was loaded into mix'n'match and aligned with wikidata. becoming more familiar with the wikidata-related tools, policies and procedures, we exploited existing viaf property values as another opportunity for seeding gnd ids in wikidata. in a federated sparql query on a custom viaf endpoint and the public wikidata endpoint, about , missing gnd ids were determined and added to wikidata items which had been identified by viaf id. after each of these steps, the first task – adding mapping-derived gnd or ras identifiers – was repeated. that resulted in wikidata items carrying both ids. since zbw's author mapping is based on at least matching publications, the alignment of high-frequency resp. highly-ranked gnd and repec authors made it highly probable that authors already present in wikidata were identified in the previous steps. that reduced the danger of creating duplicates in the following task.
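Producing the mix'n'match input described above is essentially a matter of serializing (identifier, name, description) triples. The sketch below shows that step; the sample entry and the exact column layout are illustrative assumptions, not the project's actual import files.

```python
# Sketch of building a mix'n'match-style import file from a list of
# external catalog entries: one row per entry with identifier, name
# and a short free-text description for disambiguation.
import csv
import io

def build_matchset_csv(entries):
    """entries: iterable of (identifier, name, description) tuples.
    Returns the CSV content as a string."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for identifier, name, description in entries:
        writer.writerow([identifier, name, description])
    return buf.getvalue()
```

After the interactive confirm/remove pass in mix'n'match, the confirmed identifiers end up as property values on the matched Wikidata items, as the text describes.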
creating new wikidata items from the mapped authorities for the rest of the authors in the mapping, new wikidata items were created. this task was again carried out with the quickstatements tool, for which the input statements were created by a script, based on a sparql query on the aforementioned endpoints for repec authors and gnd entries. the input statements were derived from both authorities, in the following fashion:
- the label (name of the person) was taken from gnd
- the occupation “economist” was derived from repec (and in particular from the occurrence in its “top economists” list)
- gender and date of birth/death were taken from gnd (if available)
- the english description was a concatenated string “economist” plus the affiliations from repec
- the german description was a concatenated string “wirtschaftswissenschaftler/in” plus the affiliations from gnd
the use of wikidata’s description field for affiliations was a stopgap: in the absence of an existing mapping of repec (and mostly also gnd) organizations to wikidata, it allows for better identification of the individual researchers. in a later step, when corresponding organization/institute items exist in wikidata and mappings are in place, the items for authors can be supplemented step by step with formal “affiliation” (p ) statements. according to wikidata’s policy, an extensive reference to the source was added for each statement in the synthesized new wikidata item. the creation of items in an automated fashion involves the danger of duplicates. however, such duplicates turned up only in very few cases. they were resolved by merging items, which technically is very easy in wikidata. interestingly, a number of “fake duplicates” indeed revealed multifarious quality issues, in wikidata and in both of the authority files, which were subsequently resolved as well. ... and even more new items for economists ... 
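the synthesis of a new item from the two authorities, as listed above, can be sketched in quickstatements v1 syntax (CREATE/LAST with Len/Dde labels and descriptions). the property and item numbers (P106 = occupation, Q188094 = economist, P227 = GND ID, P2428 = RePEc Short-ID) and all sample data are assumptions to verify; zbw's actual script is not reproduced here.

```python
# Sketch: synthesize QuickStatements input for a brand-new Wikidata item,
# combining fields from GND and RePEc as described in the list above.
# Assumed (verify against Wikidata): P106 = occupation, Q188094 = economist,
# P227 = GND ID, P2428 = RePEc Short-ID. Sample data is hypothetical.

def new_item_statements(name, gnd_id, ras_id, affil_en, affil_de):
    stmts = [
        "CREATE",
        f'LAST\tLen\t"{name}"',                                   # label from GND
        f'LAST\tDen\t"economist, {affil_en}"',                    # EN description + RePEc affiliation
        f'LAST\tDde\t"Wirtschaftswissenschaftler/in, {affil_de}"',# DE description + GND affiliation
        "LAST\tP106\tQ188094",                                    # occupation: economist
        f'LAST\tP227\t"{gnd_id}"',                                # GND ID
        f'LAST\tP2428\t"{ras_id}"',                               # RePEc Short-ID
    ]
    return "\n".join(stmts)

print(new_item_statements("Jane Doe", "123456789", "pdo1",
                          "Example University", "Beispiel-Universität"))
```

gender and life dates would be appended in the same LAST-statement style when gnd provides them.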
our good experiences so far made us bolder, and we considered creating wikidata items for the still missing "top economists" (according to repec). for item creation, one aspect we had to consider was compliance with wikidata's notability policy. this policy is much more relaxed than the policies of the large wikipedias. it states as one criterion sufficient for item creation that the item "refers to an instance of a clearly identifiable conceptual or material entity. the entity must be notable, in the sense that it can be described using serious and publicly available references." there seems to be some consensus in the community that authority files such as gnd or repec authors count as "serious and publicly available references". this should hold even more for a bibliometrically ranked subset of these external identifiers. we thus inserted another , wikidata items for the rest of the repec top % list. additionally - to mitigate the inherent gender bias such selections often bear - we imported all missing researchers from repec's "top % female economists" list. again, we added reference statements to repec which allow wikidata users to keep track of the source of the information. results the immediate result of the project was: all of the pairs of identifiers from the initial mapping by zbw are now incorporated in wikidata items; wikidata items in addition to these also have both identifiers (created by individual wikidata editors, or the efforts described above). (all numbers in this section as of - - .) while that still is only a beginning, given the total number of authors represented in econbiz, it is a significant share of the “most important” ones: top % ras and frequent gnd in econbiz (> publications). 
“wikidata economists” is a rough estimate of the number of persons in the field of economics (twice the number of those with the explicit occupation “economist”). while the top repec economists are now completely covered by wikidata, for gnd the overlap has improved significantly during the last year. this occurred partly as a side-effect of the efforts described above, and partly through the genuine growth of wikidata in regard to the number of items as well as the increasing density of external identifiers. here are the current percentages, compared to those one year earlier, which were presented in our previous article: large improvements in the coverage of the most frequent authors by wikidata (query, result). while the improvements in absolute numbers are impressive, too - the number of gnd ids for all econbiz persons (with at least one publication) has increased from , to , - the image demonstrates that particularly the coverage for our most frequent authors has risen sharply. the addition of all repec top economists has created further opportunities for matching these items from the afore-mentioned gnd mix-n-match set, which will again add to the mapping. once all matching and duplicate checking is done, we may reconsider the option of adding the remaining frequent gnd persons (> publications in econbiz) automatically to wikidata. the mapping data can be retrieved by everyone, via sparql queries, by specialized tools such as wdmapper, or as part of the wikidata dumps. what is more, it can be extended by everybody – either as a by-product of individual edits adding identifiers to persons in wikidata, or by a directed approach. for directed extensions, any subset can be used as a starting point: either a new version of the above-mentioned ranking, or other rankings also published by repec, covering in particular female economists, or economists from e.g. latin america; or all identifiers from a particular institution, either derived from gnd or ras. 
the results of all such efforts are available at once and add up continuously. yet, the benefits of using wikidata cannot be reduced to the publication and maintenance of the mapping itself. in many cases it offers much more than just a linking point for two identifiers:
- links to wikipedia pages about the authors, possibly in multiple languages
- rich data about the authors in defined formats, sometimes with explicit provenance information
- access to pictures etc. from wikimedia commons, or quotations from wikiquote
- links to multiple other authorities
as an example of the latter, the ras identifiers in wikidata are already mapped to viaf and loc authority ids (while orcid, with ids, is still remarkably low). at the same time, these repec-connected items were linked to english, german and spanish wikipedia pages which provide rich human-readable information. in turn, when we take the gnd persons in econbiz as a starting point, roughly , are already represented in wikidata. besides large amounts of other identifiers, the corresponding wikidata items offer more than , links to german and more than , links to english wikipedia pages (query). for zbw, “releasing” the dataset into wikidata as a trustworthy and sustainable public database not only saves the “technical” costs of data ownership (programming, storage, operation, access and maintenance). responsibility for - and fun from - extending, amending and keeping the dataset current can be shared with many other interested parties and individuals. wikidata for authorities new version of multi-lingual jel classification published in lod the journal of economic literature classification scheme (jel) was created and is maintained by the american economic association. the aea provides this widely used resource freely for scholarly purposes. 
thanks to andré davids (ku leuven), who has translated the originally english-only labels of the classification to french, spanish and german, we provide a multi-lingual version of jel. its latest version (as of - ) is published as rdfa and as rdf download files. these formats and translations are provided "as is" and are not authorized by aea. to make changes in jel more easily traceable, we have created lists of inserted and removed jel classes in the context of the skos-history project. economists in wikidata: opportunities of authority linking wikidata is a large database, which connects all of the roughly wikipedia projects. besides interlinking all wikipedia pages in different languages about a specific item – e.g., a person – it also connects to more than different sources of authority information. the linking is achieved by an „authority control“ class of wikidata properties. the values of these properties are identifiers, which unambiguously identify the wikidata item in external, web-accessible databases. each property definition includes a uri pattern (called „formatter url“). when the identifier value is inserted into the uri pattern, the resulting uri can be used to look up the authority entry. the resulting uri may point to a linked data resource - as is the case with the gnd id property. this, on the one hand, provides a light-weight and robust mechanism to create links in the web of data. on the other hand, these links can be exploited by every application which is driven by one of the authorities to provide additional data: links to wikipedia pages in multiple languages, images, life dates, nationality and affiliations of the persons in question, and much more. 
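the formatter-url mechanism is simple enough to sketch directly. wikidata's formatter urls use "$1" as the placeholder for the identifier value; the gnd pattern below reflects what the property documents, and the repec pattern is an assumption to verify.

```python
# Sketch: turn a Wikidata authority identifier into a resolvable URI via
# the property's "formatter URL" ($1 is Wikidata's placeholder convention).
# The GND pattern matches the property's documentation; the RePEc pattern
# is an assumption to verify on the property page.

FORMATTER_URLS = {
    "GND ID": "https://d-nb.info/gnd/$1",
    "RePEc Short-ID": "https://authors.repec.org/pro/$1",  # assumed pattern
}

def authority_uri(prop_label, identifier):
    """Insert the identifier value into the property's URI pattern."""
    return FORMATTER_URLS[prop_label].replace("$1", identifier)

print(authority_uri("GND ID", "118529579"))
```

for gnd, the resulting uri points to a linked data resource, so an application can dereference it directly for additional data.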
wikidata item for the indian economist bina agarwal, visualized via the sqid browser in , a group of students under the guidance of jakob voß published a handbook on "normdaten in wikidata" (in german), describing the structures and the practical editing capabilities of the standard wikidata user interface. the experiment described here focuses on persons from the subject domain of economics. it uses the authority identifiers of the about , economists referenced by their gnd id as creators, contributors or subjects of books, articles and working papers in zbw's economics search portal econbiz. these gnd ids were obtained from a prototype of the upcoming econbiz research dataset (ebds). for , of these persons, or . %, a wikidata item is connected by gnd. if we consider the frequent (more than publications) and the very frequent (more than publications) authors in econbiz, the coverage increases significantly: economics-related persons in econbiz (datasets: ebds as of - - ; wikidata as of - - ; query, result):
number of publications | total | in wikidata | percentage
> | , | , | . %
> | , | , | . %
> | , | | . %
these are numbers "out of the box" - ready-made opportunities to link out from existing metadata in econbiz and to enrich user interfaces with biographical data from wikidata/wikipedia, without any additional effort to improve the coverage on either the econbiz or the wikidata side. however: we can safely assume that many of the econbiz authors, particularly the high-frequency authors, and even more of the persons who are subjects of publications, are "notable" according to the wikidata notability guidelines. probably, their items exist and are just missing the corresponding gnd property. to check this assumption, we take a closer look at the wikidata persons which have the occupation "economist" (most wikidata properties accept other wikidata items - instead of arbitrary strings - as values, which allows for exact queries and is indispensable in a multilingual environment). 
of these approximately , persons, less than % have a gnd id property! even if we restrict that to the , "internationally recognized economists" (which we define here as having wikipedia pages in three or more different languages), almost half of them lack a gnd id property. when we compare that with the coverage by viaf ids, more than % of all and % of the internationally recognized wikidata economists are linked to viaf (sparql lab live query). therefore, for a whole lot of the persons we have looked at here, we can take it for granted that the person exists in wikidata as well as in the gnd, and the only reason for the lack of a gnd id is that nobody has added it to wikidata yet. as an aside: the information about the occupation of persons is to be taken as a very rough approximation: some wikidata persons were economists by education or at some point in their career, but are famous now for other reasons (examples include vladimir putin or the president of liberia, ellen johnson sirleaf). on the other hand, econbiz authors known to wikidata are often qualified not as economist, but as university teacher, politician, historian or sociologist. nevertheless, their work was deemed relevant for the broad field of economics, and the conclusions drawn about the "economists" in wikidata and gnd will hold for them, too: there are lots of opportunities for linking already well-defined items. what can we gain? the screenshot above demonstrates that not only data about the person herself, but also her affiliations, awards received, and possibly many other details can be obtained. the "identifiers" box on the bottom right shows authority entries. besides the gnd id, which served as an entry point for us, there are links to viaf and other national libraries' authorities, but also to non-library identifier systems like isni and orcid. in total, wikidata comprises more than million authority links, more than million of these for persons. 
when we take a closer look at the , econbiz persons which we can look up by their gnd id in wikidata, an astonishing variety of authorities is addressed from there: different authorities are linked from the subset, ranging from "almost complete" (viaf, library of congress name authority file) to - in the given context - quite exotic authorities of, e.g., members of the belgian senate, chess players or swedish olympic committee athletes. some of these entries link to carefully crafted biographies, sometimes behind a paywall (notable names database, oxford dictionary of national biography, munzinger archiv, sächsische biographie, dizionario biografico degli italiani), or to free text resources (project gutenberg authors). links to the world of museums and archives are also provided, from the getty union list of artist names to specific links into the british museum or the musée d'orsay collections. a particular use can be made of properties which express the prominence of the persons concerned: nobel prize ids, for example, definitely should be linked to corresponding gnd ids (and indeed, they are). but also ted speakers or persons with an entry in the munzinger archive (a famous and long-established german biographical service) can be assumed to have gnd ids. that opens a road to a very focused improvement of the data quality: a list of persons with those properties, restricted to the subject field (e.g., "occupation economist"), can be easily generated from wikidata's sparql query service. in wikidata, it is very easy to add the missing id entries discovered during such cross-checks interactively. and if it turns out that a "very important" person from the field is missing from the gnd altogether, that is an all-the-more valuable opportunity to improve the data quality at the source. how can we start improving? as a proof of concept, and as a practical starting point, we have developed a micro-application for adding missing authority property values. 
it consists of two sparql lab scripts: missing_property creates a list of wikidata persons which have a certain authority property (by default: ted speaker id) and lack another one (by default: gnd id). for each entry in the list, a link to an application is created, which looks up the name in the corresponding authority file (by default: search_person, for a broad yet ranked full-text search of person names in gnd). if we can identify the person in the gnd list, we can copy its gnd id, return to the first list, click on the link to the wikidata item of the person and add the property value manually through wikidata's standard edit interface. (wikidata is open and welcomes such contributions!) it takes effect within a few seconds - when we reload the missing_property list, the improved item should not show up any more. instead of identifying the most prominent economics-related persons in wikidata, the other direction works too: while most of the gnd-identified persons are related to only one or two works, as the according statistics show, a few are related to a disproportionately large number of publications. of the , persons related to more than publications, fewer than are missing links to wikidata by their gnd id. by adding this property (for the vast majority of these persons, a wikidata item should already exist), we could enrich, at a rough estimate, more than , person links in econbiz publications. another micro-application demonstrates how the work could be organized: the list of econbiz persons by descending publication count provides "search in wikidata" links (functional on a custom endpoint): each link triggers a query which looks up all name variants in gnd and executes a search for these names in a full-text indexed wikidata set, bringing up a ranked list of suggestions (example with the gnd id of john h. dunning). again, the gnd id can be added - manually but straightforwardly - to an identified wikidata item. 
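the kind of query behind a missing_property script can be sketched as below. this is not the actual sparql lab code: the query is a generic "has property A, lacks property B" pattern, P227 (gnd id) is taken from the text, and the other property number is a placeholder to be looked up on wikidata.

```python
# Sketch: build the generic SPARQL pattern behind a "missing_property"
# list - items that carry one authority property but lack another.
# P227 = GND ID; HAVE_PROP is a placeholder P-number to look up on
# Wikidata (e.g. the TED speaker ID property).

HAVE_PROP = "P9999"  # placeholder: the property the items already carry
MISS_PROP = "P227"   # GND ID

def missing_property_query(have_prop, miss_prop, limit=100):
    return f"""\
SELECT ?item ?itemLabel ?haveValue WHERE {{
  ?item wdt:{have_prop} ?haveValue .
  FILTER NOT EXISTS {{ ?item wdt:{miss_prop} ?missValue . }}
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
LIMIT {limit}"""

print(missing_property_query(HAVE_PROP, MISS_PROP))
```

each result row would then be turned into a link to the gnd name search, as described above, so the missing id can be added interactively.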
while we cannot expect to significantly reduce the quantitative gap between the , persons in econbiz and the , of them linked to wikidata by such manual efforts, we surely can improve it step by step for the most prominent persons. this empowers applications to show biographical background links to wikipedia where our users most probably expect them. other tools for creating authority links and more automated approaches will be covered in further blog posts. and the great thing about wikidata is: all efforts add up - while we are doing modest improvements in our field of interest, many others do the same, so wikidata already features an impressive overall amount of authority links. ps. all queries used in this analysis are published at github. the public wikidata endpoint cannot be used for research involving large datasets due to its limitations (in particular the second timeout, the preclusion of the "service" clause for federated queries, and the lack of full-text search). therefore, we’ve loaded the wikidata dataset (along with others) into custom apache fuseki endpoints on a performant machine. even there, a „power query“ like the one on the number of all authority links in wikidata takes about minutes. therefore, we publish the corresponding result files in the github repository alongside the queries. integrating a research data repository with established research practices authors: timo borst, konstantin ott in recent years, repositories for managing research data have emerged, which are supposed to help researchers to upload, describe, distribute and share their data. 
to promote and foster the distribution of research data in the light of paradigms like open science and open access, these repositories are normally implemented and hosted as stand-alone applications, meaning that they offer a web interface for manually uploading the data, and a presentation interface for browsing, searching and accessing the data. sometimes, the first component (interface for uploading the data) is substituted or complemented by a submission interface from another application. e.g., in dataverse or in ckan data is submitted from remote third-party applications by means of data deposit apis [ ]. however the upload of data is organized and eventually embedded into a publishing framework (data either as a supplement of a journal article, or as a stand-alone research output subject to review and release as part of a ‘data journal’), it definitely means that this data is supposed to be made publicly available, which is often reflected by policies and guidelines for data deposit. in clear contrast to this publishing model, the vast majority of current research data however is not supposed to be published, at least in terms of scientific publications. several studies and surveys on research data management indicate that at least in the social sciences there is a strong tendency and practice to process and share data amongst peers in a local and protected environment (often with several local copies on different personal devices), before eventually uploading and disseminating derivatives from this data to a publicly accessible repository. e.g., according to a survey among austrian researchers, the portion of researchers agreeing to share their data either on request or among colleagues is % resp. %, while the agreement to share on a disciplinary repository is only % [ ]. 
and in another survey among researchers from a local university and cooperation partners, almost % preferred an institutional local archive, while only % agreed on a national or international archive. even if there is data planned to be published via a publicly accessible repository, it will first be stored and processed in a protected environment, carefully shared with peers (project members, institutional colleagues, sponsors) and often subject to access restrictions – in other words, it is used before being published. with this situation in mind, we designed and developed a central research data repository as part of a funded project called ‘sowidatanet’ (sdn - network of data from social sciences and economics) [ ]. the overall goal of the project is to develop and establish a national web infrastructure for archiving and managing research data in the social sciences, particularly quantitative (statistical) data from surveys. it aims at smaller institutional research groups or teams, which often lack institutional support or infrastructure for managing their research data. as a front-end application, the repository, based on dspace software, provides a typical web interface for browsing, searching and accessing the content. as a back-end application, it provides typical forms for capturing metadata and bitstreams, with some enhancements regarding the integration of authority control by means of external web services. from the point of view of the participating research institutions, a central requirement is the development of a local view (‘showcase’) on the repository’s data, so that this view can be smoothly integrated into the website of the institution. the web interface of the view is generated by means of the play framework in combination with the bootstrap framework for generating the layout, while all of the data is retrieved and requested from the dspace backend via its discovery interface and rest api. 
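how the showcase pulls an institutional subset from the dspace back end might look roughly like this url-building sketch. the endpoint path follows the pattern of the dspace (5.x/6.x) rest api, but it should be verified against the deployed version; the host name and collection id are hypothetical.

```python
# Sketch: build the request URL a showcase front end might use to fetch
# an institution's items from the DSpace REST API. The /rest/collections/
# {id}/items path follows DSpace 5.x/6.x conventions (verify against the
# deployed version); host and collection id are hypothetical.
from urllib.parse import urlencode

def collection_items_url(base, collection_id, limit=20, offset=0):
    query = urlencode({"limit": limit, "offset": offset, "expand": "metadata"})
    return f"{base}/rest/collections/{collection_id}/items?{query}"

url = collection_items_url("https://sowidatanet.example.org", 42)
print(url)
```

the showcase would issue a GET against this url and render the returned item metadata inside the institution's layout.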
sdn architecture diagram: sowidatanet software components. the purpose of the showcase application is to provide an institutional subset and view of the central repository’s data, which can easily be integrated into any institutional website, either as an iframe to be embedded by the institution (which might be considered an easy rather than a satisfactory technical solution), or as a stand-alone subpage being linked from the institution’s homepage, optionally using a proxy server for preserving the institutional domain namespace. while these solutions imply the standard way of hosting the showcase software, a third approach suggests the deployment of the showcase software on an institution’s server for customizing the application. in this case, every institution can modify the layout of their institutional view by customizing their institutional css file. because the css file is compiled from less with bootstrap, a lightweight possibility is to modify only some less variables, which then compile to an institutional css file. as a result of the requirements analysis conducted with the project partners (two research institutes from the social sciences), and in accordance with the survey results cited, there is a strong demand for managing not only data which is to be published in the central repository, but also data which is protected and circulates only among the members of the institution. moreover, this data is described by additional specific metadata containing internal hints on the availability restrictions and access conditions. hence, we had to distinguish between the following two basic use cases to be covered by the showcase: to provide a view on the public sdn data (‘data published’), and to provide a view on the public sdn data plus the internal institutional data resp. 
their corresponding metadata records, the latter only visible and accessible for institutional members (‘data in use’). from the perspective of a research institution and data provider, the second use case turned out to be the primary one, since it covers the institutional practices and workflows more than the publishing model does. as a matter of fact, research data is primarily generated, processed and shared in a protected environment, before it may eventually be published and distributed to a wider, potentially abstract and unknown community – and this fact must be acknowledged and reflected by a central research data repository aiming at contributions from researchers who are bound to an institution. if ‘data in use’ is to be integrated into the showcase as an internal view on protected data to be shared only within an institution, this means restricting access to this data on several levels. first, for every community (in the sense of an institution), we introduce a dspace collection for just this internal data, and protect it by assigning it to a dspace user role ‘internal[community_name]’. this role is associated with an ip range, so that only requests from that range will be assigned to the role ‘internal’ and granted access to the internal collection. in the context of our project, we enter only the ip of the showcase application, so that every user of this application will see the protected items. depending on where the showcase application or server is located, we have to take further steps: if the application or server is located in the institution’s intranet, the protected items are only visible and accessible from the institution’s network. 
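the ip-range gate described here can be sketched in a few lines. this is a minimal illustration of the mechanism, not dspace's implementation; the address range and role name are hypothetical.

```python
# Sketch: the IP-range check behind the 'internal' role. Requests from the
# showcase server's range (hypothetical 192.0.2.0/24 below) are granted the
# internal role and thus see the protected collection; all other requests
# get only the public view. Illustrative only - not DSpace's actual code.
import ipaddress

INTERNAL_RANGE = ipaddress.ip_network("192.0.2.0/24")  # hypothetical showcase range

def roles_for(request_ip):
    """Return the set of roles a request from this address would carry."""
    roles = {"anonymous"}
    if ipaddress.ip_address(request_ip) in INTERNAL_RANGE:
        roles.add("internal_community")  # grants access to the internal collection
    return roles

print(roles_for("192.0.2.17"))   # request via the showcase server
print(roles_for("203.0.113.5"))  # external request: public data only
```

in the project setup, the range would contain only the showcase application's address, so the internal view is effectively mediated by that application.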
if the application is externally hosted and accessible via the world wide web – which is expected to be the default solution for most of the research institutes – then the showcase application needs an authentication procedure, which is preferably realized by means of the central dspace sowidatanet repository, so that every user of the showcase application is granted access by becoming a dspace user. in the context of an r&d project where we are partnering with research institutes, it turned out that the management of research data is twofold: while repository providers are focused on publishing and unrestricted access to research data, researchers are mainly interested in local archiving and sharing of their data. in order to manage this data, the researchers’ institutional practices need to be reflected and supported. for this purpose, we developed an additional viewing and access component. when it comes to their integration with existing institutional research practices and workflows, the implementation of research data repositories requires concepts and actions which go far beyond the original idea of a central publishing platform. further research and development is planned in order to better understand and support the sharing of data in both institutional and cross-institutional subgroups, so that the integration with a public central repository will be fostered. link to prototype. references:
[ ] dataverse deposit-api. retrieved may , from http://guides.dataverse.org/en/ . . /dataverse-api-main.html#data-deposit-api
[ ] forschende und ihre daten. ergebnisse einer österreichweiten befragung – report . version . - zenodo. ( ). retrieved may , from https://zenodo.org/record/ #.vrhmkea pmm
[ ] project homepage: https://sowidatanet.de/. retrieved may .
[ ] research data management survey: report - nottingham eprints. ( ). retrieved may , from http://eprints.nottingham.ac.uk/ /
[ ] university of oxford research data management survey : the results | damaro. ( ). 
retrieved may , from https://blogs.it.ox.ac.uk/damaro/ / / /university-of-oxford-research-data-management-survey- -the-results/ dshr's blog: stablecoins part i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation. tuesday, august , stablecoins part i wrote stablecoins about tether and its "magic money pump" seven months ago. a lot has happened and a lot has been written about it since, and some of it explores aspects i didn't understand at the time, so below the fold at some length i try to catch up. source in the postscript to stablecoins i quoted david gerard's account of the december th pump that pushed btc over $ k: we saw about million tethers being lined up on binance and huobi in the week previously. these were then deployed en masse. you can see the pump starting at : utc on december. btc was $ , . on coinbase at : utc. notice the very long candles, as bots set to sell at $ , sell directly into the pump. in btc had dropped from around $ . k on march th to under $ k on march th. it spiked back up on march th, then gradually rose to just under $ k by october th. source during that time tether issuance went from $ . b to $ . b, an increase of over % with large jumps on four occasions: march - th: $ . b = $ . b to $ . b (a weekend) may - th $ . b = $ . b to $ . b july th- st $ . b = $ . b to $ b (a weekend) august - th $ . b = $ b to $ . b (a weekend) source then both btc and usdt really took off, with btc peaking april th at $ . k, and usdt issuing more than $ b. btc then started falling. tether continued to issue usdt, peaking days later on may th after nearly another $ b at $ . b. issuance slowed dramatically, peaking days later on june th at $ . b when btc had dropped to $ . k, % of the peak. since then usdt has faced gradual redemptions; it is now down to $ , b. what on earth is going on? how could usdt go from around $ b to around $ b in just over a year? 
tether source in crypto and the infinite ladder: what if tether is fake?, the first of a two-part series, fais kahn asks the same question: tether (usdt) is the most used cryptocurrency in the world, reaching volumes significantly higher than bitcoin. each coin is supposed to be backed by $ , making it “stable.” and yet no one knows if this is true. even more odd: in the last year, usdt has exploded in size even faster than bitcoin - going from $ b in market cap to over $ b in less than a year. this includes $ b of new supply - a straight line up - after the new york attorney general accused tether of fraud. i and many others have considered a scenario in which the admitted fact that usdt is not backed -for- by usd causes a "run on the bank". among the latest is taming wildcat stablecoins by gary gorton and jeffery zhang. zhang is one of the federal reserve's attorneys, but who is gary gorton? izabella kaminska explains: over the course of his career, gary gorton has gained a reputation for being something of an experts’ expert on financial systems. despite being an academic, this is in large part due to what might be described as his practitioner’s take on many key issues. the yale school of management professor is, for example, best known for a highly respected (albeit still relatively obscure) theory about the role played in bank runs by information-sensitive assets. ... the two authors make the implicit about stablecoins explicit: however you slice them, dice them or frame them in new technology, in the grand scheme of financial innovation stablecoins are actually nothing new. what they really amount to, they say, is another form of information-sensitive private money, with stablecoin issuers operating more like unregulated banks. gorton and zhang write: the goal of private money is to be accepted at par with no questions asked. this did not occur during the free banking era in the united states—a period that most resembles the current world of stablecoins. 
State-chartered banks in the Free Banking Era experienced panics, and their private monies made it very hard to transact because of fluctuating prices. That system was curtailed by the National Bank Act of , which created a uniform national currency backed by U.S. Treasury bonds. Subsequent legislation taxed the state-chartered banks' paper currencies out of existence in favor of a single sovereign currency.

Unlike me, Kahn is a "brown guy in fintech", so he is better placed to come up with answers than I am. For a start, he is skeptical of the USDT "bank run" scenario:

The unbacked scenario is what concerns investors. If there were a sudden drop in the market, and investors wanted to exchange their USDT for real dollars in Tether's reserve, that could trigger a "bank run" where the value dropped significantly below one dollar, and suddenly everyone would want their money. That could trigger a full-on collapse. But when might that actually happen? When Bitcoin falls in the frequent crypto bloodbaths, users actually buy Tether, fleeing to the safety of the dollar. This actually drives Tether's price up! The only scenario that could hurt is when Bitcoin goes up, and Tether demand drops. But hold on. It's extremely unlikely Tether is simply creating tokens out of thin air; at worst, there may be some fractional reserve (they themselves admitted at one point it was only % backed) that is split between USD and Bitcoin. The NY AG's statement that Tether had "no bank anywhere in the world" strongly suggests some money being held in crypto (Tether has stated this is true, but less than %), and Tether's own bank says they use Bitcoin to hold customer funds! That means in the event of a Tether drop/Bitcoin rise, they are hedged. Tether's own terms of service say users may not be redeemed immediately. Forced to wait, many users would flee to Bitcoin for lack of options, driving the price up again.
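Kahn's argument here is ultimately balance-sheet arithmetic: whether a run breaks the peg depends on the backing fraction and on which way the crypto portion of the reserve moves while holders redeem. A minimal sketch of that reasoning, with all figures invented for illustration (Tether's actual reserve composition is undisclosed):

```python
# Toy model of a fractionally-backed stablecoin facing redemptions.
# All figures below are hypothetical illustrations, not Tether's
# actual (undisclosed) reserve composition.

def peg_survives(supply_usd, cash_fraction, crypto_fraction,
                 crypto_price_change, redemptions_usd):
    """True if liquid reserves cover the attempted redemptions at par.

    supply_usd          -- stablecoins outstanding, at $1 face value
    cash_fraction       -- share of supply backed by cash/equivalents
    crypto_fraction     -- share of supply backed by crypto
    crypto_price_change -- +0.5 for a 50% rise, -0.5 for a 50% crash
    redemptions_usd     -- face value that holders try to redeem
    """
    cash = supply_usd * cash_fraction
    crypto = supply_usd * crypto_fraction * (1 + crypto_price_change)
    return cash + crypto >= redemptions_usd

# The hedge Kahn describes: if BTC rises while Tether demand drops,
# the crypto leg of the reserve appreciates and covers the run...
print(peg_survives(60e9, 0.50, 0.24, +0.5, 40e9))  # True
# ...but a run that coincides with a BTC crash breaks the peg.
print(peg_survives(60e9, 0.50, 0.24, -0.5, 40e9))  # False
```

The point of the sketch is the asymmetry Kahn identifies: a partially crypto-backed issuer is hedged against the "BTC up, stablecoin demand down" scenario, but not against redemptions during a general crash.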
Kahn agrees with me that Tether may have a magic "money" pump:

It's possible Tether didn't have the money at some point in the past. And it's just as possible that, with the massive run in Bitcoin the last year, Tether now has more than the $ B they claim! In that case Tether would seem to have constructed a perfect machine for printing money. (And America has a second central bank.)

Of course, the recent massive run down in Bitcoin will have caused the "machine for printing money" to start running in reverse. Matt Levine listened to an interview with Tether's CTO Paolo Ardoino and General Counsel Stuart Hoegner, and is skeptical about Tether's backing:

Tether is a stablecoin that we have talked about around here because it was sued by the New York Attorney General for lying about its reserves, and because it subsequently disclosed its reserves in a format that satisfied basically no one. Tether now says that its reserves consist mostly of commercial paper, which apparently makes it one of the largest commercial paper holders in the world. There is a fun game among financial journalists and other interested observers who try to find anyone who has actually traded commercial paper with Tether, or any of its actual holdings. The game is hard! As far as I know, no one has ever won it, or even scored a point; I have never seen anyone publicly identify a security that Tether holds or a counterparty that has traded commercial paper with it.

Levine contrasts Tether's reserve disclosure with that of another instrument that is supposed to maintain a stable value, a money market fund:

Here is the website for the JPMorgan Prime Money Market Fund. If you click on the tab labeled "Portfolio," you can see what the fund owns. The first item alphabetically is $ million face amount of asset-backed commercial paper issued by Alpine Securitization Corp. and maturing on Oct. . Its CUSIP — its official security identifier — is XMG .
There are certificates of deposit at big banks, repurchase agreements, even a little bit of non-financial commercial paper. ... You can see exactly how much (both face amount and market value), and when it matures, and the CUSIP for each holding. JPMorgan is not on the bleeding edge of transparency here or anything; this is just how money market funds work. You disclose your holdings.

Binance

But the big picture is that USDT pumped $ B into cryptocurrencies. Where did the demand for the $ B come from? In my view, some of it comes from whales accumulating dry powder to use in pump-and-dump schemes like the one illustrated above. But Kahn has two different suggestions. First:

One of the well-known uses for USDT is "shadow banking": since real US dollars are highly regulated, opening an account with Binance and buying USDT is a straightforward way to get a dollar account. The CEO of USDC himself admits in this CoinDesk article: "In particular in Asia where, you know, these are dollar-denominated markets, they have to use a shadow banking system to do it... You can't connect a bank account in China to Binance or Huobi. So you have to do it through shadow banking and they do it through Tether. And so it just represents the aggregate demand. Investors and users in Asia – it's a huge, huge piece of it."

Second:

Binance also hosts a massive perpetual futures market, which is "cash-settled" using USDT. This allows traders to make leveraged bets of x margin or more... which, in layman's terms, is basically a speculative casino. That market alone provides around ~$ B of daily volume, where users deposit USDT to trade on margin. As a result, Binance is by far the biggest holder of USDT, with $ B sitting in its wallet.

Wikipedia describes "perpetual futures" thus:

In finance, a perpetual futures contract, also known as a perpetual swap, is an agreement to non-optionally buy or sell an asset at an unspecified point in the future.
Perpetual futures are cash-settled, and differ from regular futures in that they lack a pre-specified delivery date, and can thus be held indefinitely without the need to roll over contracts as they approach expiration. Payments are periodically exchanged between holders of the two sides of the contracts, long and short, with the direction and magnitude of the settlement based on the difference between the contract price and that of the underlying asset, as well as, if applicable, the difference in leverage between the two sides.

In Is Tether a Black Swan?, Bernhard Mueller goes into more detail about Binance's market:

According to Tether's rich list, billion Tron USDT are held by Binance alone. The list also shows . B USDT in Huobi's exchange wallets. That's almost B USDT held by two exchanges. Considering those numbers, the value given by CryptoQuant appears understated. A more realistic estimate is that ~ % of the Tether supply ( . B USDT) is located on centralized exchanges. Interestingly, only a small fraction of those USDT shows up in spot order books. One likely reason is that a large share is sitting in wallets to collateralize derivative positions, in particular perpetual futures. The CEX futures market is essentially a casino where traders bet on crypto prices with insane amounts of leverage. And it's a massive market: futures trading on Binance alone generated $ billion in volume over the last hours. It's important to understand that USDT perpetual futures implementations are % USDT-based, including collateralization, funding and settlement. Prices are tied to crypto asset prices via clever incentives, but in reality, USDT is the only asset that ever changes hands between traders. This use-case generates significant demand for USDT.

Why is this "massive perpetual futures market" so popular? Kahn provides answers:

That crazed demand for margin trading is how we can explain one of the enduring mysteries of crypto: how users can get
% interest on their holdings when banks offer less than %.

The high interest is possible because:

The massive supply of USDT, and the host of other dollar stablecoins like USDC, PAX, and DAI, creates an arbitrage opportunity. This brings in capital from outside the ecosystem seeking the "free money", making trades like this using a combination of x leverage and . % variance between stablecoins to generate an % profit in just a few seconds. If you're only holding the bag for a minute, who cares if USDT is imaginary dollars?

Rollicking good times like these attract the attention of regulators, as Amy Castor reported on July nd in Binance: A Crypto Exchange Running Out of Places to Hide:

Binance, the world's largest dark crypto slush fund, is struggling to find corners of the world that will tolerate its lax anti-money laundering policies and flagrant disregard for securities laws.

As a result, Laurence Fletcher, Eva Szalay and Adam Samson report in Hedge Funds Back Away from Binance After Regulatory Assault:

The global regulatory pushback "should raise red flags for anyone keeping serious capital at the exchange", said Ulrik Lykke, executive director at ARK , adding that the fund has "scaled down" exposure. ... Lykke described it as "especially concerning" that the recent moves against Binance "involve multiple entities from across the financial sphere", such as banks and payments groups.

This leaves some serious money looking for an off-ramp from USDT to fiat. These are somewhat scarce:

If USDT holders on centralized exchanges chose to run for the exits, USD/USDC/BUSD liquidity immediately available to them would be relatively small. ~ billion USDT held on exchanges would be matched with perhaps ~ billion in fiat currency and USDC/BUSD.

This, and the addictive nature of "a casino ... with insane amounts of leverage", probably account for the relatively small drop in USDT market cap since June th.
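The arithmetic behind the "free money" trade Kahn quotes above is simply leverage multiplying a tiny price gap between two dollar stablecoins into a large return on capital. A sketch under invented numbers (Kahn's exact figures are elided above; the spread, leverage, and fee below are hypothetical):

```python
def leveraged_arb_return(spread, leverage, fee_rate=0.0):
    """Return on own capital from capturing a price gap between two
    dollar stablecoins using borrowed funds.

    spread   -- fractional price gap, e.g. 0.001 for 0.1%
    leverage -- notional traded / own capital, e.g. 100
    fee_rate -- round-trip trading fees as a fraction of notional
    """
    return leverage * (spread - fee_rate)

# A hypothetical 0.1% gap between two stablecoins, traded at 100x
# leverage, returns 10% on capital before fees:
print(f"{leveraged_arb_return(0.001, 100):.1%}")          # 10.0%
# Fees on the notional eat into it, but the position is held for
# seconds, so the annualized return -- and hence the deposit rates
# quoted to holders -- can be enormous.
print(f"{leveraged_arb_return(0.001, 100, 0.0002):.1%}")  # 8.0%
```

Since the position is open only briefly, the trader bears almost no exposure to whether USDT itself is fully backed, which is Kahn's point.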
Amy Castor reported July th on another reason in Binance: Fiat Off-Ramps Keep Closing, Reports of Frozen Funds, What Happened to Catherine Coley?:

Binance customers are becoming trapped inside of Binance — or at least their funds are — as the fiat exits to the world's largest crypto exchange close around them. You can almost hear the echoes of doors slamming, one by one, down a long empty corridor leading to nowhere. In the latest bit of unfolding drama, Binance told its customers today that it had disabled withdrawals in British pounds after its key payment partner, Clear Junction, ended its business relationship with the exchange. ... There's a lot of unhappy people on r/binanceus right now complaining their withdrawals are frozen or suspended — and they can't seem to get a response from customer support either. ... Binance is known for having "maintenance issues" during periods of heavy market volatility. As a result, margin traders, unable to exit their positions, are left to watch in horror while the exchange seizes their margin collateral and liquidates their holdings.

And it isn't just getting money out of Binance that is getting hard, as David Gerard reports:

Binance is totally not insolvent! They just won't give anyone their cryptos back because they're being super-compliant. KYC/AML laws are very important to Binance, especially if you want to get your money back after suspicious activity on your account — such as pressing the "withdraw" button. Please send more KYC. [Binance]

Issues like these tend to attract the attention of the mainstream press.
On July rd the New York Times' Eric Lipton and Ephrat Livni profiled Sam Bankman-Fried of the FTX exchange in Crypto Nomads: Surfing the World for Risk and Profit:

The highly leveraged form of trading these platforms offer has become so popular that the overall value of daily purchases and sales of these derivatives far surpasses the daily volume of actual cryptocurrency transactions, industry data analyzed by researchers at Carnegie Mellon University shows. ... FTX alone has one million users across the world and handles as much as $ billion a day in transactions, most of them derivatives trades. Like their customers, the platforms compete. Mr. Bankman-Fried from FTX, looking to out-promote BitMEX, moved to offer up to times leverage on derivatives trades. Mr. Zhao from Binance then bested them both by taking it to .

Then on the th, as the regulators' seriousness sank in, the same authors reported Leaders in Cryptocurrency Industry Move to Curb the Highest-Risk Trades:

Two of the world's most popular cryptocurrency exchanges announced on Sunday that they would curb a type of high-risk trading that has been blamed in part for sharp fluctuations in the value of Bitcoin and the casino-like atmosphere on such platforms globally. The first move came from the exchange FTX, which said it would reduce the size of the bets investors can make by lowering the amount of leverage it offers to times from times. Leverage multiplies the traders' chance for not only profit, but also loss. ... About hours later, Changpeng Zhao [CZ], the founder of Binance, the world's largest cryptocurrency exchange, echoed the move by FTX, announcing that his company had already started to limit leverage to times for new users and it would soon expand this limit to other existing clients.

Early the next day, Tom Schoenberg, Matt Robinson, and Zeke Faux reported for Bloomberg in Tether Executives Said to Face Criminal Probe Into Bank Fraud: U.S.
probe into Tether is homing in on whether executives behind the digital token committed bank fraud, a potential criminal case that would have broad implications for the cryptocurrency market. Tether's pivotal role in the crypto ecosystem is now well known because the token is widely used to trade Bitcoin. But the Justice Department investigation is focused on conduct that occurred years ago, when Tether was in its more nascent stages. Specifically, federal prosecutors are scrutinizing whether Tether concealed from banks that transactions were linked to crypto, said three people with direct knowledge of the matter who asked not to be named because the probe is confidential. Federal prosecutors have been circling Tether since at least . In recent months, they sent letters to individuals alerting them that they're targets of the investigation, one of the people said.

Once again, David Gerard pointed out the obvious market manipulation:

This week's "number go up" happened several hours before the report broke — likely when the Bloomberg reporter contacted Tether for comment. BTC/USD futures on Binance spiked to $ , , and the BTC/USD price on Coinbase spiked at $ , shortly after. Here's the one-minute candles on Coinbase BTC/USD around : UTC ( am BST on this chart) on July — the price went up $ , in three minutes. You've never seen something this majestically organic.

And so did Amy Castor in The DOJ's Criminal Probe Into Tether — What We Know:

Last night, before the news broke, Bitcoin was pumping like crazy. The price climbed nearly %, topping $ , . On Coinbase, the price of BTC/USD went up $ , in three minutes, a bit after : UTC. After a user placed a large number of buy orders for Bitcoin perpetual futures denominated in Tethers (USDT) on Binance — an unregulated exchange struggling with its own banking issues — the BTC/USDT perpetual contract hit a high of $ , at around : UTC on the exchange.
Bitcoin pumps are a good way to get everyone to ignore the impact of bad news and focus on number go up. "Hey, this isn't so bad. Bitcoin is going up in price. I'm rich!"

As shown in the graph, the perpetual futures market is at least an order of magnitude larger than the spot market upon which it is based. And as we saw, for example, on December th and July th, the spot market is heavily manipulated. Pump-and-dump schemes in the physical market are very profitable, and connecting them to the casino in the futures market with its insane leverage can juice profitability enormously.

Tether and Binance

Fais Kahn's second part, Bitcoin's End: Tether, Binance and the White Swans That Could Bring It All Down, explores the mutual dependency between Tether and Binance:

There are $ B of USDT tokens in circulation, much of which exists to fuel the massive casino that is the perpetual futures market on Binance. These complex derivatives markets, which are illegal to trade in the US, run in the tens of billions and help drive up the price of Bitcoin by generating the basis trade.

The "basis trade":

involves buying a commodity at spot (taking a long position) and simultaneously establishing a short position through derivatives like options or futures contracts

Kahn continues:

For Binance to allow traders to make such crazy bets, it needs collateral to make sure that if traders get wiped out, Binance doesn't go bankrupt. That collateral is now an eye-popping $ B, having grown from $ B in February and $ B in May. But for that market to work, Binance needs USDT. And getting fresh USDT is a problem now that the exchange, which has always been known for its relaxed approach to following the laws, is under heavy scrutiny from the US Department of Justice and IRS: so much so that their only US dollar provider, Silvergate Bank, recently terminated their relationship, suggesting major concerns about the legality of some of Binance's activities.
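The basis trade defined above is mechanical enough to sketch: buy spot, short a futures contract trading at a premium, and pocket the difference regardless of where the price goes. A sketch with invented prices (for perpetual futures, periodic funding payments play the role of the expiry premium):

```python
def basis_annualized(spot_price, futures_price, days_to_expiry):
    """Annualized return from buying at spot and shorting a futures
    contract at a premium -- the 'basis trade'. The position is
    price-neutral: gains on one leg offset losses on the other,
    leaving only the premium, captured as the two prices converge
    at expiry."""
    basis = (futures_price - spot_price) / spot_price
    return basis * 365 / days_to_expiry

# Invented illustration: BTC at $35,000 spot and $36,050 on a
# future expiring in 90 days is a 3% premium, ~12.2% annualized.
print(f"{basis_annualized(35_000, 36_050, 90):.1%}")  # 12.2%
```

When leveraged long demand pushes the futures premium up, this is the arbitrage that pulls outside capital into USDT, which is the linkage Kahn describes.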
This means users can no longer transfer US dollars from their banks to Binance, which were likely often used to fund purchases of USDT. Since that shutdown, the linkages between Binance, USDT, and the basis trade are now clearer than ever. In the last month, the issuance of USDT has completely stopped. Likewise, futures trading has fallen significantly. This confirms that most of the USDT demand likely came from leveraged traders who needed more and more chips for the casino. Meanwhile, the basis trade has completely disappeared at the same time.

Which is the chicken and which is the egg? Did the massive losses in Bitcoin kill all the craziest players and end the free money bonanza, or did Binance's banking troubles choke off the supply of dollars, ending the game for everyone? Either way, the link between futures, USDT, and the funds flooding the crypto world chasing free money appears to be broken for now.

This is a problem for Binance:

Right now Tether is Binance's $ B problem. At this point, Binance is holding so much Tether that the exchange is far more dependent on USDT's peg staying stable than it is on any of its banking relationships. If that peg were to break, Binance would likely see capital flight on a level that would wreak untold havoc in the crypto markets ... Regulators have been increasing the pace of their enforcements. In other words, they are getting pissed, and the BitMEX founders going to jail is a good example of what might await. Binance has been doing all it can to avoid scrutiny, and you have to award points for creativity. The exchange was based in Malta, until Malta decided Binance had "no license" to operate there, and that Malta did not have jurisdiction to regulate them. As a result, CZ began to claim that Binance "doesn't have" a headquarters. Wonder why? Perhaps to avoid falling under anyone's direct jurisdiction, or to avoid a paper trail?
CZ went on to reply only that he is based in "Asia." Given what China did to Jack Ma recently, we can empathize with a desire to stay hidden, particularly when unregulated exchanges are a key rail for evading China's strict capital controls. Any surprise that the CFO quit last month?

But it is also a problem for Tether:

Here's what could trigger a cascade that could bring the exchange down, and much of crypto with it: the DOJ and IRS crack down on Binance, either by filing charges against CZ or by pushing Biden and Congress to give them the death penalty: full-on sanctions. This would lock them out of the global financial system, cause withdrawals to skyrocket, and eventually drive them to redeem that $ B of USDT they are sitting on. And what will happen to Tether if they need to suddenly sell or redeem those billions? We have no way of knowing. Even if fully collateralized, Tether would need to sell billions in commercial paper on short notice. And in the worst case, the peg would break, wreaking absolute havoc and crushing crypto prices. Alternatively, it's possible that regulators will move as slowly as they have been all along, with one country at a time unplugging Binance from its banking system until the exchange eventually shrinks down to be less of a systemic risk than it is.

That's my guess — it will become increasingly difficult either to get USD or cryptocurrency out of Binance's clutches, or to send them fiat, as banks around the world realize that doing business with Binance is going to get them in trouble with their regulators. Once customers realize that Binance has become a "roach motel" for funds, and that about % of USDT is locked up there, things could get quite dynamic.

Kahn concludes:

Everything around Binance and Tether is murky, even as these two entities dominate the crypto world. Tether redemptions are accelerating, and Binance is in trouble, but why some of these things are happening is guesswork.
And what happens if something happens to one of those two? We're entering some uncharted territory. But if things get weird, don't say no one saw it coming.

Policy Responses

Gorton and Zhang argue that the modern equivalent of the "free banking" era is fraught with too many risks to tolerate. David Gerard provides an overview of the era in Stablecoins Through History — Michigan Bank Commissioners Report:

The wildcat banking era, more politely called the "free banking era," ran from to . Banks at this time were free of federal regulation — they could launch just under state regulation. Under the gold standard in operation at the time, these state banks could issue notes, backed by specie — gold or silver — held in reserve. The quality of these reserves could be a matter of some dispute. The wildcat banks didn't work out so well. The National Bank Act was passed in , establishing the United States national banking system and the Office of the Comptroller of the Currency — and taking away the power of state banks to issue paper notes.

Gerard's account draws from a report of Michigan's state banking commissioners, Documents Accompanying the Journal of the House of Representatives of the State of Michigan, pp. – , which makes clear that Tether's lack of transparency as to its reserves isn't original.
Banks were supposed to hold "specie" (money in the form of coin) as backing, but:

The banking system at the time featured barrels of gold that were carried to other banks, just ahead of the inspectors.

For example, the commissioners reported that:

The Farmers' and Mechanics' Bank of Pontiac presented a more favorable exhibit in point of solvency, but the undersigned having satisfactorily informed himself that a large proportion of the specie exhibited to the commissioners, at a previous examination, as the bona fide property of the bank, under the oath of the cashier, had been borrowed for the purpose of exhibition and deception; that the sum of ten thousand dollars which had been issued for "exchange purposes," had not been entered on the books of the bank, reckoned among its circulation, or explained to the commissioners.

Gorton and Zhang summarize the policy choices thus:

Based on historical lessons, the government has a couple of options: ( ) transform stablecoins into the equivalent of public money by (a) requiring stablecoins to be issued through FDIC-insured banks or (b) requiring stablecoins to be backed one-for-one with Treasuries or reserves at the central bank; or ( ) introduce a central bank digital currency and tax private stablecoins out of existence.

Their suggestions for how to implement the first option include:

- The interpretation of Section of the Glass-Steagall Act, under which "it is unlawful for a non-bank entity to engage in deposit-taking".
- The interpretation of Title VIII of the Dodd-Frank Act, under which the Financial Stability Oversight Council could "designate stablecoin issuance as a systemic payment activity". This "would give the Federal Reserve the authority to regulate the activity of stablecoin issuance by any financial institution."
- Congress could pass legislation that requires stablecoin issuers to become FDIC-insured banks or to run their business out of FDIC-insured banks.
As a result, stablecoin issuers would be subject to the regulations and supervisory activities that come along with being an FDIC-insured bank. Alternatively, the second option would involve:

Congress could require the Federal Reserve to issue a central bank digital currency as a substitute for privately produced digital money like stablecoins ... The question then becomes whether policymakers would want to have central bank digital currencies coexist with stablecoins or to have central bank digital currencies be the only form of money in circulation. As discussed previously, Congress has the legal authority to create a fiat currency and to tax competitors of that uniform national currency out of existence.

They regard the key attribute of an instrument that acts as money to be that it is accepted at face value "no questions asked" (NQA). Thus, based on history, they ask:

In other words, should the sovereign have a monopoly on money issuance? As shown by revealed preference in the table below, the answer is yes. The provision of NQA money is a public good, which only the government can supply.

Posted by David. at : AM
Labels: bitcoin

Comments:

David. said...

Katanga Johnson reports for Reuters that U.S. SEC Chair Gensler calls on Congress to help rein in crypto 'Wild West':

"Gary Gensler said the crypto market involves many tokens which may be unregistered securities and leaves prices open to manipulation and millions of investors vulnerable to risks. "This asset class is rife with fraud, scams and abuse in certain applications," Gensler told a global conference. "We need additional congressional authorities to prevent transactions, products and platforms from falling between regulatory cracks." ... He also called on lawmakers to give the SEC more power to oversee crypto lending, and platforms like peer-to-peer decentralized finance (DeFi) sites that allow lenders and borrowers to transact in cryptocurrencies without traditional banks."

August , at : PM

David. said...
Carol Alexander's Binance's Insurance Fund is a fascinating, detailed examination of Binance's extremely convenient "outage" as BTC crashed on May:

"How insufficient insurance funds might explain the outage of Binance's futures platform on May, and the potentially toxic relationship between Binance and Tether."

August , at : AM

David. said...

Gary Gensler's "Wild West" comment is refuted in David Segal's hysterically funny Going for Broke in Cryptoland:

"Cryptoland is often likened to the Wild West, but that's unfair to the Wild West. It had sheriffs, courts, the occasional posse. There isn't a cop in sight in Cryptoland. If someone steals your crypto, tough."

Other snippets include:

"The journey from PancakeSwap to my crypto wallet took four and a half hours. Which pointed up another surprise about Cryptoland. It's absurdly slow."

And:

"Crypto also offers something deeper and more gratifying than a regular investment. It offers meaning. The more time you spend in a cryptocurrency chat the more elements it seems to share with a religious sect. Belief is required. Heretics, in the form of those Telegram dissenters, are banished. And if you stick around long enough, the proselytizing begins. "Once you start seeing the potential of this project for the rest of the world," Mr. Danci told me during the Telegram chat, "you will want to start promoting it yourself because it is really game changing." "Resistance is futile!" someone piped up, with a laugh. Investing in crypto holds out the prospect of a jackpot and the chance to bond over a shared catechism. It's like a church social in a casino. One attendee said he spent about hours a day on the FEG chat. "I talk to these people more than I talk to the friends I grew up with," he said."
August , at : PM
( ) ►  november ( ) ►  october ( ) ►  september ( ) ►  august ( ) ►  july ( ) ►  june ( ) ►  may ( ) ►  march ( ) ►  february ( ) ►  january ( ) ►  ( ) ►  december ( ) ►  november ( ) ►  october ( ) ►  september ( ) ►  august ( ) ►  july ( ) ►  june ( ) ►  may ( ) ►  april ( ) ►  march ( ) ►  february ( ) ►  january ( ) ►  ( ) ►  december ( ) ►  november ( ) ►  october ( ) ►  september ( ) ►  july ( ) ►  june ( ) ►  february ( ) ►  ( ) ►  july ( ) ►  june ( ) ►  may ( ) ►  april ( ) ►  march ( ) ►  january ( ) ►  ( ) ►  december ( ) ►  march ( ) ►  january ( ) ►  ( ) ►  december ( ) ►  october ( ) ►  september ( ) ►  august ( ) ►  july ( ) ►  june ( ) ►  may ( ) ►  april ( ) lockss system has permission to collect, preserve, and serve this archival unit. simple theme. powered by blogger. recent uploads tagged code lib recent uploads tagged code lib img_ img_ img_ img_ img_ img_ img_ img_ img_ img_ img_ img_ img_ img_ img_ img_ img_ img_ img_ img_
    fatal error: cannot declare class wp_block_template, because the name is already in use in /home/customer/www/acrl.ala.org/public_html/techconnect/wp-content/plugins/gutenberg/lib/full-site-editing/class-wp-block-template.php on line
getting real: the smarter, faster, easier way to build a successful web application | basecamp

getting real. a must read for anyone building a web app. getting real is packed with keep-it-simple insights, contrarian points of view, and unconventional approaches to software design. this isn't a technical book or a design tutorial, it's a book of ideas. anyone working on a web app, including entrepreneurs, designers, programmers, executives, or marketers, will find value and inspiration in this book. read it online or download a pdf.

"i got more out of reading this little e-book than just about any other computer-related book i've ever read on any topic that i can possibly think of. whoa." -jared white
"getting real is now officially our 'bible.'" -bill emmack
"i can honestly say that this is the first book i've read about software development that has been able to reignite my passion for the process. it is an incredible and very relevant book. thank you guys for publishing it." -anthony papillion

full list of essays included in the book:
introduction: what is getting real? about basecamp. caveats, disclaimers, and other preemptive strikes.
the starting line: build less. what's your problem? fund yourself. fix time and budget, flex scope. have an enemy. it shouldn't be a chore.
stay lean: less mass. lower your cost of change. the three musketeers. embrace constraints. be yourself.
priorities: what's the big idea? ignore details early on. it's a problem when it's a problem. hire the right customers. scale later. make opinionated software.
feature selection: half, not half-assed. it just doesn't matter. start with no. hidden costs. can you handle it? human solutions. forget feature requests. hold the mayo.
process: race to running software. rinse and repeat. from idea to implementation. avoid preferences. "done!" test in the wild. shrink your time.
the organization: unity. alone time. meetings are toxic. seek and celebrate small victories.
staffing: hire less and hire later. kick the tires. actions, not words. get well rounded individuals. you can't fake enthusiasm. wordsmiths.
interface design: interface first. epicenter design. three state solution. the blank slate. get defensive. context over consistency. copywriting is interface design. one interface.
code: less software. optimize for happiness. code speaks. manage debt. open doors.
words: there's nothing functional about a functional spec. don't do dead documents. tell me a quick story. use real words. personify your product.
pricing and signup: free samples. easy on, easy off. silly rabbit, tricks are for kids. a softer bullet.
promotion: hollywood launch. a powerful promo site. ride the blog wave. solicit early. promote through education. feature food. track your logs. inline upsell. name hook.
support: feel the pain. zero training. answer quick. tough love. in fine forum. publicize your screwups.
post-launch: one month tuneup. keep the posts coming. better, not beta. all bugs are not created equal. ride out the storm. keep up with the joneses. beware the bloat monster. go with the flow.

other books by basecamp: shape up. it doesn't have to be crazy at work. rework. remote: office not required.
binance: a crypto exchange running out of places to hide – amy castor

binance, the world's largest dark crypto slush fund, is struggling to find corners of the world that will tolerate its lax anti-money laundering policies and flagrant disregard for securities laws. on thursday, the cayman islands monetary authority issued a statement that binance, the binance group and binance holdings limited are not registered, licensed, regulated, or otherwise authorized to operate a crypto exchange in the cayman islands.
“following recent press reports that have referred to binance, the binance group and binance holdings limited as being a crypto-currency company operating an exchange based in the cayman islands, the authority reiterates that binance, the binance group or binance holdings limited are not subject to any regulatory oversight by the authority,” the statement said. this is clearly cima reacting to everyone else blaming binance on the caymans, where it’s been incorporated since . on friday, thailand’s securities and exchange commission filed a criminal complaint against the crypto exchange for operating a digital asset business without a license within its borders. last week, binance opted to close up shop in ontario rather than meet the fate of other cryptocurrency exchanges that have had actions filed against them for allegedly failing to comply with ontario securities laws. singapore’s central bank, the monetary authority of singapore, said thursday that it would look into binance asia services pte., the local unit of binance holdings, bloomberg reported. the binance subsidiary applied for a license to operate in singapore. while it awaits a review of its license application, binance asia services has a grace period that allows it to continue to operate in the city-state. “we are aware of the actions taken by other regulatory authorities against binance and will follow up as appropriate,” the mas said in a statement. on june , the uk’s financial conduct authority issued a consumer warning that binance’s uk entity, binance markets limited, was prohibited from doing business in the country. “due to the imposition of requirements by the fca, binance markets limited is not currently permitted to undertake any regulated activities without the prior written consent of the fca,” the regulator said.
it continued: “no other entity in the binance group holds any form of uk authorisation, registration or licence to conduct regulated activity in the uk.”  following the uk’s financial watchdog crackdown, binance customers were temporarily frozen out of faster payments, a major uk interbank payments platform. withdrawals were reinstated a few days later. only a few days before, japan’s financial services agency issued a warning that binance was operating in the country without a license. (as i explain below, this is the second time the fsa has issued such a warning.) last summer, malaysia’s securities commission also added binance to its list of unauthorised entities, indicating binance was operating without a license in the malaysian market. a history of bouncing around binance offers a wide range of services, from crypto spot and derivatives trading to tokenized versions of corporate stocks. it also runs a major crypto exchange and has its own cryptocurrency, binance coin (bnb), currently the fifth largest crypto by market cap, according to coinmarketcap.  the company was founded in hong kong in the summer of by changpeng zhao, more commonly known as “cz.”  china banned bitcoin exchanges a few months later, and ever since, binance has been bouncing about in search of a more tolerant jurisdiction to host its offices and servers.   its first stop after hong kong was japan, but japan was quick to put up the “you’re not welcome here” sign. the country’s financial services agency sent binance its first warning in march .  “the exchange has irked the fsa by failing to verify the identification of japanese investors at the time accounts are opened. the japanese officials suspect binance does not have effective measures to prevent money laundering; the exchange handles a number of virtual currencies that are traded anonymously,” nikkei wrote.  
binance responded by moving its corporate registration to the cayman islands and opening a branch office in malta, the ft reported in march . in february , however, maltese authorities announced binance was not licensed to do business in the island country.  “following a report in a section of the media referring to binance as a ‘malta-based cryptocurrency’ company, the malta financial services authority (mfsa) reiterates that binance is not authorised by the mfsa to operate in the crypto currency sphere and is therefore not subject to regulatory oversight by the mfsa.” the ‘decentralized’ excuse cz lives in singapore but has continually refused to say where his company is headquartered, insisting over and over again that binance is decentralized. this is absolute nonsense, of course. the company is run by real people and its software runs on real servers. the problem is, cz, whose net worth forbes estimated to be $ billion in , doesn’t want to abide by real laws.  as a result, his company faces a slew of other problems.  binance is currently under investigation by the us department of justice and the internal revenue service, bloomberg reported in may. it’s also being probed by the commodity futures trading commission over whether it allowed us residents to place wagers on the exchange, according to another bloomberg report.  also in may, germany’s financial regulator bafin warned that binance risked being fined for offering its securities-tracking tokens without publishing an investor prospectus. binance offers “stock tokens” representing microstrategy, microsoft, apple, tesla, and coinbase global.   binance has for five years done whatever it pleases, all the while using the excuse of “decentralization” to ignore laws and regulations. regulators are finally putting their collective foot down. enough is enough. image: changpeng zhao, youtube if you like my work, please subscribe to my patreon account for as little as $ a month.  
posted on july by amy castor in blogging.

one thought on "binance: a crypto exchange running out of places to hide": steve says: the sooner cz is out of the crypto space the better!

dshr's blog: economics of evil revisited. i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation.

thursday, july , economics of evil revisited. eight years ago i wrote economics of evil about the death of google reader and google's habit of leaving its customers (users) in the lurch. in the comments to the post i started keeping track of accessions to le petit musée des projets google abandonnés. so far i've recorded at least dead products, an average of more than a year. two years ago ron amadeo wrote about the problem this causes in google's constant product shutdowns are damaging its brand: we are days into the year, and so far, google is racking up an unprecedented body count. if we just take the official shutdown dates that have already occurred in , a google-branded product, feature, or service has died, on average, about every nine days.
below the fold, some commentary on amadeo's latest report from the killing fields, in which he detects a little remorse. belatedly, someone at google seems to have realized that repeatedly suckering people into using one of your products then cutting them off at the knees, in some cases with one week's notice, can reduce their willingness to use your other products. and they are trying to do something about it, as amadeo writes in google cloud offers a model for fixing google’s product-killing reputation: a google division with similar issues is google cloud platform, which asks companies and developers to build a product or service powered by google's cloud infrastructure. like the rest of google, cloud platform has a reputation for instability, thanks to quickly deprecating apis, which require any project hosted on google's platform to be continuously updated to keep up with the latest changes. google cloud wants to address this issue, though, with a new "enterprise api" designation. what google means by "enterprise api" is: our working principle is that no feature may be removed (or changed in a way that is not backwards compatible) for as long as customers are actively using it. if a deprecation or breaking change is inevitable, then the burden is on us to make the migration as effortless as possible. they then have this caveat: the only exception to this rule is if there are critical security, legal, or intellectual property issues caused by the feature. and go on to explain what should happen: customers will receive a minimum of one year’s notice of an impending change, during which time the feature will continue to operate without issue. customers will have access to tools, docs, and other materials to migrate to newer versions with equivalent functionality and performance. we will also work with customers to help them reduce their usage to as close to zero as possible. 
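google's policy text quoted above promises a minimum of one year's notice before a breaking change, but does not say how that notice would be delivered. one standardized way for an api to announce impending retirement is the http sunset header (rfc 8594); the sketch below is illustrative only (the header name is real, the threshold and function names are mine) of how a client might hold a provider to that promise:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
from typing import Optional

MIN_NOTICE_DAYS = 365  # the "minimum of one year's notice" in the quoted policy


def days_until_sunset(sunset_header: str, now: Optional[datetime] = None) -> int:
    """Parse an RFC 8594 Sunset header (an HTTP-date) and return whole days left."""
    sunset = parsedate_to_datetime(sunset_header)
    now = now or datetime.now(timezone.utc)
    return (sunset - now).days


def check_deprecation(headers: dict, now: Optional[datetime] = None) -> Optional[str]:
    """Warn when a response advertises retirement inside the promised notice window."""
    sunset = headers.get("Sunset")
    if sunset is None:
        return None  # no retirement announced
    remaining = days_until_sunset(sunset, now)
    if remaining < MIN_NOTICE_DAYS:
        return f"API feature retires in {remaining} days; plan migration now"
    return None
```

a client running a check like this on every response would at least notice when a provider's deprecation clock starts, rather than discovering it at shutdown.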
this sounds good, but does anyone believe if google encountered "critical security, legal, or intellectual property issues" that meant they needed to break customer applications they'd wait a year before fixing them? amadeo points out that: despite being one of the world's largest internet companies and basically defining what modern cloud infrastructure looks like, google isn't doing very well in the cloud infrastructure market. analyst firm canalys puts google in a distant third, with percent market share, behind microsoft azure ( percent) and market leader amazon web services ( percent). rumor has it (according to a report from the information) that google cloud platform is facing a deadline to beat aws and microsoft, or it will risk losing funding. the linked story actually says: while the company has invested heavily in the business since last year, google wants its cloud group to outrank those of one or both of its two main rivals by . on canalys' numbers, the "and" target to beat (aws plus azure) has happy customers forming % of the market. so there is % of the market up for grabs. if google added every single one of them to its % they still wouldn't beat a target of "both". adding six times their customer base in years isn't a realistic target. even the "or" target of azure is unrealistic, since google's market share has been static while azure's has been growing slowly. catching up in the years remaining would involve adding % of google's current market share. so le petit musée better be planning to enlarge its display space to make room for a really big new exhibit in . posted by david. at : am labels: cloud economics comment: david. said...
writing about gartner's latest cloud report, tim anderson recounts google's customers' unhappiness: "expressed concern about low post-sales satisfaction when dealing with google cloud platform (gcp), aggressive pricing that may not be maintained, and that gcp is the only hyperscale provider reporting a financial loss for this part of its business ($ m)." august , at : pm

dshr's blog: a modest proposal about ransomware

thursday, july , a modest proposal about ransomware. on the evening of july nd the revil ransomware gang exploited a -day vulnerability to launch a supply chain attack on customers of kaseya's virtual system administrator (vsa) product. the timing was perfect, with most system administrators off for the july th long weekend. by the th alex marquardt reported that kaseya says up to , businesses compromised in massive ransomware attack.
revil, which had previously extorted $ m from meat giant jbs, announced that for the low, low price of only $ m they would provide everyone with a decryptor. the us government's pathetic response is to tell the intelligence agencies to investigate and to beg putin to crack down on the ransomware gangs. good luck with that! it isn't his problem, because the gangs write their software to avoid encrypting systems that have default languages from the former ussr. i've written before (here, here, here) about the importance of disrupting the cryptocurrency payment channel that enables ransomware, but it looks like the ransomware crisis has to get a great deal worse before effective action is taken. below the fold i lay out a modest proposal that could motivate actions that would greatly reduce the risk. it turns out that the vulnerability that enabled the revil attack didn't meet the strict definition of a -day. gareth corfield's "white hats reported key kaseya vsa flaw months ago. ransomware outran the patch" explains: rewind to april, and the dutch institute for vulnerability disclosure (divd) had privately reported seven security bugs in vsa to kaseya. four were fixed and patches released in april and may. three were due to be fixed in an upcoming release, version . . . unfortunately, one of those unpatched bugs – cve- - , a credential-leaking logic flaw discovered by divd's wietse boonstra – was exploited by the ransomware slingers before its fix could be emitted. divd praised kaseya's response: once kaseya was aware of our reported vulnerabilities, we have been in constant contact and cooperation with them. when items in our report were unclear, they asked the right questions. also, partial patches were shared with us to validate their effectiveness. during the entire process, kaseya has shown that they were willing to put in the maximum effort and initiative into this case both to get this issue fixed and their customers patched.
they showed a genuine commitment to do the right thing. unfortunately, we were beaten by revil in the final sprint, as they could exploit the vulnerabilities before customers could even patch. but if kaseya's response to divd's disclosure was praiseworthy, it turns out it was the exception. in "kaseya was warned about security flaws years ahead of ransomware attack", j. fingas reports that: the giant ransomware attack against kaseya might have been entirely avoidable. former staff talking to bloomberg claim they warned executives of "critical" security flaws in kaseya's products several times between and , but that the company didn't truly address them. multiple staff either quit or said they were fired over inaction. employees reportedly complained that kaseya was using old code, implemented poor encryption and even failed to routinely patch software. the company's virtual system administrator (vsa), the remote maintenance tool that fell prey to ransomware, was supposedly rife with enough problems that workers wanted the software replaced. one employee claimed he was fired two weeks after sending executives a -page briefing on security problems. others simply left in frustration with a seeming focus on new features and releases instead of fixing basic issues. kaseya also laid off some employees in in favor of outsourcing work to belarus, which some staff considered a security risk given local leaders' partnerships with the russian government. ... the company's software was reportedly used to launch ransomware at least twice between and , and it didn't significantly rethink its security strategy. to reiterate: the july nd attack was apparently at least the third time kaseya had infected customers with ransomware! kaseya outsourced development to belarus, a country where ransomware gangs have immunity! kaseya fired security whistleblowers! the first two incidents didn't seem to make either kaseya or its customers re-think what they were doing.
clearly, the only reason kaseya responded to divd's warning was the threat of public disclosure. without effective action to change this attitude the ransomware crisis will definitely result in what stephen diehl calls the oncoming ransomware storm: imagine a hundred new stuxnet-level exploits every day, for every piece of equipment in public works and health care. where every day you check your phone for the level of ransomware in the wild just like you do the weather. entire cities randomly have their metro systems, water, power grids and internet shut off and on like a sudden onset of bad cybersecurity "weather". or a time in business in which every company simply just allocates a portion of its earnings upfront every quarter and pre-pays off large ransomware groups in advance. it's just a universal cost of doing business and one that is fully sanctioned by the government because we've all just given up trying to prevent it and it's more efficient just to pay the protection racket. to make things worse, companies can insure against the risk of ransomware, essentially paying to avoid the hassle of maintaining security. insurance companies can't price these policies properly, because they can't do enough underwriting to know, for example, whether the customer's backups actually work and whether they are offline enough so the ransomware doesn't encrypt them too. in "cyber insurance model is broken, consider banning ransomware payments, says think tank", gareth corfield reports on the royal united services institute's (rusi) latest report, cyber insurance and the cyber security challenge: unfortunately, rusi's researchers found that insurers tend to sell cyber policies with minimal due diligence – and when the claims start rolling in, insurance company managers start looking at ways to escape an unprofitable line of business. ...
rusi's position on buying off criminals is unequivocal, with [jason] nurse and co-authors jamie maccoll and james sullivan saying in their report that the uk's national security secretariat "should conduct an urgent policy review into the feasibility and suitability of banning ransom payments." the fundamental problem is that neither the software vendors nor the insurers nor their customers are taking security seriously enough because it isn't a big enough crisis yet. the solution? take control of the crisis and make it big enough that security gets taken seriously. the us always claims to have the best cyber-warfare capability on the planet, so presumably they could do ransomware better and faster than gangs like revil. the us should use this capability to mount ransomware attacks against us companies as fast as they can. victims would see, instead of a screen demanding a ransom in monero to decrypt their data, a screen saying:

us government cybersecurity agency
patch the following vulnerabilities immediately!
the cybersecurity agency (csa) used some or all of the following vulnerabilities to compromise your systems and display this notice:
cve- -xxxxx
cve- -yyyyy
cve- -zzzzz
three days from now if these vulnerabilities are still present, the csa will encrypt your data. you will be able to obtain free decryption assistance from the csa once you can prove that these vulnerabilities are no longer present.

if the victim ignored the notice, three days later they would see:

us government cybersecurity agency
the cybersecurity agency (csa) used some or all of the following vulnerabilities to compromise your systems and encrypt your data:
cve- -xxxxx
cve- -yyyyy
cve- -zzzzz
once you have patched these vulnerabilities, click here to decrypt your data. three days from now if these vulnerabilities are still present, the csa will re-encrypt your data.
for a fee you will be able to obtain decryption assistance from the csa once you can prove that these vulnerabilities are no longer present.

the program would start out fairly gentle and ramp up, shortening the grace period to increase the impact. the program would motivate users to keep their systems up-to-date with patches for disclosed vulnerabilities, which would not merely help with ransomware, but also with botnets, data breaches and other forms of malware. it would also raise the annoyance factor customers face when their supplier fails to provide adequate security in their products. this in turn would provide reputational and sales pressure on suppliers to both secure their supply chain and, unlike kaseya, prioritize security in their product development. of course, the program above only handles disclosed vulnerabilities, not the -days revil used. there is a flourishing trade in -days, of which the nsa is believed to be a major buyer. the supply in these markets is increasing, as dan goodin reports in ios zero-day let solarwinds hackers compromise fully updated iphones: in the first half of this year, google's project zero vulnerability research group has recorded zero-day exploits used in attacks— more than the total number from . the growth has several causes, including better detection by defenders and better software defenses that require multiple exploits to break through. the other big driver is the increased supply of zero-days from private companies selling exploits. " -day capabilities used to be only the tools of select nation-states who had the technical expertise to find -day vulnerabilities, develop them into exploits, and then strategically operationalize their use," the google researchers wrote. "in the mid-to-late s, more private companies have joined the marketplace selling these -day capabilities.
no longer do groups need to have the technical expertise; now they just need resources.” the ios vulnerability was one of four in-the-wild zero-days google detailed on wednesday. ... based on their analysis, the researchers assess that three of the exploits were developed by the same commercial surveillance company, which sold them to two different government-backed actors. as has been true since the cold-war era and the "crypto wars" of the s when cryptography was considered a munition, the us has prioritized attack over defense. the nsa routinely hoards -days, preferring to use them to attack foreigners rather than disclose them to protect us citizens (and others). this short-sighted policy has led to several disasters, including the juniper supply-chain compromise and notpetya. senators wrote to the head of the nsa, and the eff sued the director of national intelligence, to obtain the nsa's policy around -days: since these vulnerabilities potentially affect the security of users all over the world, the public has a strong interest in knowing how these agencies are weighing the risks and benefits of using zero days instead of disclosing them to vendors, it would be bad enough if the nsa and other nations' security services were the only buyers of -days. but the $ m revil received from jbs buys a lot of them, and if each could net $ m they'd be a wonderful investment. forcing ransomware gangs to use -days by getting systems up-to-date with patches is good, but the gangs will have -days to use. so although the program above should indirectly reduce the supply (and thus increase the price) of -days by motivating vendors to improve their development and supply chain practices, something needs to be done to reduce the impact of -days on ransomware. 
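an aside on the former-ussr language exemption mentioned above: the reported gang practice is to check the system's default language (not, say, its keyboard layouts) and exit without encrypting on a match. the check is trivially small, which is part of why it is so widely copied. a purely illustrative sketch; the language list and function names here are my guesses, not taken from any actual malware sample, and real windows samples reportedly test numeric language identifiers rather than locale strings:

```python
import locale

# Illustrative guess at a "former USSR" language set (primary language subtags).
CIS_LANGS = {"ru", "uk", "be", "kk", "hy", "az", "ka", "uz", "tg", "tk", "ky"}


def would_be_skipped(lang_tag: str) -> bool:
    """True if a default-language tag like 'ru-RU' or 'uk_UA' is on the skip list."""
    primary = lang_tag.replace("_", "-").split("-")[0].lower()
    return primary in CIS_LANGS


def machine_default_language() -> str:
    """Best-effort default language of the current machine ('C' if unset)."""
    lang, _encoding = locale.getdefaultlocale()
    return lang or "C"
```

because the test is on the default language rather than installed keyboards, simply adding a cyrillic keyboard layout does not trip it, a point brian krebs's readers discovered the hard way.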
the colonial pipeline and jbs attacks, not to mention the multiple hospital chains that have been disrupted, show that it is just a matter of time before a ransomware attack has a major impact on us gdp (and incidentally on us citizens). in this light, the idea that nsa should stockpile -days for possible future use is counter-productive. at any time -days in the hoard might leak, or be independently discovered. in the past the fallout from this was limited, but no longer; they might be used for a major ransomware attack. is the national security agency's mission to secure the united states, or to have fun playing team america: world police in cyberspace? unless they are immediately required for a specific operation, the nsa should disclose -days it discovers or purchases to the software vendor, and once patched, add them to the kit it uses to run its "ransomware" program. to do less is to place the us economy at risk. ps: david sanger reported tuesday that "russia's most aggressive ransomware group disappeared. it's unclear who disabled them.": just days after president biden demanded that president vladimir v. putin of russia shut down ransomware groups attacking american targets, the most aggressive of the groups suddenly went off-line early tuesday. ... a third theory is that revil decided that the heat was too intense, and took the sites down itself to avoid becoming caught in the crossfire between the american and russian presidents. that is what another russian-based group, darkside, did after the ransomware attack on colonial pipeline, ... but many experts think that darkside's going-out-of-business move was nothing but digital theater, and that all of the group's key ransomware talent will reassemble under a different name. this is by far the most likely explanation for revil's disappearance, leaving victims unable to pay.
the same day, bogdan botezatu and radu tudorica reported that trickbot activity increases; new vnc module on the radar: the trickbot group, which has infected millions of computers worldwide, has recently played an active role in disseminating ransomware. we have been reporting on notable developments in trickbot's lifecycle, with highlights including the analysis in of one of its modules used to bruteforce rdp connections and an analysis of its new c infrastructure in the wake of the massive crackdown in october . despite the takedown attempt, trickbot is more active than ever. in may , our systems started to pick up an updated version of the vncdll module that trickbot uses against select high-profile targets. as regards the "massive crackdown", ravie lakshmanan notes: the botnet has since survived two takedown attempts by microsoft and the u.s. cyber command. update: via barry ritholtz we find this evidence of willie sutton's law in action. when asked "why do you rob banks?", sutton replied "because that's where the money is." and, thanks to jack cable, there's now ransomwhe.re, which tracks ransomware payments in real time. it suffers a bit from incomplete data. because it depends upon tracking bitcoin addresses, it will miss the increasing proportion of demands that insist on monero. posted by david. at : am labels: malware, security comments: unknown said... companies could also stop creating monoculture networks that are easy to manage and also easy to compromise. when every device is a domain joined windows machine running some low level centralized remote management system, it's just a matter of time before you are completely owned. this is the "encryption backdoor" problem in computer science (aka "exceptional access systems"). it is impossible to build an exceptional access system and then ensure it is only used by good people to do good things. july , at : am david. said...
lorenzo franceschi-bicchierai reports on today's 0-day news in mysterious israeli spyware vendor's windows zero-days caught in the wild: "citizen lab concluded that the malware and the zero-days were developed by candiru, a mysterious israel-based spyware vendor that offers "high-end cyber intelligence platform dedicated to infiltrate pc computers, networks, mobile handsets," according to a document seen by haaretz. candiru was first outed by the israeli newspaper, and has since gotten some attention from cybersecurity companies such as kaspersky lab." july , at : pm alwyn schoeman said... how could we exploit the russian locale exception? july , at : pm unknown said... the correct response is to copy the law that the eu passed that can fine companies up to % of revenue for lax cyber security. equifax, solarwinds and kaseya all had lax security that caused untold damage to businesses and the public. i do not support letting cyber criminals get away without punishment, but i do support holding companies liable for gross negligence in cyber security. july , at : pm david. said... brian krebs originally suggested trying this one weird trick russian hackers hate by installing russian keyboard support, but the bad guys quickly figured out that what they needed to test was not the keyboard support but the default language. so unless you want your machine to talk to you in cyrillic, forget it. hat tip to bruce schneier. july , at : am david. said... i can't find anything that says the eu can impose a fine of % of revenue. when the uk implemented the eu regulations (my emphasis): "some of these organisations could be liable for fines of up to £ million - or four per cent of global turnover - if lax cyber security standards result in loss of service under the government's proposals to implement the eu's network and information systems (nis) directive from may ."
£ m is chickenfeed compared to the damage, this only applies to critical infrastructure, and i don't see any evidence that the eu has levied any such fines. july , at : am david. said... all i could find for the us was this: "the u.s department of health and human services has fined fresenius medical care holdings inc., a major supplier of medical equipment, $ . million for five separate data breaches that occurred in ." a derisory fine, arriving years late, for losing control of physical devices containing health information. not exactly impressive. july , at : am hmtksteve said... how about keeping an air gap between critical systems and the internet? when companies used dedicated data circuits this kind of thing did not happen. too many accountants have veto powers over IT and it shows. july , at : pm static said... the government has no business patronizing anyone to keep their IT secure any more than it has business telling them to lock their front door. july , at : am david. said... static, tell that to the fbi. alex hern reported in april that fbi hacks vulnerable us computers to fix malicious malware: "the fbi has been hacking into the computers of us companies running insecure versions of microsoft software in order to fix them, the us department of justice has announced. the operation, approved by a federal court, involved the fbi hacking into "hundreds" of vulnerable computers to remove malware placed there by an earlier malicious hacking campaign, which microsoft blamed on a chinese hacking group known as hafnium." july , at : am david. said... the washington post's gerrit de vynck, rachel lerman, ellen nakashima and chris alcantara have an excellent explainer from days ago entitled the anatomy of a ransomware attack. july , at : am david. said... and the class action lawyers get in on the ransomware act.
in first came the ransomware attacks, now come the lawsuits, gerrit de vynck reports on eddie darwich, a pioneering plaintiff: "now he's suing colonial pipeline over those lost sales, accusing it of lax security. he and his lawyers are hoping to also represent the hundreds of other small gas stations that were hurt by the hack. it's just one of several class-action lawsuits that are popping up in the wake of high-profile ransomware attacks. another lawsuit filed against colonial in georgia in may seeks damages for consumers who had to pay higher gas prices. a third is in the works, with law firm chimicles schwartz kriner & donaldson-smith llp pursuing a similar effort. and colonial isn't the only company being sued. san diego-based hospital system scripps health is facing class-action lawsuits stemming from a ransomware attack in april." july , at : pm david. said... charlie osborne's updated kaseya ransomware attack faq: what we know now is a useful overview. july , at : am david. said... in the wake of major attacks, ransomware groups avaddon, darkside and revil went dark. now dan goodin reports that they may be re-branding themselves in haron and blackmatter are the latest groups to crash the ransomware party: "both groups say they are aiming for big-game targets, meaning corporations or other large businesses with the pockets to pay ransoms in the millions of dollars. ... as s2w lab pointed out, the layout, organization, and appearance of [haron's] site are almost identical to those for avaddon, the ransomware group that went dark in june after sending a master decryption key to bleepingcomputer that victims could use to recover their data. ... recorded future, the record, and security firm flashpoint, which also covered the emergence of blackmatter, have questioned if the group has connections to either darkside or revil." july , at : am david. said... just a reminder that the ransomware threat has been ignored for a long time.
nearly five years ago i wrote asymmetric warfare. the first comment was: "ransomware is another example. sf muni has been unable to collect fares for days because their systems fell victim to ransomware. the costs to mount this attack are insignificant in comparison to the costs imposed on the victim. quinn norton reports: 'the predictions for this year from some analysis are that we'll hit seventy-five billion in ransomware alone by the end of the year. some estimates say that the loss globally could be well over a trillion this year, but it's hard to say what a real number is.'" august , at : pm david. said... catalin cimpanu reports that accenture downplays ransomware attack as lockbit gang leaks corporate data: "fortune company accenture has fallen victim to a ransomware attack but said today the incident did not impact its operations and it has already restored affected systems from backups. news of the attack became public earlier this morning when the company's name was listed on the dark web blog of the lockbit ransomware cartel. the lockbit gang claimed it gained access to the company's network and was preparing to leak files stolen from accenture's servers at : : gmt. ... just before this article was published, the countdown timer on the lockbit gang's leak site also reached zero. following this event, the lockbit gang leaked accenture's files, which, following a cursory review, appeared to include brochures for accenture products, employee training courses, and various marketing materials. no sensitive information appeared to be included in the leaked files." august , at : pm post a comment newer post older post home subscribe to: post comments (atom) blog rules posts and comments are copyright of their respective authors who, by posting or commenting, license their work under a creative commons attribution-share alike . united states license. off-topic or unsuitable comments will be deleted.
lockss system has permission to collect, preserve, and serve this archival unit. simple theme. powered by blogger. dshr's blog: economic model of long-term storage dshr's blog i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation. tuesday, august , economic model of long-term storage [graph: cost vs. kryder rate] as i wrote last month in patting myself on the back, i started working on economic models of long-term storage six years ago. i got a small amount of funding from the library of congress; when that ran out i transferred the work to students at uc santa cruz's storage systems research center. this work was published here and in later papers (see here). what i wanted was a rough-and-ready web page that would allow interested people to play "what if" games. what the students wanted was something academically respectable enough to get them credit. so the models accumulated lots of interesting details. but the details weren't actually useful. the extra realism they provided was swamped by the uncertainty from the "known unknowns" of the future kryder and interest rates. so i never got the rough-and-ready web page. below the fold, i bring the story up-to-date and point to a little web site that may be useful. earlier this year the internet archive asked me to update the numbers we had been working with all those years ago. and, being retired with time on my hands (not!), i decided instead to start again.
i built an extremely simple version of my original economic model, eliminating all the details that weren't relevant to the internet archive and everything else that was too complex to implement at short notice, and put it behind an equally simple web site running on a raspberry pi (so please don't beat up on it).

what this model does

for a single terabyte of data, the model computes the endowment, the money which, deposited with the terabyte and invested at interest, would suffice to pay for the storage of the data "for ever" (actually years in this model).

assumptions

these are the less than totally realistic assumptions underlying the model:
- drive cost is constant, although each year the same cost buys drives with more capacity, as given by the kryder rate.
- the interest rate and the kryder rate do not vary for the duration.
- the storage infrastructure consists of multiple racks, containing multiple slots for drives, i.e. the terabyte occupies a very small fraction of the infrastructure.
- the number of drive slots per rack is constant.
- ingesting the terabyte into the infrastructure incurs no cost.
- the failure rate of drives is constant and known in advance, so that exactly the right number of spare drives is included in each purchase to ensure that failed drives can be replaced by an identical drive.
- drives are replaced after their specified life although they are still working.
some of these assumptions may get removed in the future (see below).

parameters

this model's adjustable parameters are as follows.

media cost factors
- drivecost: the initial cost per drive, assumed constant in real dollars.
- driveterabyte: the initial number of tb of useful data per drive (i.e. excluding overhead).
- kryderrate: the annual percentage by which driveterabyte increases.
- drivelife: working drives are replaced after this many years.
- drivefailrate: percentage of drives that fail each year.
infrastructure cost factors
- slotcost: the initial non-media cost of a rack (servers, networking, etc) divided by the number of drive slots.
- slotrate: the annual percentage by which slotcost decreases in real terms.
- slotlife: racks are replaced after this many years.

running cost factors
- slotcostperyear: the initial running cost per year (labor, power, etc) divided by the number of drive slots.
- laborpowerrate: the annual percentage by which slotcostperyear increases in real terms.
- replicationfactor: the number of copies. this need not be an integer, to account for erasure coding.

financial factors
- discountrate: the annual real interest obtained by investing the remaining endowment.

defaults

the defaults are my invention for a rack full of tb drives. they should not be construed as representing the reality of your storage infrastructure. if you want to use the output of this model, for example for budgeting purposes, you need to determine your own values for the various parameters.

default values (parameter: units)
- drivecost: initial $
- driveterabyte: usable tb per drive
- kryderrate: % per year
- drivelife: years
- drivefailrate: % per year
- slotcost: initial $
- slotrate: % per year
- slotlife: years
- slotcostperyear: initial $ per year
- laborpowerrate: % per year
- discountrate: % per year
- replicationfactor: # of copies

unlike the kryderrate and the slotrate, the laborpowerrate reflects that the real cost of staff increases over time. of course, the capacity of the slots is typically increasing faster than the laborpowerrate, so the per-terabyte cost from the laborpowerrate still decreases over time. nevertheless, the endowment calculated is quite sensitive to the value of the laborpowerrate.

calculation

the model works through the duration year by year. each year it figures out the payments needed to keep the terabyte stored, including running costs and equipment purchases.
it then uses the discountrate to figure out how much would have to have been invested at the start to supply that amount at that time. in other words, it computes the net present value of each year's expenditure, and sums them to compute the endowment needed to pay for storage over the full duration.

usage

[graph: sample model output]

the web site provides two ways to use the model:
- provide a set of parameters including a discountrate and a kryderrate, and compute the model's estimate of the endowment.
- provide a set of parameters excluding the discountrate and the kryderrate, and draw a graph of how the model's estimate of the endowment varies with the discountrate and kryderrate over reasonable ranges of these two parameters.
the sample graph shows why adding lots of detail to the model isn't really useful: the effects of the unknowable future discountrate and kryderrate parameters are so large.

code

the code is here under an apache 2.0 license.

what this model doesn't (yet) do

if i can find the time, some of these deficiencies in the model may be removed:
- unlike earlier published research, this model ignores the cost of ingesting the data in the first place, and accessing it later. experience suggests the following rule of thumb: ingest is half the total lifetime cost, storage is one-third, and access is one-sixth. thus a reasonable estimate of the total preservation cost of a terabyte is three times the result of this model.
- the model assumes that the parameters are constant through time. historically, interest rates, the kryder rate, labor costs, etc. have varied, and thus should be modeled using monte carlo techniques and a probability distribution for each such parameter. it is possible for real interest rates to go negative, for disk cost per terabyte to spike upwards, as it did after the thai floods, and so on. these low-probability events can have a large effect on the endowment needed, but are excluded from this model.
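the year-by-year net-present-value calculation described in the calculation section can be sketched in a few lines of ruby. this is an illustrative sketch under the post's stated assumptions, not the code from the linked github repository; the parameter names follow the post, while the 100-year default duration and the spare-drive cost factor are this sketch's assumptions.

```ruby
# sketch of the endowment model described in the post: for one terabyte,
# sum the net present value of each year's storage costs. rates are
# percent per year. illustrative only, not the repository's code.
def endowment(drive_cost:, drive_terabyte:, kryder_rate:, drive_life:,
              drive_fail_rate:, slot_cost:, slot_rate:, slot_life:,
              slot_cost_per_year:, labor_power_rate:, discount_rate:,
              replication_factor: 1.0, years: 100)
  (0...years).sum do |year|
    # capacity per drive grows at the kryder rate, so the fraction of a
    # drive (and of a rack slot) one terabyte occupies shrinks each year
    fraction = 1.0 / (drive_terabyte * (1 + kryder_rate / 100.0)**year)
    cost = 0.0
    # media: drives replaced every drive_life years, with enough spares
    # bought up front to cover the (constant, known) failure rate
    if (year % drive_life).zero?
      cost += drive_cost * (1 + drive_fail_rate / 100.0 * drive_life) * fraction
    end
    # infrastructure: racks replaced every slot_life years, with the
    # per-slot cost declining at slot_rate in real terms
    if (year % slot_life).zero?
      cost += slot_cost * (1 - slot_rate / 100.0)**year * fraction
    end
    # running costs (labor, power) rise in real terms at labor_power_rate
    cost += slot_cost_per_year * (1 + labor_power_rate / 100.0)**year * fraction
    # discount this year's spend back to its present value and accumulate
    replication_factor * cost / (1 + discount_rate / 100.0)**year
  end
end
```

with all rates at zero the endowment is simply the undiscounted sum of the yearly costs; raising the discountrate lowers it, and raising the laborpowerrate raises it. per the rule of thumb above, the total preservation cost would be roughly three times this storage-only figure.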
fixing this constant-parameters assumption with monte carlo simulation needs more cpu power than a raspberry pi.
- there are a number of different possible policies for handling the inevitable drive failures, and different ways to model each of them. this model assumes that it is possible to predict, at the time a batch of drives is purchased, what proportion of them will fail, and inflates the purchase cost by that factor. this models the policy of buying extra drives so that failures can be replaced by the same drive model.
- the model assumes that drives are replaced after drivelife years even though they are still working. continuing to use the drives beyond this can have significant effects on the endowment (see this paper).

posted by david. at : am labels: storage costs

comments:

unknown said... nice post, and model, even if it can't be predictive. might want to throw in inflation. the effect might be large, given the decade average in the us hasn't been lower than % for the last years. revised code, assumed to be buggy, here. rick august , at : pm

david. said... rick, please read the post more carefully: "constant in real dollars" and: "annual real interest". the model works in real dollars, that is, after adjusting for inflation. in other words, your idea of the average future rate of inflation needs to be subtracted from your idea of the kryderrate, slotrate and laborpowerrate in nominal dollars. adding an inflation parameter would be double-counting. august , at : pm

unknown said... oops. apologies. i should've caught that by inference from your straw-man % discount rate, as well. august , at : pm

david. said... i want to use the pi for something else, so i have taken the model down. if you need to use the model please install it on your own hardware from github: https://github.com/dshrosenthal/economicmodel — if this isn't possible, post a comment and i'll see if i can resurrect the model.
february , at : pm
bibliographic wilderness

logging uri query params with lograge

the lograge gem for taming rails logs by default will log the path component of the uri, but leave out the query string/query params.
for instance, perhaps you have a url to your app /search?q=libraries. lograge will log something like: method=get path=/search format=html ... the q=libraries part is completely left out of the log. i kinda ... continue reading logging uri query params with lograge

notes on cloudfront in front of rails assets on heroku, with cors

heroku really recommends using a cdn in front of your rails app static assets, which, unlike in non-heroku circumstances where a web server like nginx might be taking care of it, otherwise on heroku will be served directly by your rails app, consuming limited/expensive dyno resources. after evaluating a variety of options ... continue reading notes on cloudfront in front of rails assets on heroku, with cors

activesupport::cache via activerecord (note to self)

there are a variety of things written to use flexible back-end key/value datastores via the activesupport::cache api. for instance, say, activejob-status. i have sometimes in the past wanted to be able to use such things storing the data in an rdbms, say via activerecord. make a table for it. sure, this won't be nearly as ... continue reading activesupport::cache via activerecord (note to self)

heroku release phase, rails db:migrate, and command failure

if you use capistrano to deploy a rails app, it will typically run a rails db:migrate with every deploy, to apply any database schema changes. if you are deploying to heroku you might want to do the same thing. the heroku "release phase" feature makes this possible. (introduced in , the release phase feature is ... continue reading heroku release phase, rails db:migrate, and command failure
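returning to the lograge excerpt above, which stops just short of the fix: lograge's custom_options hook is its documented extension point for adding fields to each log line. the sketch below is illustrative; custom_options and the :params key in the controller event payload are lograge/rails behavior, while extracting the lambda to a constant and the initializer filename are this sketch's choices.

```ruby
# sketch: merge query params back into lograge's one-line log output.
# lograge invokes config.lograge.custom_options with the instrumentation
# event for each request, and merges the returned hash into the log line.
ADD_QUERY_PARAMS = lambda do |event|
  # the controller payload's params include routing keys ("controller",
  # "action") that lograge already logs, so filter them out
  interesting = event.payload[:params].reject do |key, _value|
    %w[controller action].include?(key)
  end
  { params: interesting }
end

# in a rails app this would be wired up in an initializer, e.g.
# config/initializers/lograge.rb (hypothetical filename):
#
#   Rails.application.configure do
#     config.lograge.enabled = true
#     config.lograge.custom_options = ADD_QUERY_PARAMS
#   end
```

with this in place, a request to /search?q=libraries would log a line including the q=libraries parameter alongside method and path.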
i have realized that the open source projects i am most proud of are a few that have existed for years now, increasing in popularity, with very little maintenance required. including traject and bento_search. while community aspects matter for open source sustainability, ... continue reading code that lasts: sustainable and usable open source code

product management

in my career working in the academic sector, i have realized that one thing that is often missing from in-house software development is "product management." but what does that mean exactly? you don't know it's missing if you don't even realize it's a thing and people can use different terms to mean different roles/responsibilities. basically, ... continue reading product management

rails auto-scaling on heroku

we are investigating moving our medium-small-ish rails app to heroku. we looked at both the rails autoscale add-on available on heroku marketplace, and the hirefire.io service which is not listed on heroku marketplace and i almost didn't realize it existed. i guess hirefire.io doesn't have any kind of a partnership with heroku, but still uses ... continue reading rails auto-scaling on heroku

managed solr saas options

i was recently looking for managed solr "software-as-a-service" (saas) options, and had trouble figuring out what was out there. so i figured i'd share what i learned. even though my knowledge here is far from exhaustive, and i have only looked seriously at one of the ones i found. the only managed solr options i ... continue reading managed solr saas options

gem authors, check your release sizes

most gems should probably be a couple hundred kb at most. i'm talking about the package actually stored in and downloaded from rubygems by an app using the gem. after all, source code is just text, and it doesn't take up much space. ok, maybe some gems have a couple images in there.
but if ... continue reading gem authors, check your release sizes

every time you decide to solve a problem with code...

every time you decide to solve a problem with code, you are committing part of your future capacity to maintaining and operating that code. software is never done. software is drowning the world. (james abley)

updating solrcloud configuration in ruby

we have an app that uses solr. we currently run a solr in legacy "not cloud" mode. our solr configuration directory is on disk on the solr server, and it's up to our processes to get our desired solr configuration there, and to update it when it changes. we are in the process of moving ... continue reading updating solrcloud configuration in ruby

are you talking to heroku redis in cleartext or ssl?

in a "typical" redis installation, you might be talking to redis on localhost or on a private network, and clients typically talk to redis in cleartext. redis doesn't even natively support communications over ssl. (or maybe it does now with redis ?) however, the heroku redis add-on (the one from heroku itself) supports ssl connections via "stunnel", ... continue reading are you talking to heroku redis in cleartext or ssl?

comparing performance of a rails app on different heroku formations

i develop a "digital collections" or "asset management" app, which manages and makes digitized historical objects and their descriptions available to the public, from the collections here at the science history institute. the app receives a relatively low level of traffic (according to google analytics, around k pageviews a month), although we want it to be ... continue reading comparing performance of a rails app on different heroku formations

deep dive: moving ruby projects from travis to github actions for ci

so this is one of my super wordy posts, if that's not your thing abort now, but some people like them.
we'll start with a bit of context, then get to some detailed looks at github actions features i used to replace my travis builds, with example config files and examination of options available. for ... continue reading deep dive: moving ruby projects from travis to github actions for ci

unexpected performance characteristics when exploring migrating a rails app to heroku

i work at a small non-profit research institute. i work on a rails app that is a "digital collections" or "digital asset management" app. basically it manages and provides access (public as well as internal) to lots of files and description about those files, mostly images. it's currently deployed on some self-managed amazon ec2 instances ... continue reading unexpected performance characteristics when exploring migrating a rails app to heroku

faster_s3_url: optimized s3 url generation in ruby

subsequent to my previous investigation about s3 url generation performance, i ended up writing a gem with optimized implementations of s3 url generation. github: faster_s3_url. it has no dependencies (not even aws-sdk). it can speed up both public and presigned url generation by around an order of magnitude. in benchmarks on my macbook compared ... continue reading faster_s3_url: optimized s3 url generation in ruby

delete all s3 key versions with ruby aws sdk v3

if your s3 bucket is versioned, then deleting an object from s3 will leave a previous version there, as a sort of undo history. you may have a "noncurrent expiration lifecycle policy" set which will delete the old versions after so many days, but within that window, they are there. what if you were deleting ... continue reading delete all s3 key versions with ruby aws sdk v3

github actions tutorial for ruby ci on drifting ruby

i've been using travis for free automated testing ("continuous integration", ci) on my open source projects for a long time. it works pretty well.
but it's got some little annoyances here and there, including with github integration, that i don't really expect to get fixed after its acquisition by private equity. they also seem to ... continue reading github actions tutorial for ruby ci on drifting ruby

more benchmarking optimized s3 presigned_url generation

in a recent post, i explored profiling and optimizing s3 presigned_url generation in ruby to be much faster. in that post, i got down to using an aws::sigv4::signer instance from the aws sdk, but wondered if there was a bunch more optimization to be done within that black box. julik posted a comment on that ... continue reading more benchmarking optimized s3 presigned_url generation

delivery patterns for non-public resources hosted on s3

i work at the science history institute on our digital collections app (written in rails), which is kind of a "digital asset management" app combined with a public catalog of our collection. we store many high-resolution tiff images that can be mb+ each, as well as, currently, a handful of pdfs and audio files. we ... continue reading delivery patterns for non-public resources hosted on s3

learning (lib)tech: stories from my life as a technologist

reflection: my third year at gitlab and becoming a non-manager leader

wow, years at gitlab. since i left teaching, because almost all my jobs were contracts, i haven't been anywhere for more than years, so i find it interesting that my longest term is not only post-librarian-positions, but at a startup! year was a full-on pandemic year and it was a busy one. due to the travel restrictions, i took less vacation than in previous years, and i'll be trying to make up for that a little by taking this week off.

ubc ischool career talk series: journey from libtech to tech

the ubc ischool reached out to me recently asking me to talk about my path from getting my library degree to ending up working in a tech company.
Below is the script for my portion of the talk, along with a transcription of the questions I answered. Context: to provide a bit of context (and … Continue reading "UBC iSchool career talk series: Journey from libtech to tech" Choosing not to go into management (again) Often, to move up and get higher pay, you have to become a manager, but not everyone is suited to become a manager, and sometimes, given the preference, it's not what someone wants to do. Thankfully at GitLab, in every engineering team including Support, we have two tracks: technical (individual contributor), and management. Progression … Continue reading "Choosing not to go into management (again)" Prioritization in Support: Tickets, Slack, issues, and more I mentioned in my GitLab reflection that prioritization has been quite different working in Support compared to other previous work I've done. In most of my previous work, I've had to take "desk shifts", but those are discrete, where you're focused on providing customer service during that period of time and you can focus on … Continue reading "Prioritization in Support: Tickets, Slack, issues, and more" Reflection part 2: My second year at GitLab and on becoming Senior again This reflection is a direct continuation of part 1 of my time at GitLab so far. If you haven't, please read the first part before beginning this one. Becoming an engineer: the more time I spent working in Support, the more I realized that the job was much more technical than I originally … Continue reading "Reflection part 2: My second year at GitLab and on becoming Senior again" Reflection part 1: My first year at GitLab and becoming Senior About a year ago, I wrote a reflection on Summit and Contribute, our all-staff events, and later that year, wrote a series of posts on the GitLab values and culture from my own perspective.
There is a lot that I mention in the blog post series and I'll try not to repeat myself (too … Continue reading "Reflection part 1: My first year at GitLab and becoming Senior" Is blog reading dead? There was a bit more context to the question, but a friend recently asked me: what do you think? Is blogging dead? I think blogging the way it used to work is (mostly) dead. Back in the day, we had a bunch of blogs and people who subscribed to them via email and RSS feeds. … Continue reading "Is blog reading dead?" Working remotely at home as a remote worker during a pandemic I'm glad that I still have a job, that my life isn't wholly impacted by the pandemic we're in, but to say that nothing is different just because I was already a remote worker would be wrong. The effect the pandemic is having on everyone around you affects your life. It seems obvious to … Continue reading "Working remotely at home as a remote worker during a pandemic" Code4Lib BC lightning talk notes: Day 2 Code4Lib BC day 2 lightning talk notes! Code club for adults/seniors - Dethe Elza, Richmond Public Library, digital services technician: started code clubs some years ago; used to be called Code and Coffee, a chain event that got little attendance; had code clubs for kids and teens, so started one for adults and seniors, for people who have done … Continue reading "Code4Lib BC lightning talk notes: Day 2" Code4Lib BC lightning talk notes: Day 1 Code4Lib BC day 1 lightning talk notes!
Scraping index pages and VuFind implementation - Louise Brittain Boisvert, Systems Librarian at the Legislative Library. Collection development policy: support legislators and staff; receive or collect publications, many of them digital but also some digitized (mostly PDF, but others), accessible via link in MARC record. Previously, would create an index page … Continue reading "Code4Lib BC lightning talk notes: Day 1" The Open Library Blog A web page for every book Open Library tags explained: for readers seeking buried treasure As part of an open-source project, the Open Library blog has a growing number of contributors: from librarians and developers to designers, researchers, and book lovers. Each contributor writes from their perspective, sharing contributions they're making to the Open Library catalog. As such, the Open Library blog has a versatile tagging system to help patrons […] Introducing the Open Library Explorer Try it here! If you like it, share it. Bringing years of librarian knowledge to life, by Nick Norman with Drini Cami & Mek. At the Library Leaders Forum (demo), Open Library unveiled the beta for what it's calling the Library Explorer: an immersive interface which powerfully recreates and enhances the experience of navigating […] Importing your Goodreads & accessing them with Open Library's APIs By Mek. Today Joe Alcorn, founder of Readng, published an article (https://joealcorn.co.uk/blog/ /goodreads-retiring-api) sharing news with readers that Amazon's Goodreads service is in the process of retiring their developer APIs, with an effective start date of last Tuesday, December 8th, 2020.
The topic stirred discussion among developers and book lovers alike, making the front page of the […] On bookstores, libraries & archives in the digital age The following was a guest post by Brewster Kahle on Against the Grain (ATG): Linking publishers, vendors, & librarians. By Brewster Kahle, Founder & Digital Librarian, Internet Archive. Some years back, I was honored to give a keynote at the meeting of the Society of American Archivists, when the president of the society presented me with a […] Amplifying the voices behind books with the power of data Exploring how Open Library uses author data to help readers move from imagination to impact, by Nick Norman, edited by Mek & Drini. According to René Descartes, a creative mathematician, "the reading of all good books is like a conversation with the finest [people] of past centuries." If that's true, then who are some of […] Giacomo Cignoni: My internship at the Internet Archive This summer, Open Library and the Internet Archive took part in Google Summer of Code (GSoC), a Google initiative to help students gain coding experience by contributing to open source projects. I was lucky enough to mentor Giacomo while he worked on improving our BookReader experience and infrastructure. We have invited Giacomo to write a […] Google Summer of Code: Adoption by book lovers By Tabish Shaikh & Mek. OpenLibrary.org, the world's best-kept library secret: let's make it easier for book lovers to discover and get started with Open Library. Hi, my name is Tabish Shaikh, and this summer I participated in the Google Summer of Code program with Open Library to develop improvements which will help book lovers discover […] Open Library for language learners By Guyrandy Jean-Gilles. A quick browse through the app store and aspiring language learners will find themselves swimming in useful programs.
But for experienced linguaphiles, the never-ending challenge is finding enough raw content and media to consume in their adopted tongue. Open Library can help. Earlier this year, Open Library added reading levels to […] Meet the librarians of Open Library By Lisa Seaberg. Are you a book lover looking to contribute to a warm, inclusive library community? We'd love to work with you: learn more about volunteering @ Open Library. Behind the scenes of Open Library is a whole team of developers, data scientists, outreach experts, and librarians working together to make Open Library better […] Re-thinking Open Library's book pages By Mek Karpeles, Tabish Shaikh. We've redesigned our book pages: before → after. Please share your feedback with us. A web page for every book: this is the mission of Open Library, a free, inclusive, online digital library catalog which helps readers find information about any book ever published. Millions of books in Open Library's catalog […] The Thingology Blog New Syndetics Unbound feature: Mark and boost electronic resources ProQuest and LibraryThing have just introduced a major new feature to our catalog-enrichment suite, Syndetics Unbound, to meet the needs of libraries during the COVID-19 crisis. Our friends at ProQuest blogged about it briefly on the ProQuest blog. This blog post goes into greater detail about what we did, how we did it, and what […] Introducing Syndetics Unbound Short version: today we're going public with a new product for libraries, jointly developed by LibraryThing and ProQuest. It's called Syndetics Unbound, and it makes library catalogs better, with catalog enrichments that provide information about each item, and jumping-off points for exploring the catalog. To see it in action, check out the Hartford Public Library […] ALAMW in Boston (and free passes)! Abby and KJ will be at ALA Midwinter in Boston this weekend, showing off LibraryThing for Libraries.
Since the conference is so close to LibraryThing headquarters, chances are good that a few other LT staff members may appear, as well! Visit us. Stop by our booth to meet Abby & KJ (and potential mystery guests!), […] For ALA: Three free OPAC enhancements For a limited time, LibraryThing for Libraries (LTFL) is offering three of its signature enhancements for free! There are no strings attached. We want people to see how LibraryThing for Libraries can improve your catalog. Check Library: the Check Library button is a "bookmarklet" that allows patrons to check if your library has a book […] ALA in San Francisco (free passes) Our booth. But this is Kate, not Tim or Abby. She had the baby. Tim and I are headed to San Francisco this weekend for the ALA Annual conference. Visit us. Stop by our booth to talk to us, get a demo, and learn about all the new and fun things we're up to with […] New "More Like This" for LibraryThing for Libraries We've just released "More Like This," a major upgrade to LibraryThing for Libraries' "Similar Items" recommendations. The upgrade is free and automatic for all current subscribers to the LibraryThing for Libraries catalog enhancement package. It adds several new categories of recommendations, as well as new features. We've got text about it below, but here's a short […] Subjects and the Ship of Theseus I thought I might take a break to post an amusing photo of something I wrote out today: the photo is a first draft of a database schema for a revamp of how LibraryThing will do library subjects. All told, it has a lot of tables. Gulp. About eight of the tables do what a good cataloging […] LibraryThing Recommends in BiblioCommons Does your library use BiblioCommons as its catalog? LibraryThing and BiblioCommons now work together to give you high-quality reading recommendations in your BiblioCommons catalog. You can see some examples here. Look for "LibraryThing Recommends" on the right side.
Not That Kind of Girl (Daniel Boone Regional Library), Carthage Must Be Destroyed (Ottawa Public Library), the […] New: Annotations for Book Display Widgets Our Book Display Widgets is getting adopted by more and more libraries, and we're busy making it better and better. Last week we introduced Easy Share. This week we're rolling out another improvement: annotations! Book Display Widgets is the ultimate tool for libraries to create automatic or hand-picked virtual book displays for their home page, blog, […] Send us a programmer, win $1,000 in books We just posted a new job post: Library Developer at LibraryThing (telecommute). To sweeten the deal, we are offering $1,000 worth of books to the person who finds them. That's a lot of books. Rules! You get a $1,000 gift certificate to the local, chain or online bookseller of your choice. To qualify, you […] On Stake | Ethereum Foundation Blog On Stake, posted by Vitalik Buterin in July 2014 (Research & Development). The topic of mining centralization has been a very important one over the past few weeks. GHash.io, the Bitcoin network's largest mining pool, has for the past month directed a large share of the Bitcoin network's hashpower, and two weeks ago briefly spiked over 50%, theoretically giving it monopoly control over the Bitcoin network.
Although miners quickly left the pool and reduced its share of hashpower, it's clear that the problem is not solved. At the same time, ASICs threaten to centralize mining even further. One approach to solving the problem is the one I advocated in my previous post: create a mining algorithm that is guaranteed to remain CPU-friendly in the long term. Another, however, is to abolish mining entirely, and replace it with a new model for seeking consensus. The primary second contender to date has been a strategy called "proof of stake", the intuition behind which is as follows. In a traditional proof-of-work blockchain, miners "vote" on which transactions came at what time with their CPU power, and the more CPU power you have, the proportionately larger your influence is. In proof of stake, the system follows a similar but different paradigm: stakeholders vote with their dollars (or rather, the internal currency of the particular system). In terms of how this works technically, the simplest setup is a model that has been called the "simulated mining rig": essentially, every account has a certain chance per second of generating a valid block, much like a piece of mining hardware, and this chance is proportional to the account's balance. The simplest formula for this is: sha256(prevhash + address + timestamp) <= 2^256 * balance / diff, where prevhash is the hash of the previous block, address is the address of the stake-miner, timestamp is the current Unix time in seconds, balance is the account balance of the stake-miner, and diff is an adjustable global difficulty parameter. If a given account satisfies this inequality at any particular second, it may produce a valid block, giving that account some block reward. Another approach is to use not the balance, but the "coin age" (ie.
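The simulated-mining-rig inequality can be sketched in a few lines of Python. This is a toy illustration of the formula above, not any real client's code; the function name and the byte encoding of the timestamp are my own choices:

```python
import hashlib

def may_produce_block(prevhash: bytes, address: bytes, timestamp: int,
                      balance: int, diff: int) -> bool:
    # The "simulated mining rig" check:
    #   sha256(prevhash + address + timestamp) <= 2**256 * balance / diff
    # An account with k times the balance has k times the chance of
    # passing the check in any given second, like k mining rigs.
    h = hashlib.sha256(prevhash + address + str(timestamp).encode()).digest()
    return int.from_bytes(h, "big") <= (2**256) * balance // diff
```

Each account evaluates this once per second; since extra attempts with the same inputs change nothing, there is no benefit to burning more CPU on it.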
the balance multiplied by the amount of time that the coins have not been touched) as the weighting factor; this guarantees more even returns, but at the expense of potentially much easier collusion attacks, since attackers have the ability to accumulate coin age, and possible superlinearity. For these reasons, I prefer the plain balance-based approach in most cases, and we will use this as our baseline for the rest of this discussion. Other solutions to "proof of X" have been proposed, including excellence, bandwidth, storage and identity, but none are particularly convenient as consensus algorithms; rather, all of these systems have many of the same properties as proof of stake, and are thus best implemented indirectly: by making them purely mechanisms for currency distribution, and then using proof of stake on those distributed coins for the actual consensus. The only exception is perhaps the social-graph-theory-based Ripple, although many cryptocurrency proponents consider such systems to be far too trust-dependent to be considered truly "decentralized"; this point can be debated, but it is best to focus on one topic at a time, and so we will focus on stake. Strengths and weaknesses. If it can be implemented correctly, in theory proof of stake has many advantages. Three in particular stand out. First, it does not waste any significant amount of electricity. Sure, there is a need for stakeholders to keep trying to produce blocks, but no one gains any benefit from making more than one attempt per account per second; hence, the electricity expenditure is comparable to any other non-wasteful internet protocol (eg. BitTorrent). Second, it can arguably provide a much higher level of security.
In proof of work, assuming a liquid market for computing power, the cost of launching a 51% attack is equal to the cost of the computing power of the network over the course of two hours - an amount that, by standard economic principles, is roughly equal to the total sum of block rewards and transaction fees provided in two hours. In proof of stake, the threshold is theoretically much higher: 51% of the entire supply of the currency. Third, depending on the precise algorithm in question, it can potentially allow for much faster blockchains (eg. NXT has one block every few seconds, compared to one per minute for Ethereum and one per ten minutes for Bitcoin). Note that there is one important counterargument that has been made to the second point: if a large entity credibly commits to purchasing 51% of currency units and then using those funds to repeatedly sabotage the network, then the price will fall drastically, making it much easier for that entity to purchase the tokens. This does somewhat mitigate the benefit of stake, although not nearly fatally; an entity that can credibly commit to purchasing 51% of coins is likely also one that can launch 51% attacks against proof of work. However, with the naive proof of stake algorithm described above, there is one serious problem: as some Bitcoin developers describe it, "there is nothing at stake". What that means is this: in the context of a proof-of-work blockchain, if there is an accidental fork, or a deliberate transaction reversal ("double-spend") attempt, and there are two competing forks of the blockchain, then miners have to choose which one they contribute to.
Their three choices are: mine on no chain and get no rewards; mine on chain A and get the reward if chain A wins; or mine on chain B and get the reward if chain B wins. As I have commented in a previous post, note the striking similarity to SchellingCoin/Truthcoin here: you win if you go with what everyone else goes with, except in this case the vote is on the order of transactions, not a numerical (as in SchellingCoin) or binary (as in Truthcoin) datum. The incentive is to support the chain that everyone else supports, forcing rapid convergence, and preventing successful attacks provided that at least 50% of the network is not colluding. In the naive proof of stake algorithm, on the other hand, the choices of whether or not to vote on A and whether or not to vote on B are independent; hence, the optimal strategy is to mine on any fork that you can find. Thus, in order to launch a successful attack, an attacker need only overpower all of the altruists who are willing to vote only on the correct chain. The problem is, unfortunately, somewhat fundamental. Proof of work is nice because the property of hash verification allows the network to be aware of something outside of itself - namely, computing power - and that thing serves as a sort of anchor to ensure some stability. In a naive proof of stake system, however, the only thing that each chain is aware of is itself; hence, one can intuitively see that this makes such systems more flimsy and less stable. However, the above is merely an intuitive argument; it is by no means a mathematical proof that a proof-of-stake system cannot be incentive-compatible and secure, and indeed there are a number of potential ways to get around the issue. The first strategy is the one that is employed in the Slasher algorithm, and it hinges on a simple realization: although, in the case of a fork, chains are not aware of anything in the outside world, they are aware of each other.
Hence, the way the protocol prevents double-mining is this: if you mine a block, the reward is locked up for some number of blocks, and if you also mine on any other chain, then anyone else can submit the block from the other chain into the original chain in order to steal the mining reward. Note, however, that things are not quite so simple, and there is one catch: the miners have to be known in advance. The problem is that if the algorithm given above is used directly, then the issue arises that, using a probabilistic strategy, double-mining becomes very easy to hide. The issue is this: suppose that you have 1% stake, and thus every block there is a 1% chance that you will be able to produce (hereinafter, "sign") it. Now, suppose there is a fork between chain A and chain B, with chain A being the "correct" chain. The "honest" strategy is to try to generate blocks just on A, getting an expected 0.01 A-coins per block. An alternative strategy, however, is to try to generate blocks on both A and B, and if you find a block on both at the same time, then discard B. The payout per block is one A-coin if you get lucky on A only (a 0.99% chance), one B-coin if you get lucky on B only (a 0.99% chance), and one A-coin, but no B-coins, if you get lucky on both; hence, the expected payout is 0.01 A-coins plus 0.0099 B-coins if you double-vote. If the stakeholders that need to sign a particular block are decided in advance, however (ie. specifically, decided before a fork starts), then there is no possibility of having the opportunity to vote on A but not B; you either have the opportunity on both or neither. Hence, the "dishonest" strategy simply collapses into being the same thing as the "honest" strategy.
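The expected payouts in that example can be checked directly. This is a small illustrative script, not part of any protocol: `double_mining_payout` is a hypothetical helper, and it assumes (as the example does) that the signing lotteries on the two chains are independent:

```python
from itertools import product

def double_mining_payout(p: float) -> tuple[float, float]:
    # Expected (A-coins, B-coins) per block for a staker who has
    # probability p of winning the signing lottery on each chain
    # independently, signs on both, and discards the B block when
    # lucky on both (the A reward always takes precedence).
    exp_a = exp_b = 0.0
    for lucky_a, lucky_b in product([True, False], repeat=2):
        prob = (p if lucky_a else 1 - p) * (p if lucky_b else 1 - p)
        if lucky_a:
            exp_a += prob            # always claim the A reward
        if lucky_b and not lucky_a:
            exp_b += prob            # claim B only when A missed
    return exp_a, exp_b
```

With p = 0.01 this gives 0.01 A-coins plus 0.0099 B-coins: the double-voter keeps the full honest payout on A and picks up nearly free B-coins on top, which is exactly why the probabilistic strategy dominates.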
The block signer selection problem. But then, if block signers are decided in advance, another issue arises: if done wrong, block signers could "mine" their blocks, repeatedly trying to create a block with different random data until the resulting block triggers that same signer having the privilege to sign a block again very soon. For example, if the signer for block N+1 was simply chosen from the hash of block N, and an attacker had a share p of the stake, then the attacker could keep rebuilding the block until block N+1 also had the attacker as its signer (ie. an expected 1/p iterations). Over time, the attacker would naturally gain signing privileges on other blocks, and thus eventually come to completely saturate the blockchain with short cycles controlled by himself. Even if the hash of several blocks put together is used, it's possible to manipulate the value. Thus, the question is: how do we determine what the signers for future blocks are going to be? The solution used in Slasher is a secure decentralized random number generator protocol: many parties come in, first submit to the blockchain the hashes of their values, and then submit their values. There is no chance of manipulation this way, because each submitter is bound to submit in the second round the value whose hash they provided in the first round, and in the first round no one has enough information to engage in any manipulation. The player still has a choice of whether or not to participate in the second round, but the two countervailing points are that (1) this is only one bit of freedom, although it becomes greater for large miners that can control multiple accounts, and (2) we can institute a rule that failing to participate causes forfeiture of one's mining privilege (miners in one round choose the miners for a later round, so there is an opportunity to do this if certain miners misbehave during the selection step).
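The two-round scheme is a standard commit-reveal construction. A minimal sketch, assuming SHA-256 commitments and a hash of the concatenated reveals as the shared random number (function names are illustrative, not Slasher's actual code):

```python
import hashlib

def commit(value: bytes) -> bytes:
    # Round 1: each party publishes only the hash of its secret value.
    return hashlib.sha256(value).digest()

def combine_reveals(commitments: list[bytes], reveals: list[bytes]) -> int:
    # Round 2: parties reveal. Each reveal must match its round-1
    # commitment, so no one can change their value after seeing the
    # others' commitments; the only freedom left is whether to reveal.
    if len(commitments) != len(reveals):
        raise ValueError("every commitment needs a matching reveal")
    for c, v in zip(commitments, reveals):
        if hashlib.sha256(v).digest() != c:
            raise ValueError("reveal does not match commitment")
    # The shared random number mixes every revealed value together.
    return int.from_bytes(hashlib.sha256(b"".join(reveals)).digest(), "big")
```

A party that withholds its reveal exercises the "one bit of freedom" mentioned above, which is why the forfeiture rule for non-participation matters.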
Another idea, proposed by Iddo Bentov and others in their "Cryptocurrencies without Proof of Work" paper, is to use something called a "low-influence" function - essentially, a function such that there is only a very low chance that a single actor will be able to change the result by changing their input. A simple example of an LIF over small sets is majority rule; here, because we are trying to pick a random miner, we have a very large set of options to choose from, so majority rule per bit is used (eg. if you want to pick a random miner out of a billion, assign the parties into thirty groups - a billion is about 2^30 - have each group vote on whether their particular bit is zero or one, and then recombine the bits as a binary number at the end). This removes the need for a complicated two-step protocol, allowing it to potentially be done much more quickly and even in parallel, reducing the risk that the pre-chosen stake-miners for some particular block will get together and collude. A third interesting strategy, used by NXT, is to use the addresses of the stake-miners for blocks N and N+1 to choose the miner for block N+2; this by definition gives only one choice for the next miner in each block. Adding a criterion that every miner needs to be locked in for some number of blocks in order to participate prevents sending transactions as a form of double-mining. However, having such rapid stake-miner selection also compromises the nothing-at-stake resistance property due to the probabilistic double-mining problem; this is the reason why clever schemes to make miner determination happen very quickly are ultimately, beyond a certain point, undesirable. Long-range attacks. While the Slasher approach does effectively solve the nothing-at-stake problem against traditional 51% attacks, a problem arises in the case of something called a "long-range attack": instead of an attacker starting mining from ten blocks before the current block, the attacker starts ten thousand blocks ago.
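The per-bit majority rule can be sketched as a toy function. This is illustrative only; the actual low-influence constructions in the Bentov et al. paper are more involved:

```python
def majority_bit_miner(votes_per_bit: list[list[int]]) -> int:
    # Pick a miner index by per-bit majority rule: each group of
    # parties votes 0/1 on one bit of the index, and the bits are
    # recombined into a binary number. A single party flipping its
    # vote changes the outcome only when its group is near a tie,
    # so any one actor has low influence on the final result.
    index = 0
    for bit_pos, votes in enumerate(votes_per_bit):
        bit = 1 if sum(votes) * 2 > len(votes) else 0  # strict majority, ties -> 0
        index |= bit << bit_pos
    return index
```

With thirty groups, the thirty majority bits recombine into a number between 0 and about a billion, selecting one miner.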
In a proof-of-work context, this is silly; it basically means doing thousands of times as much work as is necessary to launch an attack. Here, however, creating a block is nearly computationally free, so it's a reasonable strategy. The reason why it works is that Slasher's process for punishing multi-mining only lasts for a fixed number of blocks, and its process for determining new miners likewise covers only a limited range, so outside the "scope" of that range Slasher functions exactly like the naive proof-of-stake coin. Note that Slasher is still a substantial improvement; in fact, assuming users never change their clients, it can be made fully secure by introducing a rule into each client not to accept forks going back more than a set number of blocks. The problem is, however, what happens when a new user enters the picture. When a new user downloads a proof-of-stake-coin client for the first time, it will see multiple versions of the blockchain: the longest, and therefore legitimate, fork, and many pretenders trying to mine their own chains from the genesis. As described above, proof-of-stake chains are completely self-referential; hence, the client seeing all of these chains has no idea about any surrounding context like which chain came first or which has more value (note: in a hybrid proof-of-stake plus social graph system, the user would get initial blockchain data from a trusted source; this approach is reasonable, but is not fully decentralized). The only thing that the client can see is the allocation in the genesis block, and all of the transactions since that point. Thus, all "pure" proof-of-stake systems are ultimately permanent nobilities where the members of the genesis block allocation always have the ultimate say. No matter what happens ten million blocks down the road, the genesis block members can always come together and launch an alternate fork with an alternate transaction history and have that fork take over.
If you understand this, and you are still okay with pure proof of stake as a concept (the specific reason why you might still be okay is that, if the initial issuance is done right, the "nobility" should still be large enough that it cannot practically collude), then the realization allows for some more imaginative directions in terms of how proof of stake can play out. The simplest idea is to have the members of the genesis block vote on every block, where double-mining is punished by permanent loss of voting power. Note that this system actually solves nothing-at-stake issues completely, since every genesis block holder has a mining privilege that has value forever into the future, so it will never be worth it to double-mine. This system, however, has a finite lifespan - specifically, the maximum life (and interest) span of the genesis signers - and it also gives the nobility a permanent profit-making privilege, and not just voting rights; nevertheless, the existence of the algorithm is encouraging, because it suggests that long-range nothing-at-stake might be fundamentally resolvable. Thus, the challenge is to figure out some way to make sure voting privileges transfer over, while still at the same time maintaining security. Changing incentives. Another approach to solving nothing-at-stake comes at the problem from a completely different angle. The core problem is that, in naive proof of stake, rational individuals will double-vote. The Slasher-like solutions all try to solve the problem by making it impossible to double-vote, or at the very least by heavily punishing such a strategy. But what if there is another approach: specifically, what if we instead remove the incentive to do so? In all of the proof-of-stake systems that I described above, the incentive is obvious and, unfortunately, fundamental: because whoever is producing blocks needs an incentive to participate in the process, they benefit if they include a block in as many forks as possible.
The solution to this conundrum comes from an imaginative, out-of-the-box proposal from Daniel Larimer: transactions as proof of stake. The core idea behind transactions as proof of stake is simple: instead of mining being done by a separate class of individuals, whether computer hardware owners or stakeholders, mining and transaction sending are merged into one. The naive TaPoS algorithm is as follows: every transaction must contain a reference (ie. hash) to the previous transaction; a candidate state-of-the-system is obtained by calculating the result of a resulting transaction chain; and the correct chain among multiple candidates is the one that has either (i) the most coin-days destroyed (ie. number of coins in the account * time since last access), or (ii) the highest transaction fees (these are two different options that we will analyze separately). This algorithm has the property that it is extremely unscalable, breaking down beyond about one transaction every few seconds, and it is not the one that Larimer suggests or the one that will actually be used; rather, it's simply a proof of concept that we will analyze to see if this approach is valid at all. If it is, then there are likely ways to optimize it. Now, let's see what the economics of this are. Suppose that there is a fork, and there are two competing versions of the TaPoS chain. You, as a transaction sender, made a transaction on chain A, and there is now an upcoming chain B. Do you have the incentive to double-mine and include your transaction in chain B as well? The answer is no - in fact, you actually want to double-spend your recipient, so you would not put the transaction on another chain.
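The naive TaPoS fork-choice rule can be sketched as a toy model. The `Tx` fields here are my own simplifications of coin-days-destroyed accounting, not Larimer's actual data structures:

```python
from dataclasses import dataclass

@dataclass
class Tx:
    coins: int       # coins moved by this transaction
    idle_time: int   # time since those coins were last touched
    fee: int         # transaction fee paid

def coin_days_destroyed(chain: list[Tx]) -> int:
    # Option (i): weight a chain by coins * time-since-last-access.
    return sum(t.coins * t.idle_time for t in chain)

def total_fees(chain: list[Tx]) -> int:
    # Option (ii): weight a chain by total transaction fees.
    return sum(t.fee for t in chain)

def best_chain(chains: list[list[Tx]], by_fees: bool = False) -> list[Tx]:
    # Naive TaPoS fork choice: the correct chain is the candidate
    # scoring highest under whichever weighting is in use.
    score = total_fees if by_fees else coin_days_destroyed
    return max(chains, key=score)
```

Overwhelming the system then means out-scoring the rest of the network: spending more fees in option (ii), or bringing more coin-days destroyed to bear in option (i).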
This argument is especially potent in the case of long-range attacks, where you already received your product in exchange for the funds; in the short term, of course, the incentive still exists to make sure the transaction is sent, so senders do have the incentive to double-mine; however, because the worry is strictly time-limited, this can be resolved via a Slasher-like mechanism. One concern is this: given the presence of forks, how easy is it to overwhelm the system? If, for example, there is a fork, and one particular entity wants to double-spend, under what circumstances is that possible? In the transaction-fee version, the requirement is pretty simple: you need to spend more txfees than the rest of the network. This seems weak, but in reality it isn't; we know that in the case of Bitcoin, once the currency supply stops increasing, mining will rely solely on transaction fees, and the mechanics are exactly the same (since the amount that the network will spend on mining will roughly correspond to the total number of txfees being sent in); hence, fee-based TaPoS is in this regard at least as secure as fee-only PoW mining. In the second case, we have a different model: instead of mining with your coins, you are mining with your liquidity. Anyone can 51% attack the system if and only if they have a sufficiently large quantity of coin-days destroyed on them. Hence, the cost of spending a large txfee after the fact is replaced by the cost of sacrificing liquidity before the fact. Cost of liquidity. The discussion around liquidity leads to another important philosophical point: security cannot be cost-free. In any system where there is a block reward, the thing that is the prerequisite for the reward (whether CPU, stake, or something else) cannot be free, since otherwise everyone would be claiming the reward ad infinitum, and in TaPoS transaction senders need to be providing some kind of fee to justify security.
furthermore, whatever resource is used to back the security, whether cpu, currency sacrifices or liquidity sacrifices, the attacker need only get their hands on the same quantity of that resource as the rest of the network. note that, in the case of liquidity sacrifices (which is what naive proof of stake is), the relevant quantity here is actually not % of coins, but rather the privilege of accessing % of coins for a few hours - a service that, assuming a perfectly efficient market, might only cost a few hundred thousand dollars. the solution to this puzzle is that marginal cost is not the same thing as average cost. in the case of proof of work, this is true only to a very limited extent; although miners do earn a positive nonzero profit from mining, they all pay a high cost (unless they're cpu miners heating their homes, but even there there are substantial efficiency losses; laptops running hash functions at full load, though effective at heating, are necessarily less efficient than systems designed for the task). in the case of currency sacrifices, everyone pays the same, but the payment is redistributed as a dividend to everyone else, and this profit is too dispersed to be recovered via market mechanisms; thus, although the system is costly from a local perspective, it is costless from a global perspective. the last option, liquidity sacrifice, is in between the two. although liquidity sacrifice is costly, there is a substantial amount of disparity in how much people value liquidity. some people, like individual users or businesses with low savings, heavily value liquidity; others, like savers, do not value liquidity at all (e.g. i could not care less if i lost the ability to spend ten of my bitcoins for some duration). hence, although the marginal cost of liquidity will be high (specifically, necessarily equal to either the mining reward or the transaction fee), the average cost is much lower.
hence, there is a leverage effect that allows the cost of an attack to be much higher than the inefficiency of the network, or the amount that senders spend on txfees. additionally, note that in larimer's scheme specifically, things are rigged in such a way that all liquidity that is sacrificed in consensus is liquidity that was being sacrificed anyway (namely, by not sending coins earlier), so the practical level of inefficiency is zero. now, tapos does have its problems. first, if we try to make it more scalable by reintroducing the concept of blocks, then there ideally needs to be some reason to produce blocks that is not profit, so as not to reintroduce the nothing-at-stake problem. one approach may be to force a certain class of large transaction senders to create blocks. second, attacking a chain is still theoretically "cost-free", so the security assurances are somewhat less nice than they are in proof of work. third, in the context of a more complicated blockchain like ethereum, and not a currency, some transactions (e.g. finalizing a bet) are actually profitable to send, so there will be an incentive to double-mine on at least some transactions (though not nearly all, so there is still some security). finally, it's a genesis-block-nobility system, just like all proof of stake necessarily is. however, as far as pure proof-of-stake systems go, it does seem a much better backbone than the version of proof of stake that emulated bitcoin mining.

hybrid proof of stake

given the attractiveness of proof of stake as a solution for increasing efficiency and security, and its simultaneous deficiencies in terms of zero-cost attacks, one moderate solution that has been brought up many times is hybrid proof of stake, in its latest incarnation called "proof of activity". the idea behind proof of activity is simple: blocks are produced via proof of work, but every block randomly assigns three stakeholders that need to sign it.
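a minimal sketch of this signing rule, with hypothetical helper names (a real implementation would derive signers from the block hash per the protocol spec and verify actual cryptographic signatures):

```python
import random

def assigned_stakeholders(block_hash, stakeholders, n=3):
    # deterministically derive n pseudo-random signers from the block hash,
    # so every node agrees on who must sign this block
    rng = random.Random(block_hash)
    return rng.sample(stakeholders, n)

def block_is_final(block_hash, stakeholders, signatures):
    # the block only counts once every one of the n assigned stakeholders
    # has provided a signature
    required = assigned_stakeholders(block_hash, stakeholders)
    return all(s in signatures for s in required)
```

because the choice of signers is a deterministic function of the block hash, an attacker cannot grind for blocks whose signers he controls without redoing the proof of work each time.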
the next block can only be valid once those signatures are in place. in this system, in theory, an attacker with a minority of stake would see most of his blocks go unsigned, whereas in the legitimate network virtually every block would be signed; hence, such an attacker would be heavily penalized in mining. however, there is a problem: what motivates signers to sign blocks on only one chain? if the arguments against pure proof of stake are correct, then most rational stake-miners would sign both chains. hence, in hybrid pos, if the attacker signs only his chain, and altruists sign only the legitimate chain, and everyone else signs both, then if the attacker can overpower the altruists on the stake front, the attacker can overtake the chain with less than a 51% attack on the mining front. if we trust that altruists as a group are more powerful in stake than any attacker, but we don't trust that too much, then hybrid pos seems like a reasonable hedge option; however, given the reasoning above, if we want to hybridize, one might ask if hybrid pow + tapos might not be the more optimal way to go. for example, one could imagine a system where transactions need to reference recent blocks, and a blockchain's score is calculated based on both proof of work and coin-days-destroyed counts.

conclusion

will we see proof of stake emerge as a viable alternative to proof of work in the next few years? it may well be. from a pure efficiency perspective, if bitcoin, or ethereum, or any other pow-based platform gets to the point where it has a market cap similar to gold, silver, the usd, eur or cny, or any other mainstream asset, then over a hundred billion dollars worth of new currency units will be produced per year. under a pure-pow regime, an amount of economic power approaching that will be spent on hashing every year.
thus, the cost to society of maintaining a proof-of-work cryptocurrency is about the same as the cost of maintaining the russian military (the analogy is particularly potent because militaries are also proof of work; their only value to anyone is protecting against other militaries). under hybrid pos, that might safely be dropped to $ billion per year, and under pure pos it would be almost nothing, except, depending on implementation, maybe a few billion dollars of cost from lost liquidity. ultimately, this boils down to a philosophical question: exactly how much does decentralization mean to us, and how much are we willing to pay for it? remember that centralized databases, and even quasi-centralized ones based on ripple consensus, are free. if perfect decentralization is indeed worth $ billion, then proof of work is definitely the right way to go. but arguably that is not the case. what if society does not see decentralization as a goal in itself, and the only reason why it's worth it to decentralize is to get the increased benefits of efficiency that decentralization brings? in that case, if decentralization comes with a $ billion price tag, then we should just centralize and let a few governments run the databases. but if we have a solid, viable proof-of-stake algorithm, then we have a third option: a system which is both decentralized and cost-free (note that useful proof of work also fits this criterion, and may be easier); in that case, the dichotomy does not exist at all and decentralization becomes the obvious choice.

what i learned today…
taking a break

by nicole c. baratta, may

i'm sure those of you who are still reading have noticed that i haven't been updating this site much in the past few years. i was sharing my links with you all, but now delicious has started adding ads to that. i'm going to rethink how i can use this site effectively going forward. for now you can read my regular content on opensource.com at https://opensource.com/users/nengard.

bookmarks for may ,

by nicole c. baratta, may

today i found the following resources and bookmarked them on delicious. start a fire - grow and expand your audience by recommending your content within any link you share. digest powered by rss digest

bookmarks for april ,

by nicole c. baratta, april

today i found the following resources and bookmarked them on delicious. mattermost - mattermost is an open source, self-hosted slack-alternative. mblock - program your app, arduino projects and robots by dragging & dropping. fidus writer - fidus writer is an online collaborative editor especially made for academics who need to use citations and/or formulas.
beek - social network for booklovers. open ebooks - open ebooks is a partnership between digital public library of america, the new york public library, and first book, with content support from digital books distributor baker & taylor. digest powered by rss digest

bookmarks for february ,

by nicole c. baratta, february

today i found the following resources and bookmarked them on delicious. connfa - open source ios & android app for conferences & events. paperless - scan, index, and archive all of your paper documents. foss serve - foss serve promotes student learning via participation in humanitarian free and open source software (foss) projects. disk inventory x - disk inventory x is a disk usage utility for mac os x . (and later). it shows the sizes of files and folders in a special graphical way called "treemaps". loomio - loomio is the easiest way to make decisions together. loomio empowers organisations and communities to turn discussion into action, wherever people are. democracyos - democracyos is an online space for deliberation and voting on political proposals. it is a platform for a more open and participatory government. the software aims to stimulate better arguments and come to better rulings, as peers. digest powered by rss digest

bookmarks for january ,

by nicole c. baratta, january

today i found the following resources and bookmarked them on delicious. superpowers - the open source, extensible, collaborative html5 2d+3d game maker. sequel pro - sequel pro is a fast, easy-to-use mac database management application for working with mysql databases.
digest powered by rss digest

bookmarks for december ,

by nicole c. baratta, december

today i found the following resources and bookmarked them on delicious. open broadcaster software - free, open source software for live streaming and recording. digest powered by rss digest

bookmarks for november ,

by nicole c. baratta, november

today i found the following resources and bookmarked them on delicious. numfocus foundation - numfocus promotes and supports the ongoing research and development of open-source computing tools through educational, community, and public channels. digest powered by rss digest

bookmarks for november ,

by nicole c. baratta, november

today i found the following resources and bookmarked them on delicious. smore - smore makes it easy to design beautiful and effective online flyers and newsletters. ninite - install and update all your programs at once. digest powered by rss digest

bookmarks for november ,

by nicole c. baratta, november

today i found the following resources and bookmarked them on delicious. vim adventures - learning vim while playing a game. digest powered by rss digest

bookmarks for november ,

by nicole c. baratta, november

today i found the following resources and bookmarked them on delicious. star wars: building a galaxy with code. digest powered by rss digest

dshr's blog: the economist on cryptocurrencies

i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation.

tuesday, august ,

the economist on cryptocurrencies

the economist edition dated august th has a leader (unstablecoins) and two articles in the finance section (the disaster scenario and here comes the sheriff). the leader argues that: regulators must act quickly to subject stablecoins to bank-like rules for transparency, liquidity and capital. those failing to comply should be cut off from the financial system, to stop people drifting into an unregulated crypto-ecosystem. policymakers are right to sound the alarm, but if stablecoins continue to grow, governments will need to move faster to contain the risks.
but even the economist gets taken in by the typical cryptocurrency hype, balancing current actual risks against future possible benefits: yet it is possible that regulated private-sector stablecoins will eventually bring benefits, such as making cross-border payments easier, or allowing self-executing "smart contracts". regulators should allow experiments whose goal is not merely to evade financial rules. they don't seem to understand that, just as the whole point of uber is to evade the rules for taxis, the whole point of cryptocurrency is to "evade financial rules". below the fold i comment on the two articles.

here comes the sheriff

this article is fairly short and mostly describes the details of gary gensler's statement in three "buckets". the first is about investor protection: the regulator claims jurisdiction over the crypto assets that it defines as securities; issuers of these must provide disclosures and abide by other rules. the sec's definition uses a number of criteria, including the "howey test", which asks whether investors have a stake in a common enterprise and are led to expect profits from the efforts of a third party. bitcoin and ether, the two biggest cryptocurrencies, do not meet this criterion (they are commodities, under american law). but mr gensler thinks that ... a fair few probably count as securities - and do not follow the rules. these, he said, may include stablecoins ... some of which may represent a stake in a crypto platform. mr gensler asked congress for more staff to police them. the second is about new products: for months the sec has sat on applications for bitcoin etfs and related products, filed by big wall street names like goldman sachs and fidelity. mr gensler hinted that, in order to be approved, these may have to comply with the stricter laws governing mutual funds.
the third is a request for new legal powers needed to pave over cracks in regulation that cryptocurrencies, whose whole point is to "evade financial rules", are exploiting: mr gensler is chiefly concerned with platforms engaged in crypto trading or lending as well as in decentralised finance (defi), where smart contracts replicate financial transactions without a trusted intermediary. some of these, he said, may host tokens that should be regulated as securities; others could be riddled with scams. the sec is likely to encounter massive opposition to these ideas. how cryptocurrency became a powerful force in washington by todd c. frankel et al reports on the flow of lobbying dollars from cryptocurrency insiders to capitol hill and how it is blocking progress on the current infrastructure bill: and after years of debate over how to improve america’s infrastructure, and months of sensitive negotiations between the white house and lawmakers, the $ trillion bipartisan infrastructure proposal suddenly stalled in part because of concerns about how government would regulate an industry best known for wild financial speculation, memes — and its role in ransomware attacks. ... regardless of the measure’s ultimate fate, the fact that crypto regulation has become one of the biggest stumbling blocks to passage of the bill underscored how the industry has become a political force in washington — and previewed a series of looming battles over a financial technology attracting billions of dollars of interest from wall street, silicon valley and financial players around the world, but that few still understand. it gets worse. kate riga reports that in fit of pique, shelby kills crypto compromise: sen. richard shelby (r-al) killed a hard-earned cryptocurrency compromise amendment to the bipartisan infrastructure bill because his own amendment, to beef up the defense budget another $ billion, was rejected by sen. bernie sanders (i-vt). 
shelby had tried to tack it on to the cryptocurrency amendment. ... so that's basically it for the crypto amendment, which took the better part of the weekend for senators and the white house to hammer into a compromise. the issue here was that the un-amended bill would require: some cryptocurrency companies that provide a service "effectuating" the transfer of digital assets to report information on their users, as some other financial firms are required to do, in an effort to enforce tax compliance. crypto supporters said the provision's wording would seemingly apply to companies that have no ability to collect data on users, such as cryptocurrency miners, and could push a swath of the industry overseas. so maybe by accident mining is money transmission.

the disaster scenario

this article is far longer and far more interesting. it takes the form of a "stress test", discussing a scenario in which bitcoin's "price" goes to zero and asking what the consequences for the broader financial markets and investors would be. it is hard to argue with the conclusion: still, our extreme scenario suggests that leverage, stablecoins, and sentiment are the main channels through which any crypto-downturn, big or small, will spread more widely. and crypto is only becoming more entwined with conventional finance. goldman sachs plans to launch a crypto exchange-traded fund; visa now offers a debit card that pays customer rewards in bitcoin. as the crypto-sphere expands, so too will its potential to cause wider market disruption. the article identifies a number of channels by which a bitcoin collapse could "cause market disruption":

* via the direct destruction of paper wealth for hodl-ers and actual losses for more recent purchasers.
* via the stock price of companies, including cryptocurrency exchanges, payments companies, and chip companies such as nvidia.
* via margin calls on leveraged investments, either direct purchases of bitcoin or derivatives.
* via redemptions of stablecoins causing reserves to be liquidated.
* via investor sentiment contagion from cryptocurrencies to other high-risk assets such as meme stocks, junk bonds, and spacs.

i agree that these are all plausible channels, but i have two main issues with the article.

issue # : tether

first, it fails to acknowledge that the spot market in bitcoin is extremely thin (a sell order for btc crashed the "price" by %), especially compared to the x larger market in bitcoin derivatives, and that the "price" of bitcoin and other cryptocurrencies is massively manipulated, probably via the "wildcat bank" of tether. the article contains, but doesn't seem to connect, these facts: fully % of the money invested in bitcoin is spent on derivatives like "perpetual" swaps - bets on future price fluctuations that never expire. most of these are traded on unregulated exchanges, such as ftx and binance, from which customers borrow to make bets even bigger. ... the extent of leverage in the system is hard to gauge; the dozen exchanges that list perpetual swaps are all unregulated. but "open interest" ... has grown from $ . bn in march to $ bn today. this is not a perfect proxy for total leverage, as it is not clear how much collateral stands behind the various contracts. but forced liquidations of leveraged positions in past downturns give a sense of how much is at risk. on may th alone, as bitcoin lost nearly a third of its value, they came to $ bn. ... because changing dollars for bitcoin is slow and costly, traders wanting to realise gains and reinvest proceeds often transact in stablecoins, which are pegged to the dollar or the euro. such coins, the largest of which are tether and usd coin, are now worth more than $ bn. on some crypto platforms they are the main means of exchange. that last paragraph is misleading. fais kahn writes: binance also hosts a massive perpetual futures market, which are "cash-settled" using usdt.
this allows traders to make leveraged bets of x margin or more...which, in laymen’s terms, is basically a speculative casino. that market alone provides around ~$ b of daily volume, where users deposit usdt to trade on margin. as a result, binance is by far the biggest holder of usdt, with $ b sitting in its wallet. bernhard meuller writes: a more realistic estimate is that ~ % of the tether supply ( . b usdt) is located on centralized exchanges. interestingly, only a small fraction of those usdt shows up in spot order books. one likely reason is that a large share is sitting on wallets to collateralize derivative positions, in particular perpetual futures. ... it’s important to understand that usdt perpetual futures implementations are % usdt-based, including collateralization, funding and settlement. so on the exchange that dominates bitcoin derivative trading, where the majority of "fully % of the money invested in bitcoin" lives, usdt is the exclusive means of exchange. the entire market's connection to the underlying spot market is that: prices are tied to crypto asset prices via clever incentives, but in reality, usdt is the only asset that ever changes hands between traders. other than forced liquidations, the article does not analyze how the derivative market would respond to a massive drop in the bitcoin "price", and whether tether could continue to pump the "price". as money market funds did in the global financial crisis, the article suggests that stablecoins would have problems: issuers back their stablecoins with piles of assets, rather like money-market funds. but these are not solely, or even mainly, held in cash. tether, for instance, says % of its assets were held in commercial paper, % in secured loans and % in corporate bonds, funds and precious metals at the end of march. a cryptocrash could lead to a run on stablecoins, forcing issuers to dump their assets to make redemptions. 
in july fitch, a rating agency, warned that a sudden mass redemption of tethers could "affect the stability of short-term credit markets". it is certainly true that the off-ramps from cryptocurrencies to fiat are constricted; that is a major reason for the existence of stablecoins. but fais kahn makes two points: if there were a sudden drop in the market, and investors wanted to exchange their usdt for real dollars in tether's reserve, that could trigger a "bank run" where the value dropped significantly below one dollar, and suddenly everyone would want their money. that could trigger a full-on collapse. but when might that actually happen? when bitcoin falls in the frequent crypto bloodbaths, users actually buy tether - fleeing to the safety of the dollar. this actually drives tether's price up! and: tether's own terms of service say users may not be redeemed immediately. forced to wait, many users would flee to bitcoin for lack of options, driving the price up again. it isn't just tether that doesn't allow winnings out. carol alexander's binance's insurance fund is a fascinating, detailed examination of binance's extremely convenient "outage" as btc crashed on may . her subhead reads: how insufficient insurance funds might explain the outage of binance's futures platform on may and the potentially toxic relationship between binance and tether. i certainly don't understand all the ramifications of the "toxic relationship between binance and tether", but the article's implicit assumption that they, and similar market participants, behave like properly regulated financial institutions is implausible. alexander's take on the relationship, on the other hand, is alarmingly plausible: in may ... tether reported that only . % of all tokens are actually backed by cash reserves and about % is in commercial paper, a form of unsecured debt that is normally only issued by firms with high-quality debt ratings.
the simultaneous growth of binance and tether begs the question whether binance itself is the issuer of a large fraction of tether's $ billion commercial paper. binance's b b platform is the main online broker for tether. suppose binance is in financial difficulties (possibly precipitated by using its own money rather than insurance funds to cover payment to counterparties of liquidated positions). then the tether it orders and gives to customers might not be paid for with dollars, or bitcoin or any other form of cash, but rather with an iou. that is, commercial paper on which it pays tether interest, until the term of the loan expires. no new tether has been issued since binance's order of $ bn [correction aug: net $ bn transfer] was made highly visible to the public on may. [correction aug: another $ bn tether was issued on aug]. maybe this is because tether's next audit is imminent, and the auditors may one day investigate the identity of the issuers of the % (or more, now) of commercial paper it has for reserves. if it were found that the main issuer was binance (maybe followed by ftx) then the entire crypto asset marketplace would have been holding itself up by its own bootstraps! this would certainly explain why matt levine wrote: there is a fun game among financial journalists and other interested observers who try to find anyone who has actually traded commercial paper with tether, or any of its actual holdings. the game is hard! as far as i know, no one has ever won it, or even scored a point; i have never seen anyone publicly identify a security that tether holds or a counterparty that has traded commercial paper with it. if tether's reserves were % composed of unsecured debt from unregulated exchanges like binance ...

issue # : dynamic effects

my second problem with the article is that this paragraph shows the economist sharing two common misconceptions about blockchain technology: a crash would puncture the crypto economy.
bitcoin miners - who compete to validate transactions and are rewarded with new coins - would have less incentive to carry on, bringing the verification process, and the supply of bitcoin, to a halt. first, it is true that, were the "price" of bitcoin zero, mining would stop. but if mining stops, it is transactions that stop. bitcoin hodl-ings would be frozen in place, not just worth zero on paper but actually useless, because nothing could be done with them. second, the idea that the goal of mining is to create new bitcoin is simply wrong. the goal of mining is to secure the blockchain by making sybil attacks implausibly expensive. the creation of new bitcoin is a side-effect, intended to motivate miners to make the blockchain secure by making sybil attacks implausibly expensive. the fact that nakamoto intended mining to continue after the final bitcoin had been created clearly demonstrates this. the article is based on this scenario: in order to grasp the growing links between the crypto-sphere and mainstream markets, imagine that the price of bitcoin crashes all the way to zero. a rout could be triggered either by shocks originating within the system, say through a technical failure, or a serious hack of a big cryptocurrency exchange. or they could come from outside: a clampdown by regulators, for instance, or an abrupt end to the "everything rally" in markets, say in response to central banks raising interest rates. but, as the article admits, a discontinuous change from $ k or so to $ is implausible. a rapid but continuous drop over, say, a month is more plausible, and it could bring issues that the article understandably fails to address. as the "price" drops, two effects take place. first, the value of the mining reward in fiat currency decreases.
the least efficient and least profitable miners become uneconomic and drop out, decreasing the hash rate and thus increasing the block time and reducing the rate at which transactions can be processed: typically, it takes about 10 minutes to complete a block, but feinstein told cnbc the bitcoin network has slowed down to - to -minute block times. this effect occurred during the chinese government's crackdown, as shown in the graph of hash rate. second, every 2016 blocks (about two weeks) the algorithm adjusts, in this case decreases, the difficulty and thus the cost of mining the next 2016 blocks. the idea is to restore the block time to about 10 minutes despite the reduction in the hash rate. when the chinese crackdown took . % of bitcoin's hash power off-line, the algorithm made the biggest reduction in difficulty in bitcoin's history. in our scenario, bitcoin plunges over a month. let's assume it starts just after a difficulty adjustment. the month is divided into two parts, with the initial difficulty for the first part, and a much reduced difficulty for the second part. in the first part the rapid "price" decrease makes all but the most efficient miners uneconomic, so the hash rate decreases rapidly and block production slows rapidly. producing the 2016th block takes a lot more than two weeks. this is a time when the demand for transactions will be extreme, but during this part the supply of transactions is increasingly restricted. this, as has happened in other periods of high transaction demand, causes transaction fees to spike to extraordinary levels. in normal times fees are less than % of miner income, but it is plausible that they would spike an order of magnitude or more, counteracting the drop in the economics of mining. but median fees of, say, $ would increase the sense of panic in the spot market. let's assume that, by the 2016th block, more than half the mining power has been rendered uneconomic, so that the block time is around 20 minutes.
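a back-of-the-envelope sketch of the retargeting arithmetic in this scenario (my own illustration, not from the article; note that bitcoin's consensus rules clamp any single adjustment to a factor of 4, which doesn't bind here):

```python
# bitcoin retargets every 2016 blocks so that, if the hash rate of the
# just-finished period persisted, blocks would again take ~10 minutes.
TARGET_SECONDS = 2016 * 600  # two weeks at 10 minutes per block

def new_difficulty(old_difficulty, actual_seconds):
    # difficulty scales by the ratio of target time to actual elapsed time
    ratio = TARGET_SECONDS / actual_seconds
    # consensus rules clamp the adjustment to a factor of 4 either way
    ratio = max(0.25, min(4.0, ratio))
    return old_difficulty * ratio

# scenario from the text: the 2016 blocks take three weeks instead of two,
# so difficulty drops by a third...
adjusted = new_difficulty(1.0, 3 * 7 * 24 * 3600)  # ≈ 0.667

# ...but hash rate at the *end* of the period is only half its starting
# value, so post-adjustment blocks take 20 * 0.667 ≈ 13 minutes, not 10.
post_adjustment_block_minutes = 20 * adjusted
```

the key point is that the adjustment is based on the average hash rate over the whole period, which overstates the hash rate remaining at the end of it.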
thus the adjustment comes after three weeks. when it happens, the adjustment, being based on the total time taken in the first part, will be large but inadequate to correct for the reduced hash rate at the end of the first part. with our assumptions the adjustment will be for a % drop in hash power, but the actual drop will have been %. block production will speed up, but only to about minutes/block. given the panic, fees will drop somewhat but remain high. as the adjustment approaches there are a lot of disgruntled miners, whose investment in asic mining rigs has been rendered uneconomic. the rigs can't be repurposed for anything but other proof-of-work cryptocurrencies, which have all crashed because, as the article notes: investors would probably also dump other cryptocurrencies. recent tantrums have shown that where bitcoin goes, other digital monies follow, says philip gradwell of chainalysis, a data firm. recall that what the mining power is doing is securing the blockchain against attack. once it became possible to rent large amounts of mining power, 51% attacks on minor alt-coins became endemic. for example, there were three successful attacks on ethereum classic in a single month. before the adjustment, some fraction of the now-uneconomic bitcoin mining power has migrated to the rental market. even a small fraction can overwhelm other cryptocurrencies. as i write, the bitcoin hash rate is around m th/s. dogecoin is the next largest "market cap" coin using bitcoin-like proof-of-work. its hash rate is around th/s, or , times smaller. thus during the first part there was a tidal wave of attacks against every other proof-of-work cryptocurrency. it has never been possible to rent enough mining power to attack a major cryptocurrency. but now we have more than 50% of the bitcoin mining power sitting idle on the sidelines, desperate for income. these miners have choices: they can resume mining bitcoin.
the more efficient of them can do so and still make a profit, but if they all do, most will find it uneconomic. they can mine other proof-of-work cryptocurrencies. but even if only a tiny fraction of them do so, it will be uneconomic. and trust in the alt-coins has been destroyed by the wave of attacks. they can collaborate to mount double-spend attacks against bitcoin, since they have more than half the mining power. they can collaborate to mount the kind of sabotage attack described by eric budish in the economic limits of bitcoin and the blockchain, aiming to profit by shorting bitcoin in the derivative market and destroying confidence in the asset's security. the security of proof-of-work blockchains depends upon the unavailability of enough mining power to mount an attack. a massive, sustained drop in the value of bitcoin would free up enormous amounts of mining power, far more than enough to destroy any smaller cryptocurrency, and probably enough to destroy bitcoin. posted by david. at : am labels: bitcoin comments: david. said... barbie says "smart contracts are hard". cross-chain defi site poly network hacked; hundreds of millions potentially lost by eliza gkritsi and muyao shen starts: "cross-chain decentralized finance (defi) platform poly network was attacked on tuesday, with the alleged hacker draining roughly $ million in crypto. poly network, a protocol launched by the founder of chinese blockchain project neo, operates on the binance smart chain, ethereum and polygon blockchains. tuesday's attack struck each chain consecutively, with the poly team identifying three addresses where stolen assets were transferred." august , at : pm david. said... in poly network hack analysis – largest crypto hack, mudit gupta provides a detailed analysis of the poly network heist: "poly network is a blockchain interoperability project that allows people to send transactions across blockchains.
one of their key use cases is a cross-blockchain bridge that allows you to move assets from one blockchain to another by locking tokens on one blockchain and unlocking them on a different one. the attacker managed to unlock tokens on various blockchains without locking the corresponding amounts on other blockchains." david gerard comments: "poly network asked ethereum miners and exchanges to intercede and block the hacker's addresses for them — sort of giving the game away that crypto miners are transaction processors." nicholas weaver would agree. according to tim copeland's at least $ million stolen in massive cross-chain hack: "blockchain security firm slowmist has sent out a news alert that says they have already tracked down the attacker's id. it claims to know their email address, ip information and device fingerprint. the firm said that the attacker's original funds were in monero (xmr), which were exchanged for bnb, eth and matic and other tokens that were used to fund the attack." august , at : pm
blog rules posts and comments are copyright of their respective authors who, by posting or commenting, license their work under a creative commons attribution-share alike . united states license. off-topic or unsuitable comments will be deleted.
lockss system has permission to collect, preserve, and serve this archival unit.
dshr's blog: the optimist's telescope: review
i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation.
tuesday, september , the optimist's telescope: review
the fundamental problem of digital preservation is that, although it is important and we know how to do it, we don't want to pay enough to have it done. it is an example of the various societal problems caused by rampant short-termism, about which i have written frequently. bina venkataraman has a new book on the topic entitled the optimist's telescope: thinking ahead in a reckless age. robert h. frank reviews it in the new york times: how might we mitigate losses caused by shortsightedness? bina venkataraman, a former climate adviser to the obama administration, brings a storyteller's eye to this question in her new book, "the optimist's telescope." she is also deeply informed about the relevant science. the telescope in her title comes from the economist a.c. pigou's observation in that shortsightedness is rooted in our "faulty telescopic faculty." as venkataraman writes, "the future is an idea we have to conjure in our minds, not something that we perceive with our senses. what we want today, by contrast, we can often feel in our guts as a craving." she herself is the optimist in her title, confidently insisting that impatience is not an immutable human trait. her engaging narratives illustrate how people battle and often overcome shortsightedness across a range of problems and settings.
below the fold, some thoughts upon reading the book. the plot of isaac asimov's foundation trilogy evolves as a series of "seldon crises", in which simultaneous internal and external crises combine to force history into the path envisioned by psychohistorian hari seldon and the foundations he established with the aim of reducing the duration of the dark ages after the fall of the galactic empire from 30,000 to 1,000 years. the world today feels as though it is undergoing a seldon crisis, with external (climate change) and internal (inequality, the rise of quasi-fascist leaders of "democracies") crises reinforcing each other. what is lacking is a foundation charting a long-term future that minimizes the dark ages to come after the fall of civilization. what ties the various current crises together is short-termism; all levels of society being incapable of long-term thinking, and failing to resist eating the marshmallow. in her introduction venkataraman writes: i argue in this book that many decisions are made in the presence of information about future consequences but in the absence of good judgement. we try too hard to know the exact future and do too little to be ready for its many possibilities. the result is an epidemic of recklessness, a colossal failure to plan ahead. ... to act on behalf of our future selves can be hard enough; to act on behalf of future neighbors, communities, countries or the planet can seem impossible, even if we aspire to that ideal. by contrast, it is far easier to respond to an immediate threat. she divides her book into three parts, and in each deploys an impressive range of examples of the problems caused by lack of foresight. but it is an optimistic book, because in each part she provides techniques for applying foresight and examples of their successful application.
part 1: individual and family dorian, the second-worst north atlantic hurricane ever, was ravaging the bahamas as i read part 1's discussion of why, despite early and accurate warnings, people fail to evacuate or take appropriate precautions for hurricanes and other natural disasters: it is human nature to rely on mental shortcuts and gut feelings - more than gauges of the odds - to make decisions. ... these patterns of thinking, i have learned, explain why all the investment on better predictions can fall short of driving decisions about the future ... the threats that people take most seriously turn out to be those we can most vividly imagine. she illustrates why this is hard using the collapse of microfinance in andhra pradesh: a person who might look reckless when poor could look smart and strategic when flush. realizing that people who are lacking resources often have a kind of tunnel vision for the present helped me understand why many women involved in india's microfinance crisis went against their own future interest, taking on too many loans and falling deep into debt. it also explains why the poorest families have more trouble heeding hurricane predictions. the problem on the lending side of the collapse was "be careful what you measure". the microfinance companies were measuring the number of new loans, and the low default rate, not noticing that the new loans were being used to pay off old ones. the same phenomenon of scarcity causing recklessness helps explain why black kids in schools suffer more severe discipline: in exasperated moments, impulsive decisions reflecting ingrained biases become more likely. teachers, like all of us, are exposed to portrayals in the media and popular culture of black people as criminals, and those images shape unconscious views and actions. university of oregon professors kent mcintosh and erik girvan call these moments of discipline in schools "vulnerable decision points."
they track discipline incidents in schools around the country and analyze the data to show school administrators and teachers are often predictable. when teachers are fatigued at the end of a school day or week, or hungry after skipping lunch for meetings, they are more likely to make rash decisions. ... this bears out the link eldar shafir and sendhil mullainathan have shown between scarcity - in this case, of time and attention - and reckless decision making. it is similar to the pattern that hamstrings the poor from saving for their future. among the techniques she discusses for imagining the future are virtual reality experiences, and simpler techniques such as: an annual gathering where each person writes his own obituary and reads it aloud to the group. prototype clock another is danny hillis' 10,000 year clock: the clock idea captivated those whom hillis told about it, including futurist and technology guru stewart brand and the musician brian eno. and me. iirc it was at the hacker's conference where hillis and brand talked about the idea of the clock. the presentation set me thinking about the long-term future of digital information, and about how systems to provide it needed to be ductile rather than, like byzantine fault tolerance, brittle. the lockss program was the result a couple of years later. ernie via wikipedian geni another technique she calls "glitter bombs" - you'll need to read the book to find out why. the uk's premium bonds and other prize-linked savings schemes are examples: the british government launched its premium bonds program in , to encourage savings after world war ii. for the past seven decades, between and percent of uk citizens have held the bonds at any given time. the savers accept lower guaranteed returns than comparable government bonds in exchange for the prospect of winning cash prizes during monthly drawings.
tufano's research shows that people who save under these schemes typically do so not instead of saving elsewhere but instead of gambling. as kids, my brother and i routinely received small premium bonds as birthday or christmas gifts. i recall watching on tv as "ernie" chose winners, but i don't recall ever being one. part 2: businesses and organizations venkataraman introduces this part thus: the unwitting ways that organizations encourage reckless decisions may pose an even greater threat, however, than the cheating we find so repulsive. the work of john graham at the national bureau of economic research puts eye-popping scandals into perspective. he has shown that more money is lost for shareholders of corporations ... by the routine, legal habit of executives making bad long-term decisions to boost near-term profits than what is siphoned off by corporate fraud. among her examples of organizational short-termism are the dust bowl, gaming no child left behind by "teaching to the test", over-prescribing of antibiotics, over-fishing, and the global financial crisis (gfc). for each, she discusses examples of successful, albeit small-scale, mitigations: the dust bowl was caused by the economic incentives, still in place, for farmers to aggressively till their soil to produce more annual crops. she describes how people are developing perennial crops, needing much less tilling and irrigation: perennial grains, unlike annuals, burrow thick roots ten to twenty feet deep into the ground. plants with such entrenched roots don't require much irrigation and they withstand drought better. perennial roots clench the fertile topsoil like claws and keep it from washing away. this makes it possible for a rich soil microbiome to thrive that helps crops use nutrients more efficiently.
a field of perennials does not need to be plowed each year, and so more carbon remains trapped in its soil instead of escaping to the atmosphere. but: to get perennial grains into production, jackson also had to figure out how to overcome farmers' aversion to taking risks on unknown crops, and their immediate fears of not having buyers for their product. researchers from the land institute and university of minnesota have brokered deals for twenty farmers to plant fields with a perennial grain that resembles wheat. they persuaded the farmers by securing buyers willing to pay a premium for the grain. this is an impressive demonstration of making "what lasts over time pay in the short run", but scaling up to displace annual grains in the market is an exercise left to the reader. montessori and similar educational philosophies (e.g. reggio emilia early childhood education) are known to be effective alternatives to the testing-based no child left behind. but they aren't as easy to measure, and thus to justify deploying widely. so this is what we get: other reports have documented how "teaching to the test" curtails student curiosity, and how it has even driven some teachers and principals to cheat by correcting student answers. the metric might work for organizations at the bottom of the heap, but not for those near the top. organizations at the bottom of the heap have low-hanging fruit, so they can see how to improve. it is much more difficult for organizations near the top to see how to improve, so the temptation to cheat is greater. doctors have been effective at curbing over-prescribing by their colleagues using an in-person, patient-specific "postgame rehash" when suspect prescriptions are detected. but: the drawback is that it requires a lot of time and legwork, and even hospitals with antibiotic stewardship teams lack the resources to do this across an entire hospital year-round.
so although this approach works, it can't scale up to match the problem of over-prescribing in hospitals, let alone by gps. and it clearly can't deal with the even more difficult problem of agricultural over-use of antibiotics. attempts to reduce over-fishing by limiting fishing days and landings haven't been effective. they lead to intensive, highly competitive "derby days" during which immature fish are killed and dumped, and prices are crashed because the entire quota arrives on the market at the same time. instead, the approach of "catch shares", in effect giving the fishermen equity in the fishery, has driven the gulf coast red snapper fishery back from near-extinction: the success of catch shares shows that agreements to organize businesses - and wise policy - can encourage collective foresight. programs that align future interests with the present can, in the words of buddy guindon, turn pirates into stewards. it isn't clear that it would have been possible to implement catch shares before the fishery faced extinction. the global financial crisis of 2008 was driven by investors' monomaniacal focus on quarterly results, and thus executives' monomaniacal focus on manipulating them to enhance their stock options and annual bonuses. she responds with the story of eagle capital management, a patient value investment firm which, after enduring years of sub-par performance, flourished during and after the dot-com bust: eagle fared well and way outperformed the plummeting markets in and . in just those two years, the gains more than made up for the losses of the previous five. today, the company has grown to manage more than $ billion in assets and, on average, earned an annual return of more than percent on its investments between and . that's more than double the annual return from the s&p during that time. some of my money is managed by a firm with a similar investment strategy, so i can testify to the need for patience and a long-term view.
value investing has been out of favor during the recovery from the gfc. note that the whole reason for eagle's success was that most competitors were doing something different; if everyone had been taking eagle's long view the gfc wouldn't have happened, but eagle would have been a run-of-the-mill performer. she examines long-lived biological systems, including: the pando aspen colony in utah, ... is more than eighty thousand years old, and it has persisted by virtue of self-propagation - cloning itself - and by slow migration to fulfill its needs for water and nutrients from the soil. it even survived the volcanic winter spurred by the massive eruption seventy-five thousand years ago on sumatra. ... its strategy - making lots of copies of itself - is one echoed by digital archivist david rosenthal ... lots of copies dispersed to different environments and organizations, rosenthal told me, is the only viable survival route for the ideas and records of the digital age. rhizocarpon geographicum she is right that systems intended to survive for the long term need high levels of redundancy, and low levels of correlation. she also points out another thing they need: another secret of some of the oldest living things on earth is slow growth. sussman documents what are known as map lichens in greenland, specimens at least three thousand years old that have grown one centimeter every hundred years - a hundred times slower than the pace of continental drift. the need to force systems to operate relatively slowly by imposing rate limits is something that i've written about several times (as has paul vixie), for example in brittle systems: the design goal of almost all systems is to do what the user wants as fast as possible. this means that when the bad guy wrests control of the system from the user, the system will do what the bad guy wants as fast as possible.
doing what the bad guy wants as fast as possible pretty much defines brittleness in a system; failures will be complete and abrupt. rate limits are essential in the lockss system. another of her examples is also about rate limits. gregg popovich, coach of the san antonio spurs, pioneered the practice of keeping star players out of games for rest to prevent later injuries. harrison's h phantom photographer the equivalent of "glitter bombs" in this part are prizes. the earliest success and perhaps the most famous is the longitude prize, a £20,000 prize that motivated john harrison's succession of marine chronometers (preserved in working order at the royal greenwich observatory). more recent successful prizes include the x-prize spaceship and darpa's prizes kick-starting autonomous car technology. but note that none of the recent successful prizes have spawned technologies relevant to solving the seldon crisis we face. one interesting technique she details is "prospective hindsight": in contrast to the more common practice of describing what will happen in the future, prospective hindsight requires assuming something already happened and trying to explain why. this shifts people's focus away from mere prediction of future events and toward evaluating the consequences of their current choices. in the early days of vitria technology, my third startup, we worked with fedex. one of the many impressive things about the company was their morning routine of reviewing the events of the previous hours to enumerate everything that had gone wrong, and identify the root causes. explaining why is an extremely valuable process.
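the rate-limit point from brittle systems can be sketched with a minimal token bucket (a generic mechanism, not the actual lockss implementation): a limited operation can never exceed its burst allowance plus rate × elapsed time, no matter how fast a compromised caller asks.

```python
class TokenBucket:
    """limit an operation to `rate` per second with bursts of at most
    `burst`, so a bad guy who controls the caller still cannot run it
    faster than burst + rate * elapsed over any interval."""
    def __init__(self, rate, burst, start=0.0):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = start

    def allow(self, now):
        # refill in proportion to elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# an attacker hammering allow() in a tight loop gets only the burst,
# then one operation per 1/rate seconds, however fast it asks
bucket = TokenBucket(rate=1.0, burst=5.0)
granted = sum(bucket.allow(0.0) for _ in range(1000))
```

the point of the design is that the failure mode becomes gradual rather than complete and abrupt: seizing the client buys the attacker only the rate-limited trickle, not the system's full speed.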
part 3: communities and society some of the examples in this part, such as the warnings of potential for terrorism at the munich olympics, the siting of the fukushima reactors, the extraordinary delay in responding to the ebola outbreak: e-mails later published by the associated press revealed that officials knew of the potential danger and scope of the epidemic months before the designation [of a global emergency], and were warned of its scale by doctors without borders ... the world health organization's leaders, however, were worried about declaring the emergency because of the possible damage to the economies of the countries at the epicenter of the outbreak. and the indian ocean tsunami: after dr. smith dharmasaroja, the head meteorologist of thailand, advocated in for creating a network of sirens to warn of incoming tsunamis in the indian ocean, the ruling government replaced him. his superiors argued that a coastal warning system might deter tourists, as they would see thailand as unsafe. six years later, a massive indian ocean tsunami killed more than , people including thousands in coastal thailand, many of them tourists. show how the focus on short-term costs has fatal consequences. one reason is "social discounting", the application of a discount rate to estimated future costs to reduce them to a "net present value". this technique might have value in purely economic computations, although as i pointed out in two sidelights on short-termism: in practice it gives wrong answers: i've often referred to the empirical work of haldane & davies and the theoretical work of farmer and geanakoplos, both of which suggest that investors using discounted cash flow (dcf) to decide whether an investment now is justified by returns in the future are likely to undervalue the future. ... now harvard's greenwood & shleifer, in a paper entitled expectations of returns and expected returns, reinforce this ...
they compare investors' beliefs about the future of the stock market as reported in various opinion surveys, with the outputs of various models used by economists to predict the future based on current information about stocks. they find that when these models, all enhancements to dcf of one kind or another, predict low performance investors expect high performance, and vice versa. if they have experienced poor recent performance and see a low market, they expect this to continue and are unwilling to invest. if they see good recent performance and a high market they expect this to continue. their expected return from investment will be systematically too high, or in other words they will suffer from short-termism. but as applied to investments in preventing future death and disaster these techniques uniformly fail, partly because they undervalue human life, and partly because they underestimate the risk of death and disaster, because they cannot enumerate all the potential risks. peter schwartz founded the global business network, which has decades of experience running scenario planning exercises for major corporations. he: has discovered that people are tempted to try to lock in on a single possible scenario that they prefer or see as the most likely and simply plan for that - defeating the purpose of scenario generation. the purpose being, of course, to get planners to think about the long tail of "black swan" events. the optimistic examples in this part are interesting, especially her account of the fight against the proposed green diamond development in the floodplain of richland county, south carolina, and jared watson's eagle scout project to educate the citizens of mattapoisett, massachusetts about the risk of flooding. but, as she recounts: in each of these instances, a community's size, or at least its cultural continuity between past and present, has made it easier to create and steward collective heirlooms. 
similarly, of the hundreds of stone markers dedicated to past tsunamis in japan, the two that were heeded centuries later were both in small villages, where oral tradition and school education reinforced the history and passed down the warning over time. coda venkataraman finishes with an optimistic coda pointing to the work of paul bain and his colleagues who: demonstrated that even climate deniers could be persuaded of the need for "environmental citizenship" if the actions to be taken, such as reducing carbon emissions, were framed as improvements in the way people would treat one another in the imagined future. a collective idea of the future in which people work together on environmental problems, and are more caring and considerate - or a future with greater economic and technological progress - motivated the climate change deniers to support such actions even when they didn't believe that human-caused climate change was a problem. she enumerates the five key lessons she takes away from her work on the book:
look beyond near-term targets. we can avoid being distracted by short-term noise and cultivate patience by measuring more than immediate results.
stoke the imagination. we can boost our ability to envision the range of possibilities that lie ahead.
create immediate rewards for future goals. we can find ways to make what's best for us over time pay off in the present.
direct attention away from immediate urges. we can reengineer cultural and environmental cues that condition us for urgency and instant gratification.
demand and design better institutions. we can create practices, laws and institutions that foster foresight.
my reaction it is hard not to be impressed by the book's collection of positive examples, but it is equally hard not to observe that in each case there are great difficulties in scaling them up to match the threats we face.
and, in particular, there is a difficulty they all share that is inherent in venkataraman's starting point: i argue in this book that many decisions are made in the presence of information about future consequences but in the absence of good judgement. the long history of first the tobacco industry's and subsequently the fossil fuel industry's massive efforts to pollute the information environment casts great doubt on the idea that "decisions are made in the presence of information about future consequences" if those consequences affect oligopolies. and research is only now starting to understand how much easier it is for those who have benefited from the huge rise in economic inequality to use social media to the same ends. as just one example: now researchers led by penn biologist joshua b. plotkin and the university of houston's alexander j. stewart have identified another impediment to democratic decision making, one that may be particularly relevant in online communities. in what the scientists have termed "information gerrymandering," it's not geographical boundaries that confer a bias but the structure of social networks, such as social media connections. reporting in the journal nature, the researchers first predicted the phenomenon from a mathematical model of collective decision making, and then confirmed its effects by conducting social network experiments with thousands of human subjects. finally, they analyzed a variety of real-world networks and found examples of information gerrymandering present on twitter, in the blogosphere, and in u.s. and european legislatures. "people come to form opinions, or decide how to vote, based on what they read and who they interact with," says plotkin. "and in today's world we do a lot of sharing and reading online. what we found is that the information gerrymandering can induce a strong bias in the outcome of collective decisions, even in the absence of 'fake news.' in this light nathan j.
robinson's the scale of what we're up against makes depressing reading: it can be exhausting to realize just how much money is being spent trying to make the world a worse place to live in. the koch brothers are often mentioned as bogeymen, and invoking them can sound conspiratorial, but the scale of the democracy-subversion operation they put together is genuinely quite stunning. jane mayer, in dark money, put some of the pieces together, and found that the charles koch foundation had subsidized “pro-business, antiregulatory, and antitax” programs at over institutes of higher education. that is to say, they endowed professorships and think tanks that pumped out a constant stream of phony scholarship. they established the mercatus center at george mason university, a public university in virginia. all of these professors, “grassroots” groups, and think tanks are dedicated to pushing a libertarian ideology that is openly committed to creating a neo-feudal dystopia. the kochs provide just a small part of the resources devoted to polluting the information environment. social networks, as the cambridge analytica scandal shows, have greatly improved the productivity of these resources. i'm sorry to end my review of an optimistic book on a pessimistic note. but i'm an engineer, and much of engineering is about asking what could possibly go wrong? posted by david. at : am comments: david. said... facing the great reckoning head-on, danah boyd's speech accepting one of this year's barlow awards from the eff, is a must-read. it is in its own way another plea for longer-term thinking: "whether we like it or not, the tech industry is now in the business of global governance. “move fast and break things” is an abomination if your goal is to create a healthy society. taking short-cuts may be financially profitable in the short-term, but the cost to society is too great to be justified. 
in a healthy society, we accommodate differently abled people through accessibility standards, not because it’s financially prudent but because it’s the right thing to do. in a healthy society, we make certain that the vulnerable amongst us are not harassed into silence because that is not the value behind free speech. in a healthy society, we strategically design to increase social cohesion because binaries are machine logic not human logic." september , at : pm

david. said... last october, alex nevala-lee made the same point about hari seldon in what isaac asimov taught us about predicting the future: "asimov later acknowledged that psychohistory amounted to a kind of emotional reassurance: “hitler kept winning victories, and the only way that i could possibly find life bearable at the time was to convince myself that no matter what he did, he was doomed to defeat in the end.” the notion was framed as a science that could predict events centuries in advance, but it was driven by a desire to know what would happen in the war over the next few months — a form of wishful thinking that is all but inevitable at times of profound uncertainty. before the last presidential election, this impulse manifested itself in a widespread obsession with poll numbers and data journalism" september , at : pm
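the "information gerrymandering" result described in the review above can be illustrated with a toy simulation. this is only a minimal sketch under my own assumptions, not the stewart–plotkin model from the nature paper: twelve voters, evenly split between two parties, each vote with the majority opinion of the three sources they read. both parties spend the same total "influence", yet the wiring of who reads whom swings the outcome.

```python
from collections import Counter

# toy illustration of "information gerrymandering" (not the published model):
# 12 voters, evenly split between parties A and B, each read 3 sources and
# vote with the majority opinion among their sources. both wirings below give
# each party exactly 18 source slots, yet the structure decides the outcome.

OPINIONS = ['A'] * 6 + ['B'] * 6  # voters 0-5 prefer A, 6-11 prefer B

def vote(sources, opinions=OPINIONS):
    """a voter adopts the majority opinion among the sources they read."""
    tally = Counter(opinions[s] for s in sources)
    return tally.most_common(1)[0][0]

def election(wiring):
    """wiring[i] lists the three source voters that voter i reads."""
    return Counter(vote(srcs) for srcs in wiring)

# balanced wiring: every voter sees two of their own side and one of the other.
balanced = [[0, 1, 6] for _ in range(6)] + [[6, 7, 0] for _ in range(6)]

# gerrymandered wiring: party A spreads its influence thin (9 voters each see
# A, A, B and so vote A), while party B's influence is wastefully concentrated
# (3 voters see B, B, B). A slots: 9*2 = 18; B slots: 9*1 + 3*3 = 18.
gerrymandered = [[i % 6, (i + 1) % 6, 6 + i % 6] for i in range(9)] + \
                [[6, 7, 8] for _ in range(3)]

print(election(balanced))       # a 6-6 split
print(election(gerrymandered))  # A wins 9-3 despite a 50/50 electorate
```

the point of the sketch is the one plotkin makes: no false information is involved, only the arrangement of who listens to whom.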
lockss system has permission to collect, preserve, and serve this archival unit.

coffee|code: dan scott's blog - coding
librarian · developer

our nginx caching proxy setup for evergreen details of our nginx caching proxy settings for evergreen

enriching catalogue pages in evergreen with wikidata an openly licensed javascript widget that enriches library catalogues with wikidata data

wikidata, canada , and music festival data at caml , stacy allison-cassin and i presented our arguments that wikidata is a good fit for communities who want to increase the visibility of canadian music in wikimedia foundation projects.

wikidata workshop for librarians interested in learning about wikidata? i delivered a workshop for librarians and archivists at the caml preconference. perhaps you will find the materials i developed useful for your own training purposes.

truly progressive webvr apps are available offline! i've been dabbling with the a-frame framework for creating webvr experiences for the past couple of months, ever since patrick trottier gave a lightning talk at the gdg sudbury devfest in november and a hands-on session with aframe in january.
the &# ;aframevr twitter feed regularly highlights cool new webvr apps … schema.org, wikidata, knowledge graph: strands of the modern semantic web my slides from ohio devfest : schema.org, wikidata, knowledge graph: strands of the modern semantic web and the video, recorded and edited by the incredible amazing patrick hammond: in november, i had the opportunity to speak at ohio devfest . one of the organizers, casey borders, had invited me … google scholar's broken recaptcha hurts libraries and their users update - - : the brilliant folk at unc figured out how to fix google scholar using a pre-scoped search so that, if a search is launched from the library web site, it will automatically associate that search with the library's licensed resources. no ezproxy required! for libraries, proxying user requests is … php's file_marc gets a new release ( . . ) yesterday, just one day before the anniversary of the . . release, i published the . . release of the pear file_marc library. the only change is the addition of a convenience method for fields called getcontents() that simply concatenates all of the subfields together in order, with … php's file_marc gets a new release ( . . ) yesterday, just one day before the anniversary of the . . release, i published the . . release of the pear file_marc library. the only change is the addition of a convenience method for fields called getcontents() that simply concatenates all of the subfields together in order, with … chromebooks and privacy: not always at odds on friday, june th i gave a short talk at the olita digital odyssey conference, which had a theme this year of privacy and security. my talk addressed the evolution of our public and loaner laptops over the past decade, from bare windows xp, to linux, windows xp with … chromebooks and privacy: not always at odds on friday, june th i gave a short talk at the olita digital odyssey conference, which had a theme this year of privacy and security. 
my talk addressed the evolution of our public and loaner laptops over the past decade, from bare windows xp, to linux, windows xp with … library stories: vision: "professional research tools" for a recent strategic retreat, i was asked to prepare (as homework) a story about a subject that i'm passionate about, with an idea of where we might see the library in the next three to five years. here's one of the stories i came up with, in the form … library stories: vision: "professional research tools" for a recent strategic retreat, i was asked to prepare (as homework) a story about a subject that i'm passionate about, with an idea of where we might see the library in the next three to five years. here's one of the stories i came up with, in the form … querying evergreen from google sheets with custom functions via apps script our staff were recently asked to check thousands of isbns to find out if we already have the corresponding books in our catalogue. they in turn asked me if i could run a script that would check it for them. it makes me happy to work with people who believe … querying evergreen from google sheets with custom functions via apps script our staff were recently asked to check thousands of isbns to find out if we already have the corresponding books in our catalogue. they in turn asked me if i could run a script that would check it for them. it makes me happy to work with people who believe … that survey about ezproxy oclc recently asked ezproxy clients to fill out a survey about their experiences with the product and to get feedback on possible future plans for the product. about half-way through, i decided it might be a good idea to post my responses. because hey, if i'm working to help them … that survey about ezproxy oclc recently asked ezproxy clients to fill out a survey about their experiences with the product and to get feedback on possible future plans for the product. 
about half-way through, i decided it might be a good idea to post my responses. because hey, if i'm working to help them … "the librarian" - an instruction session in the style of "the martian" i had fun today. a colleague in computer science has been giving his c++ students an assignment to track down an article that is only available in print in the library. when we chatted about it earlier this year, i suggested that perhaps he could bring me in as a … "the librarian" - an instruction session in the style of "the martian" i had fun today. a colleague in computer science has been giving his c++ students an assignment to track down an article that is only available in print in the library. when we chatted about it earlier this year, i suggested that perhaps he could bring me in as a … we screwed up: identities in loosely-coupled systems a few weeks ago, i came to the startling and depressing realization that we had screwed up. it started when someone i know and greatly respect ran into me in the library and said "we have a problem". i'm the recently appointed chair of our library and archives department, so … we screwed up: identities in loosely-coupled systems a few weeks ago, i came to the startling and depressing realization that we had screwed up. it started when someone i know and greatly respect ran into me in the library and said "we have a problem". i'm the recently appointed chair of our library and archives department, so … research across the curriculum the following post dates back to january , , when i had been employed at laurentian for less than a year and was getting an institutional repository up and running.... i think old me had some interesting thoughts! abstract the author advocates an approach to university curriculum that re-emphasizes the … research across the curriculum the following post dates back to january , , when i had been employed at laurentian for less than a year and was getting an institutional repository up and running.... 
i think old me had some interesting thoughts! abstract the author advocates an approach to university curriculum that re-emphasizes the … library and archives canada: planning for a new union catalogue update - - : clarified (in the privacy section) that only nrcan runs evergreen. i attended a meeting with library and archives canada today in my role as an ontario library association board member to discuss the plans around a new canadian union catalogue based on oclc's hosted services. following are some … library and archives canada: planning for a new union catalogue update - - : clarified (in the privacy section) that only nrcan runs evergreen. i attended a meeting with library and archives canada today in my role as an ontario library association board member to discuss the plans around a new canadian union catalogue based on oclc's hosted services. following are some … library catalogues and http status codes i noticed in google's webmaster tools that our catalogue had been returning some soft s. curious, i checked into some of the uris suffering from this condition, and realized that evergreen returns an http status code of ok when it serves up a record details page for a record … library catalogues and http status codes i noticed in google's webmaster tools that our catalogue had been returning some soft s. curious, i checked into some of the uris suffering from this condition, and realized that evergreen returns an http status code of ok when it serves up a record details page for a record … dear database vendor: defending against sci-hub.org scraping is going to be very difficult our library receives formal communications from various content/database vendors about "serious intellectual property infringement" on a reasonably regular basis, that urge us to "pay particular attention to proxy security". 
here is part of the response i sent to the most recent such request: we use the usagelimit directives that … dear database vendor: defending against sci-hub.org scraping is going to be very difficult our library receives formal communications from various content/database vendors about "serious intellectual property infringement" on a reasonably regular basis, that urge us to "pay particular attention to proxy security". here is part of the response i sent to the most recent such request: we use the usagelimit directives that … putting the "web" back into semantic web in libraries i was honoured to lead a workshop and speak at this year's edition of semantic web in bibliotheken (swib) in bonn, germany. it was an amazing experience; there were so many rich projects being described with obvious dividends for the users of libraries, once again the european library community fills … putting the "web" back into semantic web in libraries i was honoured to lead a workshop and speak at this year's edition of semantic web in bibliotheken (swib) in bonn, germany. it was an amazing experience; there were so many rich projects being described with obvious dividends for the users of libraries, once again the european library community fills … social networking for researchers: researchgate and their ilk the centre for research in occupational safety and health asked me to give a lunch'n'learn presentation on researchgate today, which was a challenge i was happy to take on... but i took the liberty of stretching the scope of the discussion to focus on social networking in the context of … social networking for researchers: researchgate and their ilk the centre for research in occupational safety and health asked me to give a lunch'n'learn presentation on researchgate today, which was a challenge i was happy to take on... 
but i took the liberty of stretching the scope of the discussion to focus on social networking in the context of … how discovery layers have closed off access to library resources, and other tales of schema.org from lita forum at the lita forum yesterday, i accused (presentation) most discovery layers of not solving the discoverability problems of libraries, but instead exacerbating them by launching us headlong to a closed, unlinkable world. coincidentally, lorcan dempsey's opening keynote contained a subtle criticism of discovery layers. i wasn't that subtle. here's why … how discovery layers have closed off access to library resources, and other tales of schema.org from lita forum at the lita forum yesterday, i accused (presentation) most discovery layers of not solving the discoverability problems of libraries, but instead exacerbating them by launching us headlong to a closed, unlinkable world. coincidentally, lorcan dempsey's opening keynote contained a subtle criticism of discovery layers. i wasn't that subtle. here's why … dcmi : schema.org holdings in open source library systems my slides from dcmi : schema.org in the wild: open source libraries++. last week i was at the dublin core metadata initiative conference, where richard wallis, charles maccathie nevile and i were slated to present on schema.org and the work of the w c schema.org bibliographic extension … my small contribution to schema.org this week version . of the http://schema.org vocabulary was released a few days ago, and i once again had a small part to play in it. with the addition of the workexample and exampleofwork properties, we (richard wallis, dan brickley, and i) realized that examples of these creativework example … my small contribution to schema.org this week version . of the http://schema.org vocabulary was released a few days ago, and i once again had a small part to play in it. 
with the addition of the workexample and exampleofwork properties, we (richard wallis, dan brickley, and i) realized that examples of these creativework example … posting on the laurentian university library blog since returning from my sabbatical, i've felt pretty strongly that one of the things our work place is lacking is open communication about the work that we do--not just outside of the library, but within the library as well. i'm convinced that the more that we know about the demands … posting on the laurentian university library blog since returning from my sabbatical, i've felt pretty strongly that one of the things our work place is lacking is open communication about the work that we do--not just outside of the library, but within the library as well. i'm convinced that the more that we know about the demands … cataloguing for the open web: schema.org in library catalogues and websites tldr; my slides are href="http://stuff.coffeecode.net/ /understanding_schema">here, and the slides from jenn and jason are also available from href="http://connect.ala.org/node/ ">ala connect. on sunday, june th jenn riley, jason clark, and i presented at the alcts/lita jointly sponsored session … cataloguing for the open web: schema.org in library catalogues and websites tldr; my slides are href="http://stuff.coffeecode.net/ /understanding_schema">here, and the slides from jenn and jason are also available from href="http://connect.ala.org/node/ ">ala connect. on sunday, june th jenn riley, jason clark, and i presented at the alcts/lita jointly sponsored session … linked data interest panel, part good talk by richard wallis this morning at the ala annual conference on publishing entities on the web. many of his points map extremely closely to what i've been saying and will be saying tomorrow during my own session (albeit with ten fewer minutes). 
i was particularly heartened to hear … linked data interest panel, part good talk by richard wallis this morning at the ala annual conference on publishing entities on the web. many of his points map extremely closely to what i've been saying and will be saying tomorrow during my own session (albeit with ten fewer minutes). i was particularly heartened to hear … rdfa introduction and codelabs for libraries my rdfa introduction and codelab materials for the ala preconference on practical linked data with open source are now online! and now i've finished leading the rdfa + schema.org codelab that i've been stressing over and refining for about a month at the american library association annual conference practical … rdfa introduction and codelabs for libraries my rdfa introduction and codelab materials for the ala preconference on practical linked data with open source are now online! and now i've finished leading the rdfa + schema.org codelab that i've been stressing over and refining for about a month at the american library association annual conference practical … dropping back into the semantic web i've been at the extended (formerly european) semantic web conference ( eswc) in anissaras, greece for four days now. my reason for attending was to present my paper seeding structured data by default in open source library systems (presentation) (paper). it has been fantastic. as a librarian attending a conference … dropping back into the semantic web i've been at the extended (formerly european) semantic web conference ( eswc) in anissaras, greece for four days now. my reason for attending was to present my paper seeding structured data by default in open source library systems (presentation) (paper). it has been fantastic. 
as a librarian attending a conference … rdfa, schema.org, and open source library systems two things of note: i recently submitted the camera-ready copy for my eswc paper, seeding structured data by default via open source library systems (**preprint**). the paper focuses on the work i've done with evergreen, koha, and vufind to use emerging web standards such as rdfa lite and schema … rdfa, schema.org, and open source library systems two things of note: i recently submitted the camera-ready copy for my eswc paper, seeding structured data by default via open source library systems (**preprint**). the paper focuses on the work i've done with evergreen, koha, and vufind to use emerging web standards such as rdfa lite and schema … mapping library holdings to the product / offer mode in schema.org back in august, i mentioned that i taught evergreen, koha, and vufind how to express library holdings in schema.org via the http://schema.org/offer class. what i failed to mention was how others can do the same with their own library systems (well, okay, i linked to the … mapping library holdings to the product / offer mode in schema.org back in august, i mentioned that i taught evergreen, koha, and vufind how to express library holdings in schema.org via the http://schema.org/offer class. what i failed to mention was how others can do the same with their own library systems (well, okay, i linked to the … what would you understand if you read the entire world wide web? on tuesday, february th, i'll be participating in laurentian university's research week lightning talks. unlike most five-minute lightning talk events in which i've participated, the time limit for each talk tomorrow will be one minute. imagine different researchers getting up to summarize their research in one minute each, and … what would you understand if you read the entire world wide web? on tuesday, february th, i'll be participating in laurentian university's research week lightning talks. 
unlike most five-minute lightning talk events in which i've participated, the time limit for each talk tomorrow will be one minute. imagine different researchers getting up to summarize their research in one minute each, and … ups and downs tuesday was not the greatest day, but at least each setback resulted in a triumph... first, the periodical proposal for schema.org--that i have poured a good couple of months of effort into--took a step closer to reality when dan brickley announced on the public-vocabs list that he had … ups and downs tuesday was not the greatest day, but at least each setback resulted in a triumph... first, the periodical proposal for schema.org--that i have poured a good couple of months of effort into--took a step closer to reality when dan brickley announced on the public-vocabs list that he had … broadening support for linked data in marc the following is an email that i sent to the marc mailing list on january , that might be of interest to those looking to provide better support for linked data in marc (hopefully as just a transitional step): in the spirit of making it possible to express linked … broadening support for linked data in marc the following is an email that i sent to the marc mailing list on january , that might be of interest to those looking to provide better support for linked data in marc (hopefully as just a transitional step): in the spirit of making it possible to express linked … want citations? release your work! last week i was putting the finishing touches on the first serious academic paper i have written in a long time, and decided that i wanted to provide backup for some of the assertions i had made. naturally, the deadline was tight, so getting any articles via interlibrary loan was … want citations? release your work! last week i was putting the finishing touches on the first serious academic paper i have written in a long time, and decided that i wanted to provide backup for some of the assertions i had made. 
naturally, the deadline was tight, so getting any articles via interlibrary loan was … file_marc: . . release fixes data corruption bug i released file_marc . . yesterday after receiving a bug report from the most excellent mark jordan about a basic (but data corrupting) problem that had existed since the very early days (almost seven years ago). if you generate marc binary output from file_marc, you should upgrade immediately. in … file_marc: . . release fixes data corruption bug i released file_marc . . yesterday after receiving a bug report from the most excellent mark jordan about a basic (but data corrupting) problem that had existed since the very early days (almost seven years ago). if you generate marc binary output from file_marc, you should upgrade immediately. in … talk proposal: structuring library data on the web with schema.org: we're on it! i submitted the following proposal to the library technology conference and thought it might be of general interest. structuring library data on the web with schema.org: we're on it! abstract until recently, there has been a disappointing level of adoption of schema.org structured data in traditional core … talk proposal: structuring library data on the web with schema.org: we're on it! i submitted the following proposal to the library technology conference and thought it might be of general interest. structuring library data on the web with schema.org: we're on it! abstract until recently, there has been a disappointing level of adoption of schema.org structured data in traditional core … file_marc makes it to stable . . release (finally!) way back in , i thought "it's a shame there is no php library for parsing marc records!", and given that much of my most recent coding experience was in the php realm, i thought it would be a good way of contributing to the world of code lib. thus file_marc … file_marc makes it to stable . . release (finally!) 
way back in , i thought "it's a shame there is no php library for parsing marc records!", and given that much of my most recent coding experience was in the php realm, i thought it would be a good way of contributing to the world of code lib. thus file_marc … finally tangoed with reveal.js to create presentations ... and i have enjoyed the dance. yes, i know i'm way behind the times. over the past few years i was generating presentations via asciidoc, and i enjoyed its very functional approach and basic output. however, recently i used google drive to quickly create a few slightly prettier but much … finally tangoed with reveal.js to create presentations ... and i have enjoyed the dance. yes, i know i'm way behind the times. over the past few years i was generating presentations via asciidoc, and i enjoyed its very functional approach and basic output. however, recently i used google drive to quickly create a few slightly prettier but much … rdfa and schema.org all the library things tldr: the evergreen and koha integrated library systems now express their record details in the schema.org vocabulary out of the box using rdfa. individual holdings are expressed as offer instances per the w c schema bib extension community group proposal to parallel commercial sales offers. and i have published a … rdfa and schema.org all the library things tldr: the evergreen and koha integrated library systems now express their record details in the schema.org vocabulary out of the box using rdfa. individual holdings are expressed as offer instances per the w c schema bib extension community group proposal to parallel commercial sales offers. and i have published a … a flask of full-text search in postgresql update: more conventional versions of the slides are available from google docs or in on speakerdeck (pdf) . on august , , i gave the following talk at the pycon canada conference: i’m a systems librarian at laurentian university. 
for the past six years, my day job and research … a flask of full-text search in postgresql update: more conventional versions of the slides are available from google docs or in on speakerdeck (pdf) . on august , , i gave the following talk at the pycon canada conference: i’m a systems librarian at laurentian university. for the past six years, my day job and research … parsing the schema.org vocabulary for fun and frustration for various reasons i've spent a few hours today trying to parse the schema.org vocabulary into a nice, searchable database structure. unfortunately, for a linked data effort that's two years old now and arguably one of the most important efforts out there, it's been an exercise in frustration. owl … parsing the schema.org vocabulary for fun and frustration for various reasons i've spent a few hours today trying to parse the schema.org vocabulary into a nice, searchable database structure. unfortunately, for a linked data effort that's two years old now and arguably one of the most important efforts out there, it's been an exercise in frustration. owl … linked data irony, example one of probably many i'm currently ramping up my knowledge of the linked dataworld, and ran across the proceedings of the www workshop on linked data on the web. which are published on the web (yay!) as open access (yay!) in pdf (what?). thus, the papers from the linked data workshop at the w … linked data irony, example one of probably many i'm currently ramping up my knowledge of the linked dataworld, and ran across the proceedings of the www workshop on linked data on the web. which are published on the web (yay!) as open access (yay!) in pdf (what?). thus, the papers from the linked data workshop at the w … pycon canada - postgresql full-text search and flask on august , , i'll be giving a twenty-minute talk at pycon canada on a flask of full-text search with postgresql. 
i'm very excited to be talking about python, at a python conference, and to be giving the python audience a peek at postgresql's full-text search capabilities. with a twenty … pycon canada - postgresql full-text search and flask on august , , i'll be giving a twenty-minute talk at pycon canada on a flask of full-text search with postgresql. i'm very excited to be talking about python, at a python conference, and to be giving the python audience a peek at postgresql's full-text search capabilities. with a twenty … carlcore metadata application profile for institutional repositories a long time ago, in what seemed like another life, i attended the access conference as a relatively new systems librarian at laurentian university. the subject of the preconference was this totally new-to-me thing called "institutional repositories", which i eventually worked out were basically web applications oriented towards content … carlcore metadata application profile for institutional repositories a long time ago, in what seemed like another life, i attended the access conference as a relatively new systems librarian at laurentian university. the subject of the preconference was this totally new-to-me thing called "institutional repositories", which i eventually worked out were basically web applications oriented towards content … making the evergreen catalogue mobile-friendly via responsive css back in november the evergreen community was discussing the desire for a mobile catalogue, and expressed a strong opinion that the right way forward would be to teach the current catalogue to be mobile-friendly by applying principles of responsive design. 
in fact, i stated: almost all of this can be … making the evergreen catalogue mobile-friendly via responsive css back in november the evergreen community was discussing the desire for a mobile catalogue, and expressed a strong opinion that the right way forward would be to teach the current catalogue to be mobile-friendly by applying principles of responsive design. in fact, i stated: almost all of this can be … structured data: making metadata matter for machines update - - : now with video of the presentation, thanks to the awesome #egcon volunteers! i've been attending the evergreen conference in beautiful vancouver. this morning, i was honoured to be able to give a presentation on some of the work i've been doing on implementing linked data via schema … structured data: making metadata matter for machines update - - : now with video of the presentation, thanks to the awesome #egcon volunteers! i've been attending the evergreen conference in beautiful vancouver. this morning, i was honoured to be able to give a presentation on some of the work i've been doing on implementing linked data via schema … introducing version control & git in . hours to undergraduates our university offers a computer science degree, but the formal curriculum does not cover version control (or a number of other common tools and practices in software development). students that have worked for me in part-time jobs or summer positions have said things like: if it wasn't for that one … introducing version control & git in . hours to undergraduates our university offers a computer science degree, but the formal curriculum does not cover version control (or a number of other common tools and practices in software development). students that have worked for me in part-time jobs or summer positions have said things like: if it wasn't for that one … triumph of the tiny brain: dan vs. drupal / panels a while ago i inherited responsibility for a drupal instance and a rather out-of-date server. 
(you know it's not good when your production operating system is so old that it is no longer getting security updates). i'm not a drupal person. i dabbled with drupal years and years ago … finding drm-free books on the google play store john mark ockerbloom recently said, while trying to buy a drm-free copy of john scalzi's redshirts on the google play store: “the catalog page doesn’t tell me what format it’s in, or whether it has drm; it instead just asks me to sign in to buy it.” i … first go program: converting google scholar xml holdings to ebsco discovery service holdings update - - : and here's how to implement stream-oriented xml parsing many academic libraries are already generating electronic resource holdings summaries in the google scholar xml holdings format, and it seems to provide most of the metadata you would need to provide a discovery layer summary in a nice, granular format … what does a system librarian do?
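The stream-oriented XML parsing mentioned in the Go-program post above has a direct analogue in Python's standard library: `xml.etree.ElementTree.iterparse` lets you walk a large holdings file element by element without loading it all into memory. This is a sketch in Python rather than Go, and the element names (`item`, `title`, `issn`) are assumptions for illustration, not the real Google Scholar schema:

```python
# Stream-parse an XML holdings file without building the whole tree in memory.
# Element names are hypothetical; adjust them to the real holdings schema.
import io
import xml.etree.ElementTree as ET

def iter_holdings(stream):
    """Yield (title, issn) tuples, releasing each finished element as we go."""
    for event, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == "item":
            title = elem.findtext("title", default="")
            issn = elem.findtext("issn", default="")
            yield title, issn
            elem.clear()  # free the memory held by this completed element

sample = io.StringIO(
    "<holdings>"
    "<item><title>Journal of Testing</title><issn>1234-5678</issn></item>"
    "<item><title>Library Hi Tech</title><issn>0737-8831</issn></item>"
    "</holdings>"
)
print(list(iter_holdings(sample)))
```

The same end-event pattern is what Go's `encoding/xml` token-based decoder gives you; the point in both languages is constant memory use regardless of file size.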
preface: i'm talking to my daughter's kindergarten class tomorrow about my job. exciting! so i prepped a little bit; it will probably go entirely different, but here's how it's going to go in my mind... my name is dan scott. i’m amber’s dad. i’m a systems librarian … farewell, old google books apis since the announcement of the new v google books api, i've been doing a bit of work with it in python (following up on my part of the conversation). today, google announced that many of their older apis were now officially deprecated. included in that list are the google books … the new google books api and possibilities for libraries on the subject of the new google books api that was unveiled during the google io conference last week, jonathan rochkind states: once you have an api key, it can keep track of # requests for that key — it’s not clear to me if they rate limit you, and … creating a marc record from scratch in php using file_marc in the past couple of days, two people have written me email essentially saying: "dan, this file_marc library sounds great - but i can't figure out how to create a record from scratch with it! can you please help me?" yes, when you're dealing with marc, you'll quickly get all weepy … access conference in beautiful british columbia the official announcement for the canadian library association (cla) emerging technology interest group (etig)-sponsored access conference for went out back in november, announcing vancouver, british columbia, as the host. note that the schedule has changed from its original dates to october - ! i've told a number of people … troubleshooting ariel send and receive functionality i'm posting the following instructions for testing the ports required by ariel interlibrary loan software. i get requests for this information a few times a year, and at some point it will be easier to find on my blog than to dig through my email archives from over years … chilifresh-using libraries: are you violating copyright? 
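The "creating a MARC record from scratch" post above is about File_MARC's PHP API. To show what any MARC library has to manage under the hood, here is a from-scratch toy serializer for the MARC21 (ISO 2709) binary structure in Python; it is deliberately simplified (ASCII only, bibliographic leader values hard-coded) and is not File_MARC itself:

```python
# Toy MARC21 (ISO 2709) serializer: a 24-byte leader, a directory of 12-byte
# entries (tag, field length, starting position), then the field data.
FIELD_TERM = "\x1e"    # ends the directory and each variable field
RECORD_TERM = "\x1d"   # ends the record
SUBFIELD_DELIM = "\x1f"

def data_field(indicators, subfields):
    """Build one variable data field, e.g. data_field('10', [('a', 'Title')])."""
    body = "".join(SUBFIELD_DELIM + code + value for code, value in subfields)
    return indicators + body + FIELD_TERM

def record(fields):
    """Serialize [(tag, field_data), ...] into a MARC21 record string."""
    directory, body = "", ""
    for tag, data in fields:
        directory += "%s%04d%05d" % (tag, len(data), len(body))
        body += data
    directory += FIELD_TERM
    base = 24 + len(directory)      # offset where the field data starts
    length = base + len(body) + 1   # +1 for the record terminator
    leader = "%05dnam a22%05d a 4500" % (length, base)  # exactly 24 chars
    return leader + directory + body + RECORD_TERM

rec = record([("245", data_field("10", [("a", "A title, from scratch")]))])
print(len(rec), rec[:24])
```

Keeping the directory offsets and the two length fields in the leader consistent by hand is exactly the bookkeeping that makes a library like File_MARC worth using.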
when i was preparing my access presentation about social sharing and aggregation in library software, i came across chilifresh, a company that aggregates reviews written by library patrons from across libraries that subscribe to the company's review service. i was a bit disappointed to see that the service almost … on avoiding accusations of forking a project sometimes forking a project is necessary to reassert community control over a project that has become overly dominated by a single corporation: see openindiana and libreoffice for recent examples. and in the world of distributed version control systems, forking is viewed positively; it's a form of evolution, where experimental … library hackers want you to throw down the gauntlet on october th, a very special event is happening: the access hackfest. a tradition since access , the hackfest brings together library practitioners of all kinds to tackle challenges and problems from the mundane to the sublime to the ridiculous. if you can imagine a spectrum with three axes, you … file_marc . . - now offering two tasty flavours of marc-as-json output i've just released the php pear library file_marc . . . this release brings two json serialization output methods for marc to the table: tojsonhash() returns json that adheres to bill dueber's proposal for the array-oriented marc-hash json format at new interest in marc-hash json tojson() returns json that adheres … in which i perceive that gossip is not science marshall breeding published the results of his international survey of library automation a few days ago. juicy stuff, with averages, medians, and modes for the negative/positive responses on a variety of ils and vendor-related questions, and some written comments from the respondents.
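The marc-as-json post above mentions Bill Dueber's array-oriented marc-hash proposal. The sketch below illustrates the general idea in Python without claiming to reproduce the exact key names of the real spec: arrays rather than JSON objects carry the fields, because MARC tags and subfield codes can repeat and JSON object keys cannot.

```python
# Array-oriented JSON serialization for MARC-style data, in the spirit of the
# marc-hash proposal. Key names here are illustrative assumptions, not the
# authoritative spec.
import json

marc_hash = {
    "leader": "00064nam a2200037 a 4500",
    "control_fields": [["008", "210101s2021    xx            000 0 eng d"]],
    "data_fields": [
        ["245", "1", "0", [["a", "A title"], ["b", "a subtitle"]]],
        ["650", " ", "0", [["a", "Metadata"]]],
        ["650", " ", "0", [["a", "MARC formats"]]],  # repeated tag: fine in a list
    ],
}

as_json = json.dumps(marc_hash)
roundtrip = json.loads(as_json)
print(roundtrip["data_fields"][0][3][0])
```

Because everything is plain lists and strings, the structure round-trips through JSON losslessly, which is the whole appeal of the format.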
one would expect the library geek … pkg_check_modules syntax error near unexpected token 'deps,' the next time you bash your brains against autotools for a while, wondering why your perfectly good pkg_check_modules() macro, cut and pasted directly from the recommended configure.ac entry for the package you're trying to integrate (in this case libmemcached), gives you the error message pkg_check_modules syntax error … marc library for c# coders c# isn't in my go-to list of programming languages, but i can understand why others would be interested in developing applications in c#. so it's good news to the c# community of library developers (it would be interesting to find out how many of you are out there) that there … doing useful things with the txt dump of sfx holdings, part : database there must be other people who have done much more intelligent things than me with the txt dump of sfx holdings that you can generate via the web administration interface, but as i've gone through this process at least twice and rediscovered it each time, perhaps i'll save myself an hour … transparent acquisitions budgets and expenditures for academic libraries in my most recent post over at the academic matters site, after a general discussion about "new books lists" in academic libraries, i tackle one of the dirty laundry areas for academic libraries: exposing how collection development funds are allocated to departments. here's a relevant quote: for - , we decided … making skype work in a windows xp virtualbox guest instance if you, like me, install skype in a windows xp virtualbox guest instance running on an ubuntu host on a thinkpad t with an intel dual-core -bit processor, it might throw windows exceptions and generate error reports as reported in virtualbox ticket # .
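The SFX-holdings post above is about getting a tab-delimited text dump into a database so it can actually be queried. A minimal Python sketch of that step, using SQLite; the column layout (title, issn, coverage dates) is an assumption for illustration, since a real SFX export carries more columns:

```python
# Load a tab-delimited holdings dump into SQLite for easy querying.
# Column names are hypothetical; a real SFX export has more of them.
import csv
import io
import sqlite3

def load_holdings(conn, stream):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS holdings "
        "(title TEXT, issn TEXT, date_from TEXT, date_to TEXT)"
    )
    rows = csv.reader(stream, delimiter="\t")
    conn.executemany("INSERT INTO holdings VALUES (?, ?, ?, ?)", rows)
    conn.commit()

sample = io.StringIO(
    "Journal of Testing\t1234-5678\t1995\t2005\n"
    "Library Hi Tech\t0737-8831\t1998\t\n"
)
conn = sqlite3.connect(":memory:")
load_holdings(conn, sample)
print(conn.execute("SELECT count(*) FROM holdings").fetchone()[0])
```

Once the dump is in a table, the one-off questions ("which titles have coverage ending before 2005?") become single SQL statements instead of ad hoc text munging.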
if you then go into your … in which my words also appear elsewhere i'm excited to announce the availability of my first post as an invited contributor to the more than bookends blog over at the revamped academic matters web site. my fellow contributors are anne fullerton and amy greenberg, and i'm delighted to be included with them in our appointed task of … presentation: libx and zotero direct link to the instructional presentation on libx and zotero at laurentian university (odt) (pdf) i had the pleasure of giving an instructional session to a class of graduate students on monday, november th. the topic i had been asked to present was an extended version of the artificially enhanced … archive of oclc worldcat policy as posted - - i noticed last night (sunday, november nd, ) that the new and much-anticipated / feared oclc worldcat policy had been posted. as far as the clarified terms went, i was willing to give them the benefit of the doubt until they were actually posted. i was first alerted to the freshly … dear dan: why is using flash for navigation a bad idea? i received the following email late last week, and took the time to reply to it tonight. i had originally been asked by a friend to help diagnose why his organization's site navigation wasn't working in some of his browsers. i noticed that the navigation bar was implemented in flash … boss me around, s'il vous plait my place of work, laurentian university, is looking for a new director of the j.n. desmarais library. the call for applications closes october th. i think our library has done some impressive work (participating in the food security project for the democratic republic of congo, building the mining environment … software freedom day - sudbury i opted to do something unusual (for me) this year when i learned about software freedom day; i signed up to organize an event in sudbury.
given everything that was already on my plate, it was pure foolishness to do so - but it was also important to … in which digital manifestations of myself plague the internets over the past few months, i've been fortunate enough to participate in a few events that have been recorded and made available on the 'net for your perpetual amusement. well - amusing if you're a special sort of person. following are the three latest such adventures, in chronological order: couchdb: delicious … test server strategies occasionally on the #openils-evergreen irc channel, a question comes up about what kind of hardware a site should buy if they're getting serious about trying out evergreen. i had exactly the same chat with mike rylander back in december, so i thought it might be useful to share the strategy we … inspiring confidence that my problem will be solved hmm. i think i'm in trouble if the support site itself is incapable of displaying accented characters properly. corrupted characters in a problem report about corrupted characters. oh dear. my analysis of the problem is that the content in the middle is contained within a frame, and is actually encoded … couchdb: delicious sacrilege well, the talk about couchdb (an open-source document database similar in concept to lotus notes, but with a restful api and json as an interchange format) wasn't as much of a train wreck as it could have been. i learned a lot putting it together, and had some fun with … oooh... looks like i've got (even more) work cut out for me php is getting a native doubly-linked list structure. this is fabulous news; when i wrote the file_marc pear package, i ended up having to implement a linked list class in pear to support it. file_marc does its job today (even though i haven't taken it out of alpha yet), but … geek triumph what a night.
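The linked-list post above mentions having had to implement a linked list class in PEAR to support File_MARC before PHP grew a native doubly-linked list. The core of such a structure is small; here is a minimal Python sketch (not the PEAR Structures_LinkedList API):

```python
# Minimal doubly-linked list: each node points to both neighbours, so you can
# append in O(1) and walk the list in either direction.
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

class DoublyLinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, value):
        node = Node(value)
        if self.tail is None:          # empty list
            self.head = self.tail = node
        else:
            node.prev = self.tail
            self.tail.next = node
            self.tail = node
        return node

    def __iter__(self):                # head-to-tail traversal
        node = self.head
        while node is not None:
            yield node.value
            node = node.next

    def reversed(self):                # tail-to-head traversal
        node = self.tail
        while node is not None:
            yield node.value
            node = node.prev

dll = DoublyLinkedList()
for tag in ("100", "245", "650"):
    dll.append(tag)
print(list(dll), list(dll.reversed()))
```

For a MARC library, the payoff of the doubly-linked structure is cheap insertion of a field before or after an existing one while preserving field order.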
i upgraded serendipity, dokuwiki, drupal, involving four different servers and three different linux distros, and shifted one application from one server to another (with seamless redirects from the old server to the new) with close to no downtime. i think this is the first time i've completed … a chance to work at laurentian university library hey folks, if you're interested in working at laurentian university, we've got a couple of tenure-track positions looking for qualified people who can stand the thought of working with me... (nothing like narrowing the field dramatically, ah well). the following position descriptions are straight out of the employment vacancies page … ariel: go back to your room, now! i've been working on automating the delivery of electronic documents to our patrons; most of the work over the summer was spent in ensuring that we had our legal and policy bases covered. i read through the documentation for ariel, our chosen ill software, to ensure that everything we wanted … "a canonical example of a next-generation opac?" ooh, yes, i remember writing that now. not about evergreen, which has book bags and format limiters and facets and whiz-bangy unapi goodness whose potential for causing mayhem has barely been scratched - but about fac-back-opac, the django-and-solr-and-jython beast that mike beccaria and i picked up from casey durfee's scraps pile … the pain: discovery layer selection i returned from a week of vacation to land solidly in the middle of a discovery layer selection process -- not for our library, yet, but from a consortial perspective clearly having some impact on possible decisions for us further on down the road. as the systems librarian, i was nominated … access draft program is online! i had been getting anxious about the lack of news on the access conference front, but just saw in my trusty rss feed that the draft program schedule is now available. 
i'm already looking forward to jessamyn west's opening keynote and roy tennant's closing keynote. they always bring … evergreen vmware image available for download after much iteration and minor bug-squashing in my configuration, i am pleased to announce the evergreen on gentoo vmware image is available for download. the download itself is approximately mb as a zipped image; when you unzip the image, it will require approximately gb of disk space. ( ) basic instructions … in which i make one apology, and two lengthy explanations i recently insulted richard wallis and rob styles of talis by stating on dan chudnov's blog: to me it felt like talis was in full sales mode during both richard's api talk and rob's lightning talk i must apologize for using the terms "sales mode" and "sales pitch" to describe … facbackopac: making casey durfee's code talk to unicorn for the past couple of days, i've been playing with casey durfee's code that uses solr and django to offer a faceted catalogue. my challenge? turn a dense set of code focused on dewey and horizon ils into a catalogue that speaks lc and unicorn. additionally, i want it to … lightning talk: file_marc for php i gave a lightning talk at the code lib conference today on “file_marc for php” introducing the file_marc library to anybody who hasn't already heard about it. i crammed nine slides of information into five minutes, which was hopefully enough to convince people to start using it and provide feedback on … google summer of code lib? google just announced that they will start accepting applications in march for the google summer of code (gsoc) . in , over organizations participated in the gsoc, and google expects to have a similar number participating in . there are no lack of potential open-source development projects in the … long time, no wild conjecture so here's the first of two posts based on purely wild conjecture. 
in a lengthening chain of trackbacks, ryan eby mentioned christina's observation that springlink has started displaying google ads, presumably to supplement their subscription and pay-per-article income. ryan goes on to wonder: will vendors continue with the subscription model … a short-term sirsidynix prediction the second of tonight's wild conjecture-based predictions. one of the things that i was thinking about as i was shovelling the snow off our driveway on monday (other than yes! finally some snow... one of these days amber is going to go rolling around in it) was the position that … reflections at the start of was a year full of change - wonderful, exhausting change. here's a month-by-month summary of the highlights of : january i did a whole lot of work on the pecl ibm_db extension, reviewed a good book on xml and php, and finally fixed up my blog a little bit. i've … oh, vista has _acquired_ sirsidynix... a little over a week ago, i made the following prediction following the extremely under-the-radar press release on december nd that vista equity partners was investing in sirsidynix: i'll go out on a limb and say that a merger or acquisition of sirsidynix in is unlikely ( % confidence), but … musing about sirsidynix's new investment partner sirsi corporation merged with dynix corporation in june . now sirsidynix has announced that vista equity partners is investing in their company. let's take a look at vista's investment philosophy: *we invest in companies that uniquely leverage technology to deliver best-of-class products or services.* i wonder if vista confused "most … save your forehead from flattening prematurely i gave up on trying to get ubuntu . (edgy eft) to run ejabberd today; it looks like there are some fundamental issues going on between the version of erlang and the version of ejabberd that get bundled together. 
that was a fairly serious setback to my "evergreen on … bibliocommons wireframe walk-through after the future of the ils symposium wrapped up, beth jefferson walked some of us through the current state of the bibliocommons mocked-up web ui for public library catalogs; the project grew out of a youth literacy project designed to encourage kids to read through the same sort of social … future of the ils symposium: building our community and a business case i headed down to windsor early on tuesday morning for the future of the ils symposium hosted by the leddy library at the university of windsor. it was a good thing i decided to take the hours of bus + train approach to getting there, as sudbury's airport was completely … neat-o: archimède uses apache derby a while back i mentioned on the dspace-devel mailing list that i was interested in adapting dspace to use embedded apache derby as the default database, rather than postgresql, as a means of lowering the installation and configuration barriers involved with setting up access to an external database. i haven't … pear file_marc . . alpha officially released just a short note to let y'all know that i received the thumbs-up from my fellow pear developers to add file_marc as an official pear package. what does this mean? well, assuming you have php . + and pear installed, you can now download and install file_marc and its prerequisite … belated access notes: saturday, oct. th final entry in publishing my own hastily jotted access conference notes--primarily for my own purposes, but maybe it will help you indirectly find some real content relating to your field of interest at the official podcast/presentation web site for access . 
contents include: consortial updates from asin, quebec … getting the goods: libraries and the last mile in my continuing series of publishing my access notes, roy tennant's keynote on finishing the task of connecting our users to the information they need is something to which every librarian should pay attention. if you don't understand something i've written, there's always the podcast of roy's talk. in … access notes: october my continuing summaries from access . thursday, october th was the first "normal" day of the conference featuring the following presentations: open access, open source, content deals: who pays? (leslie weir) our ontario: yours to recover (art rhyno, walter lewis) improving the catalogue interface using endeca (tito sierra) lightning talks … library geeks in human form so, i think i read somewhere on #code lib that dan chudnov, the most excellent host of the library geeks podcast, refused to make human-readable links to the mp files for the podcasts available in plain old html because he had bought into the stodgy old definition of podcasts (hah! "stodgy … double-barreled php releases i'm the proud parent of two new releases over the past couple of days: one official pear release for linked list fans, and another revision of the file_marc proposal for library geeks. structures_linkedlist a few days ago marked the first official pear release of the structures_linkedlist. yes, it's only at … feeling sorry for our vendor so i'm here in rainy alabama (the weather must have followed me from ottawa) taking a training course from our ils vendor. i'm getting some disturbing insights into the company that are turning my general state of disbelief at the state of the system that we're paying lots of money … backlog of access notes following on my plea for access to access presentations, i'm in the process of posting the notes i took at the carl institutional repository pre-conference and access .
i probably should have posted these to a wiki so that others (like the presenters) could go ahead and make corrections/additions … calling for access to all future access presentations it's a bit late now, but as the guy in the corner with the clicky keyboard desperately trying to take notes during the presentations (when not stifling giggles and snorts from #code lib), i would be a lot more relaxed if i was certain that the presentations were going to be … secretssss of free wifi at access the bulk of the access conference is being held at a hotel-that-shall-not-be-named-for-reasons-that-will-become-apparent-shortly in ottawa this week. i was at the carl pre-conference on institutional repositories today and a kind man (wayne johnston from the university of guelph) tipped me off that the hotel's pay-for-wifi system is a little bit … laundry list systems librarians on the always excellent techessence, dorothea salo posted hiring a systems librarian. the blog post warned against libraries who put together a “laundry-list job description” for systems librarians: sure, it'd be nice to have someone who can kick-start a printer, put together a desktop machine from scraps, re-architect a website … file_marc and structure_linked_list: new alpha releases earlier in the month i asked for feedback on the super-alpha marc package for php. most of the responses i received were along the lines of "sounds great!" but there hasn't been much in the way of real suggestions for improvement. in the mean time, i've figured out (with lukas … super-alpha marc package for php: comments requested okay, i've been working on this project (let's call it pear_marc, although it's not an official pear project yet) in my spare moments over the past month or two. it's a new php package for working with marc records. 
the package tries to follow the pear project standards (coding, documentation …

tether price today, usdt live marketcap, chart, and info | coinmarketcap
if you would like to know where to buy tether, the top exchanges for trading in tether are currently binance, tokocrypto, okex, cointiger, and huobi global. you can find others listed on our crypto exchanges page. what is tether (usdt)? usdt is a stablecoin (stable-value cryptocurrency) that mirrors the price of the u.s. dollar, issued by a hong kong-based company, tether. the token’s peg to the usd is achieved via maintaining a sum of commercial paper, fiduciary deposits, cash, reserve repo notes, and treasury bills in reserves that is equal in usd value to the number of usdt in circulation. originally launched in july as realcoin, a second-layer cryptocurrency token built on top of bitcoin’s blockchain through the use of the omni platform, it was later renamed to ustether, and then, finally, to usdt. in addition to bitcoin’s, usdt was later updated to work on the ethereum, eos, tron, algorand, and omg blockchains. the stated purpose of usdt is to combine the unrestricted nature of cryptocurrencies — which can be sent between users without a trusted third-party intermediary — with the stable value of the us dollar. who are the founders of tether?
usdt — or as it was known at the time, realcoin — was launched in by brock pierce, reeve collins and craig sellars. brock pierce is a well-known entrepreneur who has co-founded a number of high-profile projects in the crypto and entertainment industries. in , he co-founded a venture capital firm blockchain capital, which by had raised over $ million in funding. in , pierce became the director of the bitcoin foundation, a nonprofit established to help improve and promote bitcoin. pierce has also co-founded block.one, the company behind eos, one of the largest cryptocurrencies on the market. reeve collins was the ceo of tether for the first two years of its existence. prior to that, he had co-founded several successful companies, such as the online ad network traffic marketplace, entertainment studio redlever and gambling website pala interactive. as of , collins is heading smarmedia technologies, a marketing and advertising tech company. other than working on tether, craig sellars has been a member of the omni foundation for over six years. its omni protocol allows users to create and trade smart-contract based properties and currencies on top of bitcoin’s blockchain. sellars has also worked in several other cryptocurrency companies and organizations, such as bitfinex, factom, synereo and the maidsafe foundation. what makes tether unique? usdt’s unique feature is the fact that its value is guaranteed by tether to remain pegged to the u.s. dollar. according to tether, whenever it issues new usdt tokens, it allocates the same amount of usd to its reserves, thus ensuring that usdt is fully backed by cash and cash equivalents. the famously high volatility of the crypto markets means that cryptocurrencies can rise or fall by - % within a single day, making them unreliable as a store of value. usdt, on the other hand, is protected from these fluctuations. 
this property makes usdt a safe haven for crypto investors: during periods of high volatility, they can park their portfolios in tether without having to completely cash out into usd. in addition, usdt provides a simple way to transact a u.s. dollar equivalent between regions, countries and even continents via blockchain — without having to rely on a slow and expensive intermediary, like a bank or a financial services provider. however, over the years, there have been a number of controversies regarding the validity of tether’s claims about their usd reserves, at times disrupting usdt’s price, which went down as low as $ . at one point in its history. many have raised concerns about the fact that tether’s reserves have never been fully audited by an independent third party. how many tether (usdt) coins are there in circulation? there is no hard-coded limit on the total supply of usdt — given the fact that it belongs to a private company, theoretically, its issuance is limited only by tether’s own policies. however, because tether claims that every single usdt is supposed to be backed by one u.s. dollar, the amount of tokens is limited by the company’s actual cash reserves. moreover, tether does not disclose its issuance schedules ahead of time. instead, they provide daily transparency reports, listing the total amount of their asset reserves and liabilities, the latter corresponding to the amount of usdt in circulation. as of september , there are over . billion usdt tokens in circulation, which are backed by $ . billion in assets, according to tether. how is the tether network secured? usdt does not have its own blockchain — instead, it operates as a second-layer token on top of other cryptocurrencies’ blockchains: bitcoin, ethereum, eos, tron, algorand, bitcoin cash and omg, and is secured by their respective hashing algorithms.
where can you buy tether (usdt)? it is possible to buy tether / usdt on a large number of cryptocurrency exchanges. in fact, usdt’s average daily trading volume is often on par with, or even exceeds, that of bitcoin. it is especially prominent on those exchanges where fiat-to-crypto trading pairs are unavailable, as it provides a viable alternative to usd. here are some of the most popular exchanges that support tether trading: binance, okex, hitbtc, huobi global. our most recent articles about tether: what is yield farming? is tether untouchable? the latest twist in a long-running drama. u.s. examining whether tether committed bank fraud, bloomberg says. pancake bunny price crashes after defi token targeted in flash loan attack. tether releases breakdown of reserves for the first time ever.
co-op cloud: public interest infrastructure. an alternative to corporate clouds built by tech co-ops. benefits: collaborative (a democratic development process, centred on libre software licenses, community governance and a configuration commons); simple (quick, flexible, and intuitive with low resource requirements, minimal overhead, and extensive documentation); private (control your hosting: use on-premise or virtual servers to suit your needs, with encryption as standard); transparent (following established open standards and best practices, and building on existing tools). faq: what is co-op cloud? co-op cloud aims to make hosting libre software applications simple for small service providers such as tech co-operatives who are looking to standardise around an open, transparent and scalable infrastructure. it uses the latest container technologies, and configurations are shared into the commons for the benefit of all. is this a good long-term choice? co-op cloud re-uses upstream libre software project packaging efforts (containers) so that we can meet projects where they are and reduce duplication of effort. the project proposes more direct coordination between distribution methods (app packagers) and production methods (app developers). what libre apps are available? co-op cloud helps deploy and maintain applications that you may already use in your daily life: nextcloud, jitsi, mediawiki, rocket.chat and many more! these are tools created by volunteer communities who use libre software licenses in order to build up the public software commons and offer more digital alternatives. what about other alternatives?
co-op cloud helps fill a gap between the personal and the industrial-scale: it's easier to use than kubernetes or ansible, does more to support multi-server, multi-tenant deployments than cloudron, and is much easier than manual deployment. see all the comparisons with other tools, and read all the frequently asked questions.

who is involved? autonomic is a worker-owned co-operative dedicated to using technology to empower people making a positive difference in the world. we offer service hosting using co-op cloud and other platforms, custom development, and infrastructure set-up. visit: autonomic.zone 🏠 this is a community project: get involved via the source code, documentation, public matrix chat, mastodon, or twitter. copyleft autonomic cooperative.

planned maintenance: classify api. oclc will be performing maintenance on the classify user interface on august at : am edt (utc - ). maintenance is expected to take approximately minutes. during this time the classify application and api will be unavailable. please contact oclc customer support with any questions.
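when an api goes offline for a planned maintenance window like this one, client code can often ride out the outage rather than fail outright. below is a minimal, generic retry-with-exponential-backoff wrapper; it is not oclc-specific, and the function names and parameters are illustrative assumptions, not part of the classify api:

```python
import time

def with_backoff(call, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry `call` with exponential backoff, re-raising the error only
    after the final attempt. Useful when a service is briefly
    unavailable, e.g. during a planned maintenance window."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise
            # wait base_delay, 2*base_delay, 4*base_delay, ... between tries
            sleep(base_delay * (2 ** attempt))
```

in practice `call` would be whatever performs the http request; injecting `sleep` keeps the wrapper testable without real delays.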
karen coombs, senior product analyst, oclc.

dshr's blog: yet another dna storage technique

i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation.

tuesday, july , yet another dna storage technique

an alternative approach to nucleic acid memory by george d. dickinson et al from boise state university describes a fundamentally different way to store and retrieve data using dna strands as the medium. will hughes et al have an accessible summary in dna 'lite-brite' is a promising way to archive data for decades or longer: we and our colleagues have developed a way to store data using pegs and pegboards made out of dna and retrieving the data with a microscope – a molecular version of the lite-brite toy. our prototype stores information in patterns using dna strands spaced about nanometers apart. below the fold i look at the details of the technique they call digital nucleic acid memory (dnam). the traditional way to use dna as a storage medium is to encode the data in the sequence of bases in a synthesized strand, then use sequencing to retrieve the data.
instead: dnam uses advancements in super-resolution microscopy (srm) to access digital data stored in short oligonucleotide strands that are held together for imaging using dna origami. in dnam, non-volatile information is digitally encoded into specific combinations of single-stranded dna, commonly known as staple strands, that can form dna origami nanostructures when combined with a scaffold strand. when formed into origami, the staple strands are arranged at addressable locations ... that define an indexed matrix of digital information. this site-specific localization of digital information is enabled by designing staple strands with nucleotides that extend from the origami.

writing: in dnam, writing their character message "data is in our dna!\n" involved encoding it into -bit fountain code droplets, then synthesizing two different types of dna sequences. origami: there is one origami for each bits of data to be stored. it forms a x matrix holding a bit index, the bits of droplet data, bits of parity, bits of checksum, and orientation bits. each of the cells thus contains a unique, message-specific dna sequence. staples: there is one staple for each of the x matrix cells, with one end of the strand matching the matrix cell's sequence, and the other indicating a 1 or a 0 by the presence or absence of a sequence that binds to the fluorescent dna used for reading. when combined, the staple strands bind to the appropriate cells in the origami, labelling each cell as a 1 or a 0.

reading: the key difference between dnam and traditional dna storage techniques is that dnam reads data without sequencing the dna. instead, it uses optical microscopy to identify each "peg" (staple strand) in each matrix cell as either a 1 or a 0: the patterns of dna strands – the pegs – light up when fluorescently labeled dna binds to them. because the fluorescent strands are short, they rapidly bind and unbind.
this causes them to blink, making it easier to separate one peg from another and read the stored information. the difficulty in doing so is that the pegs are on a nanometer grid: because the dna pegs are positioned closer than half the wavelength of visible light, we used super-resolution microscopy, which circumvents the diffraction limit of light. the technique is called "dna-points accumulation for imaging in nanoscale topography" (dna-paint). the process to recover the character message was: frames from a single field of view were recorded using dna-paint (~ origami identified in µm ). the super-resolution images of the hybridized imager strands were then reconstructed from blinking events identified in the recording to map the positions of the data domains on each origami ... using a custom localization processing algorithm, the signals were translated to a × grid and converted back to a -bit binary string, which was passed to the decoding algorithm for error correction, droplet recovery, and message reconstruction ... the process enabled successful recovery of the dnam-encoded message from a single super-resolution recording.

analysis: the first thing to note is that whereas traditional dna storage techniques are volumetric, dnam, like hard disk or tape, is areal. it will therefore be unable to match the extraordinary data density potentially achievable using the traditional approach. dnam claims: after accounting for the bits used by the algorithms, our prototype was able to read data at a density of gigabits per square centimeter. current hard disks have an areal density of . tbit/inch², or about gbit/cm² (1 in² is about 6.45 cm²), so for a prototype this is good but not revolutionary. dnam's areal density is set by the nm grid spacing, which it may not be possible to greatly reduce. hard disk vendors have demonstrated gbit/cm² and have roadmaps to around gbit/cm². dnam's writing process seems more complex than the traditional approach, so it is unlikely to be faster or cheaper.
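the fountain coding at the heart of the write path works by making each droplet the xor of a random, seed-determined subset of the message blocks; a peeling decoder then recovers the blocks from any sufficiently large collection of droplets. here is a toy sketch of that idea using small integers in place of origami payloads; the block count, droplet count, and uniform degree distribution are illustrative assumptions, not the paper's parameters:

```python
import random

def make_droplets(blocks, n_droplets, seed0=1):
    """Fountain-encode `blocks` (small ints standing in for the per-origami
    payloads): each droplet is (seed, XOR of a seed-chosen subset of blocks).
    Storing the seed lets the decoder regenerate the subset."""
    droplets = []
    for i in range(n_droplets):
        rng = random.Random(seed0 + i)
        degree = rng.randint(1, len(blocks))           # how many blocks to mix
        subset = rng.sample(range(len(blocks)), degree)
        value = 0
        for j in subset:
            value ^= blocks[j]
        droplets.append((seed0 + i, value))
    return droplets

def decode(droplets, n_blocks):
    """Peeling decoder: regenerate each droplet's subset from its seed, then
    repeatedly resolve droplets that reduce to a single unknown block."""
    pending = []
    for seed, value in droplets:
        rng = random.Random(seed)
        degree = rng.randint(1, n_blocks)
        pending.append([set(rng.sample(range(n_blocks), degree)), value])
    out = [None] * n_blocks
    progress = True
    while progress:
        progress = False
        for entry in pending:
            subset, value = entry
            for j in [j for j in subset if out[j] is not None]:
                value ^= out[j]            # peel off already-known blocks
                subset.discard(j)
            entry[1] = value
            if len(subset) == 1:           # one unknown left: it is solved
                j = subset.pop()
                if out[j] is None:
                    out[j] = value
                    progress = True
    return out
```

with enough droplets relative to blocks, decoding succeeds with overwhelming probability; in dnam, the parity and checksum bits on each origami correct read errors before the droplets ever reach this stage.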
the read process is likely to be both faster and cheaper, because dna-paint images a large number of origami in parallel, whereas sequencing is sequential (duh!). but, as i have written, the big barrier to adoption of dna storage is the low bandwidth and high cost of writing the data. posted by david. at : am labels: storage media

blog rules: posts and comments are copyright of their respective authors who, by posting or commenting, license their work under a creative commons attribution-share alike . united states license. off-topic or unsuitable comments will be deleted.

dshr's blog: the death of corporate research labs

tuesday, may , the death of corporate research labs

in american innovation through the ages, jamie powell wrote: who hasn’t finished a non-fiction book and thought “gee, that could have been half the length and just as informative. if that.” yet every now and then you read something that provokes the exact opposite feeling. where all you can do after reading a tweet, or an article, is type the subject into google and hope there’s more material out there waiting to be read.
so it was with alphaville this tuesday afternoon, reading a research paper from last year entitled the changing structure of american innovation: some cautionary remarks for economic growth by arora, belenzon, patacconi and suh (h/t to kpmg’s ben southwood, who highlighted it on twitter). the exhaustive work of the duke university and uea academics traces the roots of american academia through the golden age of corporate-driven research, which roughly encompasses the postwar period up to ronald reagan’s presidency, before its steady decline up to the present day. arora et al argue that a cause of the decline in productivity is that: the past three decades have been marked by a growing division of labor between universities focusing on research and large corporations focusing on development. knowledge produced by universities is not often in a form that can be readily digested and turned into new goods and services. small firms and university technology transfer offices cannot fully substitute for corporate research, which had integrated multiple disciplines at the scale required to solve significant technical problems. as someone with many friends who worked at the legendary corporate research labs of the past, including bell labs and xerox parc, and who myself worked at sun microsystems' research lab, this is personal. below the fold i add my two cents' worth to arora et al's extraordinarily interesting article. the authors provide a must-read, detailed history of the rise and fall of corporate research labs. i lived through their golden age; a year before i was born, the transistor was invented at bell labs: the first working device to be built was a point-contact transistor invented in by american physicists john bardeen and walter brattain while working under william shockley at bell labs.
they shared the nobel prize in physics for their achievement. the most widely used transistor is the mosfet (metal–oxide–semiconductor field-effect transistor), also known as the mos transistor, which was invented by egyptian engineer mohamed atalla with korean engineer dawon kahng at bell labs in . the mosfet was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses.

[arora et al, fig.]

before i was , bell labs had been euthanized as part of the general massacre of labs: bell labs had been separated from its parent company at&t and placed under lucent in ; xerox parc had also been spun off into a separate company in . others had been downsized: ibm under louis gerstner re-directed research toward more commercial applications in the mid- s ... a more recent example is dupont’s closing of its central research & development lab in . established in , dupont research rivaled that of top academic chemistry departments. in the s, dupont’s central r&d unit published more articles in the journal of the american chemical society than mit and caltech combined. however, in the s, dupont’s attitude toward research changed, and after a gradual decline in scientific publications, the company’s management closed its central research and development lab in . arora et al point out that the rise and fall of the labs coincided with the rise and fall of anti-trust enforcement: historically, many large labs were set up partly because antitrust pressures constrained large firms’ ability to grow through mergers and acquisitions. in the s, if a leading firm wanted to grow, it needed to develop new markets. with growth through mergers and acquisitions constrained by anti-trust pressures, and with little on offer from universities and independent inventors, it often had no choice but to invest in internal r&d. the more relaxed antitrust environment in the s, however, changed this status quo.
growth through acquisitions became a more viable alternative to internal research, and hence the need to invest in internal research was reduced. lack of anti-trust enforcement, pervasive short-termism driven by wall street's focus on quarterly results, and management's focus on manipulating the stock price to maximize the value of their options killed the labs: large corporate labs, however, are unlikely to regain the importance they once enjoyed. research in corporations is difficult to manage profitably. research projects have long horizons and few intermediate milestones that are meaningful to non-experts. as a result, research inside companies can only survive if insulated from the short-term performance requirements of business divisions. however, insulating research from business also has perils. managers, haunted by the spectre of xerox parc and dupont’s “purity hall”, fear creating research organizations disconnected from the main business of the company. walking this tightrope has been extremely difficult. greater product market competition, shorter technology life cycles, and more demanding investors have added to this challenge. companies have increasingly concluded that they can do better by sourcing knowledge from outside, rather than betting on making game-changing discoveries in-house. they describe the successor to the labs as: a new division of innovative labor, with universities focusing on research, large firms focusing on development and commercialization, and spinoffs, startups, and university technology licensing offices responsible for connecting the two. an unintended consequence of abandoning anti-trust enforcement was thus a slowing of productivity growth, because this new division of labor wasn't as effective as the labs: the translation of scientific knowledge generated in universities to productivity-enhancing technical progress has proved to be more difficult to accomplish in practice than expected.
spinoffs, startups, and university licensing offices have not fully filled the gap left by the decline of the corporate lab. corporate research has a number of characteristics that make it very valuable for science-based innovation and growth. large corporations have access to significant resources, can more easily integrate multiple knowledge streams, and direct their research toward solving specific practical problems, which makes it more likely for them to produce commercial applications. university research has tended to be curiosity-driven rather than mission-focused. it has favored insight rather than solutions to specific problems, and partly as a consequence, university research has required additional integration and transformation to become economically useful. in sections . . through . . arora et al discuss in detail four reasons why the corporate labs drove faster productivity growth: corporate labs work on general purpose technologies. because the labs were hosted by the leading companies in their market, they believed that technologies that benefited their product space would benefit them the most: claude shannon’s work on information theory, for instance, was supported by bell labs because at&t stood to benefit the most from a more efficient communication network ... ibm supported milestones in nanoscience by developing the scanning electron microscope, and furthering investigations into electron localization, non-equilibrium superconductivity, and ballistic electron motions because it saw an opportunity to pre-empt the next revolutionary chip design in its industry ... finally, a recent surge in corporate publications in machine learning suggests that larger firms such as google and facebook that possess complementary assets (user data) for commercialization publish more of their research and software packages to the academic community, as they stand to benefit most from advances in the sector in general. my experience of open source supports this.
sun was the leading player in the workstation market and was happy to publish and open source infrastructure technologies such as nfs that would buttress that position. on the desktop it was not a dominant player, which (sadly) led to NeWS being closed-source. corporate labs solve practical problems. they quote andrew odlyzko: “it was very important that bell labs had a connection to the market, and thereby to real problems. the fact that it wasn’t a tight coupling is what enabled people to work on many long-term problems. but the coupling was there, and so the wild goose chases that are at the heart of really innovative research tended to be less wild, more carefully targeted and less subject to the inertia that is characteristic of university research.” again, my experience supports this contention. my work at sun labs was on fault-tolerance. others worked on, for example, ultra high-bandwidth backplane bus technology, innovative cooling materials, optical interconnect, and asynchronous chip architectures, all of which are obviously "practical problems" with importance for sun's products, but none of which could be applied to the products under development at the time. corporate labs are multi-disciplinary and have more resources. as regards the first of these, the authors use google as an example: researching neural networks requires an interdisciplinary team. domain specialists (e.g. linguists in the case of machine translation) define the problem to be solved and assess performance; statisticians design the algorithms, theorize on their error bounds and optimization routines; computer scientists search for efficiency gains in implementing the algorithms. not surprisingly, the “google translate” paper has coauthors, many of them leading researchers in their respective fields. again, i would agree. a breadth of disciplines was definitely a major contributor to parc's successes. as regards extra resources, i think this is a bigger factor than arora et al do.
as i wrote in falling research productivity revisited: the problem of falling research productivity is like the "high energy physics" problem - after a while all the experiments at a given energy level have been done, and getting to the next energy level is bound to be a lot more expensive and difficult each time. information technology at all levels is suffering from this problem. for example, nvidia got to its first working silicon of a state-of-the-art gpu on $ . m from the vcs, which today wouldn't even buy you a mask set. even six years ago system architecture research, such as berkeley's aspire project, needed to build (or at least simulate) things like this: firebox is a kw wsc building block containing a thousand compute sockets and petabytes ( ^ b) of non-volatile memory connected via a low-latency, high-bandwidth optical switch. ... each compute socket contains a system-on-a-chip (soc) with around cores connected to high-bandwidth on-package dram. clearly, ai research needs a scale of data and computation that only a very large company can afford. for example, waymo's lead in autonomous vehicles is based to a large extent on the enormous amount of data that has taken years of a fleet of vehicles driving all day, every day to accumulate. large corporate labs may generate significant external benefits. by "external benefits", arora et al mean benefits to society and the broader economy, but not to the lab's host company: one well-known example is provided by xerox parc. xerox parc developed many fundamental inventions in pc hardware and software design, such as the modern personal computer with graphical user interface. however, it did not significantly benefit from these inventions, which were instead largely commercialized by other firms, most notably apple and microsoft. while xerox clearly failed to internalize fully the benefits from its immensely creative lab ... 
it can hardly be questioned that the social benefits were large, with the combined market capitalization of apple and microsoft now exceeding . trillion dollars. two kinds of company generate these external benefits. parc had both spin-offs, in which xerox had equity, and startups that built on their ideas and hired their alumni, but in which it did not. xerox didn't do spin-offs well: as documented by chesbrough ( , ), the key problem there was not xerox’s initial equity position in the spin-offs, but xerox’s practices in managing the spin-offs, which discouraged experimentation by forcing xerox researchers to look for applications close to xerox’s existing businesses. but cisco is among the examples of how spin-offs can be done well, acting as an internal vc to incentivize a team by giving them equity in a startup. if the startup was successful, cisco would later acquire it. sun microsystems is an example of exceptional fertility in external startups. nvidia was started by a group of frustrated sun engineers. it is currently worth almost times what oracle paid to acquire sun. it is but one entry in a long list of such startups whose aggregate value dwarfs that of sun at its peak. as arora et al write: a surprising implication of this analysis is that the mismanagement of leading firms and their labs can sometimes be a blessing in disguise. the comparison between fairchild and texas instruments is instructive. texas instruments was much better managed than fairchild but also spawned far fewer spin-offs. silicon valley prospered as a technology hub, while the cluster of dallas-fort worth semiconductor companies near texas instruments, albeit important, is much less economically significant. an important additional external benefit that arora et al ignore is the open source movement, which was spawned by bell labs and the at&t consent decree. at&t was forced to license the unix source code.
staff at institutions, primarily universities, which had signed the unix license, could freely share code enhancements. this sharing culture grew and led to the bsd and gnu licenses that underlie the bulk of today's computational ecosystem. jamie powell was right that arora et al have produced an extremely valuable contribution to the study of the decay of the vital link between r&d and the productivity of the overall economy. posted by david. at : am labels: anti-trust, big data, intellectual property, unix, venture capital comments: miguel said... a lab is a cost, so it's externalized and sold, until someone buys it, and on again waiting for the next buyer. probably the way to account for it should be different, taking into account other non-appearing benefits for the mother company (brand/know-how/reputation/etc...) rgds!m may , at : am unknown said... a thought ... the business model changing to "providing services" instead of "selling products" will, perhaps, again shift the research back to corporations. r&d in that case makes a more visible contribution to the bottom line profits within the company. i also would like to add my opinion that academia cannot fully be of service to the larger economy when its funding is so tightly controlled and directed to whatever political idea is flying around at the moment. however, this was a great read! may , at : am alang said... i worked in corporate r&d labs for years in the 's and early 's at gte labs and digital equipment corporation. a large issue we constantly dealt with was technology transfer to other more business-oriented parts of the company. the technology we were trying to transfer was ground-breaking and state-of-the-art, but was also often not at a production usage level. and the staff in the receiving organizations often did not have masters or phd level computer science training, though they were quite proficient at mis.
as a result, they were not well equipped to receive k of lisp code written within an object-oriented framework that ran on this "weird" (to them) lisp machine. so there was always this technology transfer disconnect. as researchers, we published extensively, so we contributed to the greater good of computer science advancement. and the level of publication was a large part of how we were measured, as in academia. but it would have also been gratifying to see more use made of the cool technology we were producing. usage was not zero percent, but i don't think it exceeded - % either. may , at : pm david. said... matthew hutson reports more evidence for falling research productivity, this time in ai, in eye-catching advances in some ai fields are not real: "researchers are waking up to the signs of shaky progress across many subfields of ai. a meta-analysis of information retrieval algorithms used in search engines concluded the “high-water mark … was actually set in .” another study in reproduced seven neural network recommendation systems, of the kind used by media streaming services. it found that six failed to outperform much simpler, nonneural algorithms developed years before, when the earlier techniques were fine-tuned, revealing “phantom progress” in the field. in another paper posted on arxiv in march, kevin musgrave, a computer scientist at cornell university, took a look at loss functions, the part of an algorithm that mathematically specifies its objective. musgrave compared a dozen of them on equal footing, in a task involving image retrieval, and found that, contrary to their developers’ claims, accuracy had not improved since ." may , at : am cem kaner said... i worked in silicon valley (software industry) for years, then as a full professor of software engineering for years. i retired at the end of . i ran a research lab at school and spent a lot of time arranging funding.
my lab, and several others that i knew, were perfectly willing to create a research stream that applied or extended our work in a direction needed by a corporation. the government grants kept us at a pretty theoretical level. mixing corporate and government work let us explore some ideas much more thoroughly. the problem with corporate funding was that almost every corporate representative who wanted to contract with us wanted to pay minimum wage to my students and nothing to me--and they wanted all rights (with the scope "all" very broadly defined). they seemed to assume that i was so desperate for funding that i would agree to anything. negotiating with these folks was unpleasant and unproductive. several opportunities for introducing significant improvements in the efficiency and effectiveness of software engineering efforts were squandered because the large corporations who contacted me wanted the equivalent of donations, not research contracts. i heard similar stories from faculty members at my school and other schools. it is possible for university researchers to steer some of their work into directions that are immediately commercially useful. but we can't do it with government research money, because that's not what they agree to pay for. and we can't do it for free. and we can't agree to terms that completely block graduate students from continuing to work on the research their graduate studies focused on, because that would destroy their careers. companies that won't make a serious effort to address those issues won't get much from university researchers. but don't pin that failure on the university. june , at : am godfree roberts said... meanwhile, no-one cares that china outspends us : on research. we have a real missile gap opportunity to get our r&d back on track. june , at : pm david. said...
why corporate america gave up on r&d by kaushik viswanath is an interview with ashish arora and sharon belenzon, the authors of the paper behind this post. july , at : am jrminter said... i spent years in an analytical sciences division that, depending upon the whim of management, was either loosely or tightly integrated with the research labs. our mission was to help researchers and technologists understand the chemistry and performance of the materials that they were integrating into products. it was fascinating and rewarding work. the instruments and the skills needed to properly interpret the results made it a frequent target for budget cuts when money was tight. our clients valued the data because it helped them understand the chemistry and materials science that determined how well the final product would perform. august , at : pm dwarkesh patel said... excellent post! i'm still not sure how antitrust plays into this. doesn't the value of research increase to a firm if they have many acquired companies which could make use of that research? august , at : pm david. said... in the long-gone days when anti-trust was enforced, firms could not buy competitors to acquire their newly-developed products. so they had to invest in their own internal research and development, or they wouldn't have any new products. now that there is no anti-trust enforcement, there's no point in spending dollars that could go to stock buybacks on internal research labs or developing new products. instead, you let the vcs fund r&d, and if their bets pay off, you buy the company. see for example the pharma industry. in software it is particularly effective, because you can offer the vc-funded startup a choice between being bought at a low valuation, or facing a quick-and-dirty clone backed by a much bigger company with a much bigger user base. see for example facebook. august , at : pm unknown said...
after spending years in silicon valley r&d, i am working for toyota on zero emission vehicle technology, and i can tell you that the japanese have not abandoned the corporate research model. i don't know if it is tradition or necessity, or simply that it works for them, but it is refreshingly old-fashioned. in my mind, economics as a measure of success is as good as any other metric, because it represents a sort of minimization of effort, energy, what have you. r&d will always be resource-limited in some way (electricity, money, time, personnel), and so we have to learn to be efficient within our constraints. the yin to that yang is that innovation cannot happen outside of a creative environment. it is the responsibility, and the sole responsibility, of leadership to maintain a dynamic balance between creativity/innovation and resource constraint.

august , at : pm

david. said...

rob beschizza's explore an abandoned research lab points to this video, which provides a suitable coda for the post.

september , at : am

david. said...

ex-google boss eric schmidt: us 'dropped the ball' on innovation by karishma vaswani starts: "in the battle for tech supremacy between the us and china, america has "dropped the ball" in funding for basic research, according to former google chief executive eric schmidt. ... for example, chinese telecoms infrastructure giant huawei spends as much as $ bn (£ . bn) on research and development - one of the highest budgets in the world. this r&d is helping chinese tech firms get ahead in key areas like artificial intelligence and g."

september , at : am

david. said...

daron acemoglu makes good points in antitrust alone won't fix the innovation problem, including: "in terms of r&d, the mckinsey global institute estimates that just a few of the largest us and chinese tech companies account for as much as two-thirds of global spending on ai development.
moreover, these companies not only share a similar vision of how data and ai should be used (namely, for labor-replacing automation and surveillance), but they also increasingly influence other institutions, such as colleges and universities catering to tens of thousands of students clamoring for jobs in big tech. there is now a revolving door between leading institutions of higher education and silicon valley, with top academics often consulting for, and sometimes leaving their positions to work for, the tech industry."

october , at : pm

blissex said...

as a late comment: most corporate management realized that researchers in corporate labs were too old and too expensive, with permanent positions, plus benefits, pensions, etc., and decided to go for much lower-cost alternatives:

* a lot of research labs moved to cheaper offshore locations, with younger researchers not demanding high wages, pensions and benefits, and much easier to fire.

* a lot of research was outsourced, via research grants, to universities, casualizing research work, because universities can put together very cheaply teams of young, hungry phds and postdocs on low pay and temporary contracts, thanks to an enormous increase in the number of phd and postdoc positions driven in part by those same industry outsourcing contracts.

january , at : pm

blog rules: posts and comments are copyright of their respective authors who, by posting or commenting, license their work under a creative commons attribution-share alike . united states license. off-topic or unsuitable comments will be deleted.
google cloud offers a model for fixing google’s product-killing reputation | ars technica

gcp offers a stability promise that the rest of the company could learn from.
ron amadeo - jul , : pm utc

[image caption: google cloud platform, no longer perpetually under construction?]

google's reputation for aggressively killing products and services is hurting the company's brand. any new product launch from google is no longer a reason for optimism; instead, the company is met with questions about when the product will be shut down. it's a problem entirely of google's own making, and it's yet another barrier that discourages customers from investing (either time, money, or data) in the latest google thing.
the wide public skepticism of google stadia is a great example of the problem. a google division with similar issues is google cloud platform, which asks companies and developers to build a product or service powered by google's cloud infrastructure. like the rest of google, cloud platform has a reputation for instability, thanks to quickly deprecating apis, which require any project hosted on google's platform to be continuously updated to keep up with the latest changes. google cloud wants to address this issue, though, with a new "enterprise api" designation.

further reading: google’s constant product shutdowns are damaging its brand

enterprise apis basically get a roadmap that promises stability for certain apis. google says, "the burden is on us: our working principle is that no feature may be removed (or changed in a way that is not backwards compatible) for as long as customers are actively using it. if a deprecation or breaking change is inevitable, then the burden is on us to make the migration as effortless as possible." if google needs to change an api, customers will now get a minimum of one year's notice, along with tools, documentation, and other materials. google goes on to say, "to make sure we follow these tenets, any change we introduce to an api is reviewed by a centralized board of product and engineering leads and follows a rigorous product lifecycle evaluation."

despite being one of the world's largest internet companies and basically defining what modern cloud infrastructure looks like, google isn't doing very well in the cloud infrastructure market. analyst firm canalys puts google in a distant third, with percent market share, behind microsoft azure ( percent) and market leader amazon web services ( percent). rumor has it (according to a report from the information) that google cloud platform is facing a deadline to beat aws and microsoft, or it will risk losing funding.
ex-googler steve yegge laid out the problems with google cloud platform last year in a post titled "dear google cloud: your deprecation policy is killing you." google's announcement seems to hit most of what that post highlights, like a lack of documentation and support, an endless treadmill of api upgrades, and google cloud's general disregard for backward compatibility. yegge argues that successful platforms like windows, java, and android (a group yegge says is isolated from the larger google culture) owe much of their success to their commitment to platform stability. aws is the market leader partly because it's considered a lot more stable than google cloud platform.

google cloud gets it

protocol reports that google vp kripa krishnan was asked during the announcement if she is familiar with the "killed by google" website and twitter account, both run by cody ogden. the report says krishnan "couldn't help but laugh," and she said, "it was pretty apparent to us from many sources on the internet that we were not doing well."

google cloud platform's awareness of google's reputation, its steps to limit disruption to customers, and its communication of which offerings are more stable than others have created a model for the rest of the company. many google products suffer from the specter of unceremonious shutdowns, and that's enough to force customers to seek alternatives. the primary fix to the problem is simply mitigation, i.e., stop shutting so many things down all the time. but a close second would be communication: just tell customers your plans for future support.

google seems to have no problem offering a public roadmap for the software it ships on hardware devices. pixel phones and chromebooks both have public support statements for their software, showing a minimum date for which the devices can count on support. for instance, we know a pixel will continue to receive updates until at least october .
google can't do anything to immediately solve its reputation for killing products and services, but communication can help relieve some of the hesitation users and companies increasingly feel when investing in a google product. if the company doesn't plan on killing a product for a long time, it should say so! google should tell users and company partners which products are stable and which ones are fly-by-night experiments.

of course, for this idea to work, google has to actually stick to any public commitments it makes so people can trust it will follow through. recently, the company has not done this. it promised three years of support for android things, the internet-of-things version of android. instead, google ended os updates after only one year. if the company really wants to fix its reputation for instability, it will need to prove itself to customers over time.

ron amadeo is the reviews editor at ars technica, where he specializes in android os and google products.
dshr's blog: gini coefficients of cryptocurrencies

i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation.

tuesday, october ,

gini coefficients of cryptocurrencies

the gini coefficient expresses a system's degree of inequality or, in the blockchain context, centralization. it therefore factors into arguments, like mine, that claims of blockchains' decentralization are bogus. in his testimony to the us senate committee on banking, housing and community affairs' hearing on "exploring the cryptocurrency and blockchain ecosystem", entitled crypto is the mother of all scams and (now busted) bubbles while blockchain is the most over-hyped technology ever, no better than a spreadsheet/database, nouriel roubini wrote:

wealth in crypto-land is more concentrated than in north korea where the inequality gini coefficient is . (it is . in the quite unequal us): the gini coefficient for bitcoin is an astonishing . .

the link is to joe weisenthal's how bitcoin is like north korea from nearly five years ago, which was based upon a stack exchange post, which in turn was based upon a post by the owner of the bitcoinica exchange from ! which didn't look at all holdings of bitcoin, let alone the whole of crypto-land, but only at bitcoinica's customers! follow me below the fold as i search for more up-to-date and comprehensive information. i'm not even questioning how roubini knows the gini coefficient of north korea to two decimal places.

most cryptocurrencies will start with a gini coefficient of ; satoshi nakamoto mined the first million bitcoin. as adoption spreads, the gini coefficient will decrease naturally. the question isn't whether, but how fast, it will decrease.
on steem, ckfrpark is concerned that it hasn't decreased anything like quickly enough:

cryptocurrency as of september , has not been narrowing the gap between the rich and poor but also is aggravating the inequality in our society. based on the three major cryptocurrency wallets, bitcoin, ethereum and ripple, top % shares the property value of the rest %, resulting in a drastic figure of gini coefficient of . . if we consider those who do not own a cryptocurrency wallet, it would result as radical figure of over . gini coefficient. ... the greater the value of cryptocurrency, the greater the gap between rich and poor, which governments and people will not tolerate. it is a prophecy that existing cryptocurrencies will fail.

balaji s. srinivasan is cto of coinbase, a cryptocurrency insider. in july last year, together with leland lee, he wrote quantifying decentralization, arguing for the importance of measuring decentralization:

the primary advantage of bitcoin and ethereum over their legacy alternatives is widely understood to be decentralization. however, despite the widely acknowledged importance of this property, most discussion on the topic lacks quantification. if we could agree upon a quantitative measure, it would allow us to:

* measure the extent of a given system's decentralization
* determine how much a given system modification improves or reduces decentralization
* design optimization algorithms and architectures to maximize decentralization

srinivasan and lee start with an explanation of the gini coefficient and the lorenz curve from which it is derived. they go on to make the important point that a decentralized system is compromised if any of its decentralized subsystems is compromised, identifying six subsystems of cryptocurrencies: mining, exchanges, client, nodes, developers and ownership. only the last has been the focus of most discussion of cryptocurrency gini coefficients.
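the gini coefficient they describe is straightforward to compute directly from a list of balances. here is a minimal sketch; the sample balances are hypothetical, not real wallet data:

```python
def gini(balances):
    """Return the Gini coefficient of a list of non-negative balances.

    0.0 means perfect equality; values approaching 1.0 mean a single
    holder owns almost everything.
    """
    xs = sorted(balances)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Equivalent to twice the area between the Lorenz curve and the
    # line of equality: sum_i (2i - n - 1) * x_i / (n * total)
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, 1)) / (n * total)

print(gini([1, 1, 1, 1]))    # perfectly equal holdings -> 0.0
print(gini([0, 0, 0, 100]))  # one "whale" among four -> 0.75
```

this also illustrates the thresholding problem the post quotes: adding millions of zero or dust balances to the input pushes the result toward 1.0, which is why srinivasan and lee restrict the computation to accounts above a minimum balance.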
they plot lorenz curves for each of their six subsystems for bitcoin and ethereum, and derive these gini coefficients:

subsystem    bitcoin    ethereum
mining         .          .
client         .          .
developer      .          .
exchange       .          .
node           .          .
owner          .          .

these are rather large gini coefficients, but in the case of the only one that roubini and others have focused on, the distribution of wealth, it vastly underestimates the problem:

one important point: if we actually include all billion people on the earth, most of whom have zero btc or ethereum, the gini coefficient is essentially . +. and if we just include all balances, we include many dust balances which would again put the gini coefficient at . +. thus, we need some kind of threshold here. the imperfect threshold we picked was the gini coefficient among accounts with ≥ btc per address, and ≥ eth per address.

so this is the distribution of ownership among the bitcoin and ethereum rich with >$ k as of july . in other words, even among the "whales" the distribution of wealth is extremely unequal (though not actually as unequal as north korea). this, incidentally, explains the enthusiasm of the whalier ethereum whales for "proof of stake" as a consensus mechanism: they could afford to control ethereum's blockchain by staking a small fraction of their wealth.

the reason why decentralization is attractive is that, if it were actually achieved in practice, it would make compromising the system very difficult. srinivasan and lee go on to point out that the gini coefficient, while indicative, isn't a good measure of the vulnerability of a decentralized system to compromise. instead, they propose:

the nakamoto coefficient is the number of units in a subsystem you need to control % of that subsystem. it's not clear that % is the number to worry about for each system, so you can pick a number and calculate it based on what you believe the critical threshold is. it's also not clear which subsystems matter.
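the definition just quoted can be sketched in a few lines of code. the 51% threshold below is an example value (it is the conventional threshold for consensus attacks), and the pool shares are made up for illustration:

```python
def nakamoto_coefficient(shares, threshold=0.51):
    """Smallest number of entities whose combined share of a subsystem
    exceeds the given threshold (e.g. 51% of hash rate, of nodes,
    of client installs, ...)."""
    total = sum(shares)
    running = 0.0
    # Greedily take the largest holders first.
    for count, share in enumerate(sorted(shares, reverse=True), 1):
        running += share
        if running / total > threshold:
            return count
    return len(shares)

# Hypothetical mining-pool hash-rate shares summing to 1.0:
pools = [0.25, 0.20, 0.15, 0.10, 0.10, 0.08, 0.07, 0.05]
print(nakamoto_coefficient(pools))  # 0.25 + 0.20 + 0.15 > 0.51 -> 3
```

a low coefficient means a small cartel could compromise that subsystem, which is why the post treats the minimum over all subsystems as the number that matters.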
regardless, having a measure is an essential first step, and they compute the nakamoto coefficients of each subsystem of bitcoin and ethereum, as shown in this table:

subsystem    bitcoin    ethereum
mining
client
developer
exchange
node
owner

these are interesting numbers:

* they show that ethereum ("market cap" $ b) is significantly more vulnerable than bitcoin ("market cap" $ b), reinforcing the observation that the gini coefficient of the top cryptocurrencies' "market cap" at . is extremely high. the smaller cryptocurrencies are very vulnerable to % attacks. even ethereum currently suffers from the "selfish mining" attack, which has been known since .

* they show the risk posed by software monocultures, driven by network effects and economies of scale. these risks were illustrated by the recent major bug in bitcoin core.

* even ignoring the fact that bitmain "operates the world's largest and second largest bitcoin mining pools in terms of computing power, btc.com and antpool" (so the for bitcoin mining should be ), and that "two major mining pools, ethpool and ethermine, publicly reveal that they share the same admin", they show that economies of scale mean mining pools are very concentrated. and that proving the or pools aren't colluding is effectively impossible.

* only the top wallets hold % of the bitcoin held by the whales, and only the top wallets hold % of the ether held by the whales. there just aren't a lot of whales.

gini coefficient based wealth distribution in the bitcoin network: a case study by manas gupta and parth gupta was published behind springer's obnoxious paywall last july. alas, the data upon which it is based is from , so despite its recent publication it is only slightly less out-of-date than the data that roubini quoted.
an important, and much more up-to-date, study of several different measures of decentralization (but one that doesn't use the gini coefficient) is decentralization in bitcoin and ethereum networks by adem efe gencer, soumya basu, ittay eyal, robbert van renesse and emin gün sirer:

in bitcoin, the weekly mining power of a single entity has never exceeded % of the overall power. in contrast, the top ethereum miner has never had less than % of the mining power. moreover, the top four bitcoin miners have more than % of the average mining power. on average, % of the weekly power was shared by only three ethereum miners. these observations suggest a slightly more centralized mining process in ethereum. although miners do change ranks over the observation period, each spot is only contested by a few miners. in particular, only two bitcoin and three ethereum miners ever held the top rank. the same mining pool has been at the top rank for % of the time in bitcoin and % of the time in ethereum. over % of the mining power has exclusively been shared by eight miners in bitcoin and five miners in ethereum throughout the observed period. even % of the mining power seems to be controlled by only miners in bitcoin and only miners in ethereum.

this shows how incredibly poor proof-of-work is at decentralization compared with conventional distributed database technology:

these results show that a byzantine quorum system of size could achieve better decentralization than proof-of-work mining at a much lower resource cost. this shows that further research is necessary to create a permissionless consensus protocol without such a high degree of centralization.

raul at howmuch.net published an analysis of the wealth distribution among bitcoin wallets a year ago. it didn't compute a gini coefficient but it did claim that only wallets owned . % of bitcoin:

there are a couple limitations in our data. most importantly, each address can represent more than one individual person.
an obvious example would be a bitcoin exchange or wallet, which hold the currency for a lot of different people. another limitation has to do with anonymity. if you want to remain completely anonymous, you can use something called coinjoin, a process that allows users to group similar transactions together. this makes it seem like two people are using the same address, when in reality they are not.

bambouclub tweeted a superficially similar analysis at about the same time, again without computing a gini coefficient, but you had to read down into the tweet chain to discover it wasn't based on analyzing wallets at all, but on assuming that the bitcoin distribution matched the global distribution of wealth.

hannah murphy's bitcoin: who really owns it, the whales or small fry? reports, based on data from chainalysis, that in the december "pump and dump":

longer-term holders sold at least $ billion worth of bitcoin to new speculators over the december to april period, with half of this movement taking place in december alone. "this was an exceptional transfer of wealth," says philip gradwell, chainalysis' chief economist, who dubs the past six months as bitcoin's "liquidity event". gradwell argues that this sudden injection of liquidity – the amount of bitcoin available for trading rose by close to per cent over that period – has been a "fundamental driver" behind the recent price decline. at the same time, bitcoin trading volumes have now fallen in tandem with the prices, from close to $ billion daily in december to $ billion today.

as far as i know no-one has measured by how much this transfer of wealth from later to early adopters, and of bitcoin in the reverse direction, will have decreased the gini coefficient. or how much the transfer of cryptocurrency from speculators to ico promoters will have increased it.

posted by david. at : am labels: bitcoin, fault tolerance

comment: david. said...
analyzing ethereum's contract topology by lucianna kiffer, dave levin and alan mislove (also here) reinforces the message of the nakamoto coefficient:

"ethereum's smart contract ecosystem has a considerable lack of diversity. most contracts reuse code extensively, and there are few creators compared to the number of overall contracts. ... the high levels of code reuse represent a potential threat to the security and reliability. ethereum has been subject to high-profile bugs that have led to hard forks in the blockchain (also here) or resulted in over $ million worth of ether being frozen; like with dns's use of multiple implementations, having multiple implementations of core contract functionality would introduce greater defense-in-depth to ethereum."

november , at : pm
dshr's blog: china's cryptocurrency crackdown

thursday, june ,

china's cryptocurrency crackdown

[chart: btc "price"]

cryptocurrency mining in sichuan, especially in the rainy season, is hydro-powered, so miners thought they'd be spared the chinese government's crackdown, for example in qinghai, xinjiang and yunnan. but they were rapidly disabused of this idea, as matt novak reports in bitcoin plunges as china's sichuan province pulls plug on crypto mining:

[chart: btc miners' revenue]

bitcoin continued its dramatic plunge to $ , monday morning, down . % from a week earlier as some of china's largest bitcoin mining farms were shut down over the weekend. the bitcoin mining facilities of sichuan province received an order on friday to stop doing business by sunday, according to chinese state media outlet the global times.
the sichuan provincial development and reform commission and the sichuan energy bureau issued an order to all electricity companies in the region on friday to stop supplying electricity to any known crypto mining organizations, including firms that had already been publicly identified.

[chart: btc hash rate]

as the "price" chart shows, the crackdown is having an impact. the result is that miners' revenue has taken a hit. that in turn squeezes uneconomic mining out of the mining pools, decreasing the hash rate. the result of that is that the network has to adapt, by reducing the difficulty of mining the next block in order to maintain the six blocks an hour target for the bitcoin blockchain, averaged over time.

[chart: btc difficulty]

clearly, the decline of miners' revenue, and thus the hash rate, and thus the difficulty of mining blocks, has a long way to go before it significantly decreases the security of the bitcoin blockchain. but this could be the start of a self-reinforcing cycle leading in that direction, as the current uncertainty and the decline in the "price" as the dump follows the pump cause hodl-ers to hodl.

[chart: avg transaction fee]

this decrease in the demand for transactions can be seen from the collapse in average fee per transaction, from a peak of around $ to under $ . this in itself contributes to the drop in miners' revenue, and thus to the drop in the hash rate, and so on.

miners desperate to recoup some of their investment in hardware are shipping it to destinations outside china:

videos on social media sites purported to show miners in sichuan turning off their mining machines and packing up their businesses. miners in china are now looking to sell their equipment overseas, and it appears many have already found buyers. cnbc's eunice yoon tweeted early monday that a chinese logistics firm was shipping , lbs ( , kilograms) of crypto mining equipment to an unnamed buyer in maryland for just $ . per kilogram.
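the difficulty adjustment described above, reducing difficulty when blocks slow down so the chain keeps averaging six blocks an hour, can be sketched as follows. the retargeting constants match the bitcoin protocol; the difficulty values fed to the function are made up:

```python
RETARGET_BLOCKS = 2016                # blocks between difficulty adjustments
TARGET_SPACING = 600                  # seconds per block (six blocks an hour)
EXPECTED_TIME = RETARGET_BLOCKS * TARGET_SPACING  # nominally two weeks

def retarget(difficulty, actual_seconds):
    """Return the next difficulty, given how long the last 2016 blocks took.

    If blocks came too slowly (hash rate fell), difficulty drops; if too
    quickly, it rises. Bitcoin clamps the adjustment to a factor of 4
    in either direction per retarget period.
    """
    ratio = EXPECTED_TIME / actual_seconds
    ratio = max(0.25, min(4.0, ratio))
    return difficulty * ratio

# If half the hash rate vanishes, the retarget window takes twice as
# long, and difficulty halves, restoring the six-blocks-an-hour average:
print(retarget(100.0, 2 * EXPECTED_TIME))  # -> 50.0
```

the clamp is why a sudden large drop in hash rate, like the one in the charts, takes several retarget periods (each of which itself stretches out in wall-clock time) to work through, rather than correcting in one step.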
the more of the hash rate that is located in western countries, using power that is much more expensive than under-the-counter hydro-power in china, the worse the economics of mining. and the more of the hash rate that is located in countries that follow fincen's guidance, the more effective nicholas weaver's suggestion:

it is time to seriously disrupt the cryptocurrency ecology. directly attacking mining as incompatible with the bank secrecy act is one potentially powerful tool.

weaver means that classifying mining as money transmission would force miners in countries that follow fincen to adhere to the anti-money laundering/know your customer (aml/kyc) rules. this would make mining in these countries effectively illegal, as being either legally risky or impossibly expensive, and thereby help to suppress ransomware.

postscript: if you think that bitcoin makes sense as a currency used to buy and sell, you need to justify the graph below, which shows that the total cost of making a single transaction, the average value transferred to the miners for each transaction, has been over $ for a year and peaked at $ .

[chart: average cost per transaction]

posted by david. at : am labels: bitcoin
dshr's blog: blockchain briefing for dod. i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation. tuesday, july , . i was asked to deliver blockchain: what's not to like? version . to a department of defense conference call. i took the opportunity to update the talk, and to expand it to include some of the "additional material" from the original, and from the podcast. below the fold, the text of the talk with links to the sources. the yellow boxes contain material that was on the slides but was not spoken. [slide ] it's one of these things that if people say it often enough it starts to sound like something that could work, sadhbh mccarthy i'd like to thank jen snow for giving me the opportunity to talk about blockchain technology and cryptocurrencies. the text of my talk with links to the sources is up on my blog, so you don't need to take notes. there's been a supernova of hype about them. almost everything positive you have heard is paid advertising, and should be completely ignored. why am i any more credible? first, i'm retired. no-one is paying me to speak, and i have no investments in cryptocurrencies or blockchain companies. [slide ] this is not to diminish nakamoto's achievement but to point out that he stood on the shoulders of giants. indeed, by tracing the origins of the ideas in bitcoin, we can zero in on nakamoto's true leap of insight—the specific, complex way in which the underlying components are put together.
bitcoin's academic pedigree, arvind narayanan and jeremy clark second, i've been writing skeptically about cryptocurrencies and blockchain technology for more than five years. what are my qualifications for such a long history of pontification? nearly sixteen years ago, about five years before satoshi nakamoto published the bitcoin protocol, a cryptocurrency based on a decentralized consensus mechanism using proof-of-work, my co-authors and i won a "best paper" award at the prestigious sosp workshop for a decentralized consensus mechanism using proof-of-work. it is the protocol underlying the lockss system. the originality of our work didn't lie in decentralization, distributed consensus, or proof-of-work. all of these were part of the nearly three decades of research and implementation leading up to the bitcoin protocol, as described by arvind narayanan and jeremy clark in bitcoin's academic pedigree. our work was original only in its application of these techniques to statistical fault tolerance; nakamoto's only in its application of them to preventing double-spending in cryptocurrencies. we're going to start by walking through the design of a system to perform some function, say monetary transactions, storing files, recording reviewers' contributions to academic communication, verifying archival content, whatever. my goal is to show you how the pieces fit together in such a way that the problems the technology encounters in practice aren't easily fixable; they are inherent in the underlying requirements. being of a naturally suspicious turn of mind, you don't want to trust any single central entity, but instead want a decentralized system. you place your trust in the consensus of a large number of entities, which will in effect vote on the state transitions of your system (the transactions, reviews, archival content, ...). you hope the good entities will out-vote the bad entities. in the jargon, the system is trustless (a misnomer). 
techniques using multiple voters to maintain the state of a system in the presence of unreliable and malign voters were first published in the byzantine generals problem by lamport et al in . alas, byzantine fault tolerance (bft) requires a central authority to authorize entities to take part. in the blockchain jargon, it is permissioned. you would rather let anyone interested take part, a permissionless system with no central control. [slide ] in the case of blockchain protocols, the mathematical and economic reasoning behind the safety of the consensus often relies crucially on the uncoordinated choice model, or the assumption that the game consists of many small actors that make decisions independently. the meaning of decentralization, vitalik buterin, co-founder of ethereum the security of your permissionless system depends upon the assumption of uncoordinated choice, the idea that each voter acts independently upon its own view of the system's state. if anyone can take part, your system is vulnerable to sybil attacks, in which an attacker creates many apparently independent voters who are actually under his sole control. if creating and maintaining a voter is free, anyone can win any vote they choose simply by creating enough sybil voters. [slide ] from a computer security perspective, the key thing to note ... is that the security of the blockchain is linear in the amount of expenditure on mining power, ... in contrast, in many other contexts investments in computer security yield convex returns (e.g., traditional uses of cryptography) ... analogously to how a lock on a door increases the security of a house by more than the cost of the lock. the economic limits of bitcoin and the blockchain, eric budish, booth school, university of chicago so creating and maintaining a voter has to be expensive. permissionless systems can defend against sybil attacks by requiring a vote to be accompanied by a proof of the expenditure of some resource. 
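As a toy illustration of why free identities are fatal (all names here are invented): if minting a voter costs nothing, one attacker can always out-vote any honest majority.

```python
import collections

# Toy vote with free identities: the attacker mints as many "voters"
# as needed to outvote the honest participants.
def run_vote(honest_votes, sybil_count):
    tally = collections.Counter(honest_votes)
    tally["attacker_state"] += sybil_count  # each sybil costs nothing
    return tally.most_common(1)[0][0]

honest = ["good_state"] * 1000
print(run_vote(honest, 0))     # → good_state
print(run_vote(honest, 1001))  # → attacker_state
```

The defense, as the talk goes on to explain, is to make each identity provably expensive.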
this is where proof-of-work comes in; a concept originated by cynthia dwork and moni naor in . in a bft system, the value of the next state of the system is that computed by the majority of the nodes. in a proof-of-work system such as bitcoin, the value of the next state of the system is that computed by the first node to solve a puzzle. there is no guarantee that any other node computed that value; bft is a consensus system whereas bitcoin-type systems select a winning node. proof-of-work is a random process, but at scale the probability of being selected is determined by how quickly you can compute hashes. the idea is that the good voters will spend more on hashing power, and thus compute more useless hashes, than the bad voters. [slide ] the blockchain trilemma much of the innovation in blockchain technology has been aimed at wresting power from centralised authorities or monopolies. unfortunately, the blockchain community’s utopian vision of a decentralised world is not without substantial costs. in recent research, we point out a ‘blockchain trilemma’ – it is impossible for any ledger to fully satisfy the three properties shown in [the diagram] simultaneously ... in particular, decentralisation has three main costs: waste of resources, scalability problems, and network externality inefficiencies. the economics of blockchains, markus k brunnermeier & joseph abadi, princeton brunnermeier and abadi's blockchain trilemma shows that a blockchain can choose at most two of the following three attributes: correctness, decentralization, and cost-efficiency. obviously, your system needs the first two, so the third has to go. running a voter (mining in the jargon) in your system has to be expensive if the system is to be secure. no-one will do it unless they are rewarded. they can't be rewarded in "fiat currency", because that would need some central mechanism for paying them. so the reward has to come in the form of coins generated by the system itself, a cryptocurrency.
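The puzzle-solving selection just described can be sketched with a toy hash puzzle. This is a simplification: real Bitcoin compares the block header's double-SHA-256 hash against a 256-bit target rather than counting hex zeros, but the brute-force character is the same.

```python
import hashlib
import itertools

# Minimal proof-of-work puzzle: find a nonce such that
# SHA-256(header + nonce) starts with a given number of hex zeros.
def mine(header: bytes, zeros: int) -> int:
    prefix = "0" * zeros
    for nonce in itertools.count():
        digest = hashlib.sha256(header + str(nonce).encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce

# Each extra zero multiplies the expected work by 16, but verifying
# a claimed solution always takes a single hash.
nonce = mine(b"block-42", 4)
digest = hashlib.sha256(b"block-42" + str(nonce).encode()).hexdigest()
print(digest[:4])  # → 0000
```

Note the asymmetry: solving is expensive and probabilistic, checking is one hash. That asymmetry is what makes votes costly without making verification costly.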
to scale, permissionless systems need to be based on a cryptocurrency; the system's state transitions will need to include cryptocurrency transactions in addition to records of files, reviews, archival content, whatever. your system needs names for the parties to these transactions. there is no central authority handing out names, so the parties need to name themselves. as proposed by david chaum in they can do so by generating a public-private key pair, and using the public key as the name for the source or sink of each transaction. [slide ] we created a small bitcoin wallet, placed it on images in our honeyfarm, and set up monitoring routines to check for theft. two months later our monitor program triggered when someone stole our coins. this was not because our bitcoin was stolen from a honeypot, rather the graduate student who created the wallet maintained a copy and his account was compromised. if security experts can't safely keep cryptocurrencies on an internet-connected computer, nobody can. if bitcoin is the "internet of money," what does it say that it cannot be safely stored on an internet connected computer? risks of cryptocurrencies, nicholas weaver, u.c. berkeley in practice this is implemented in wallet software, which stores one or more key pairs for use in transactions. the public half of the pair is a pseudonym. unmasking the person behind the pseudonym turns out to be fairly easy in practice. the security of the system depends upon the user and the software keeping the private key secret. this can be difficult, as nicholas weaver's computer security group at berkeley discovered when their wallet was compromised and their bitcoins were stolen. [slide ] -yr bitcoin "price" history the capital and operational costs of running a miner include buying hardware, power, network bandwidth, staff time, etc. 
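The self-naming scheme described above, in which a party generates its own identity and a public value serves as its pseudonym, can be sketched as follows. This is a deliberate simplification: real wallets hold ECDSA key pairs and sign transactions with the private half, while here a hash-derived address merely stands in for the public half.

```python
import hashlib
import secrets

# Simplified sketch of self-naming with no central authority.
# Real systems derive the pseudonym from an ECDSA public key;
# this hash-of-secret derivation is illustrative only.
def new_identity():
    private = secrets.token_bytes(32)  # must stay secret, like a wallet key
    pseudonym = hashlib.sha256(private).hexdigest()[:40]
    return private, pseudonym

priv, addr = new_identity()
# The pseudonym can be published freely; the secret cannot be
# recovered from it, but whoever holds (or steals) the secret
# controls everything attached to the pseudonym.
assert hashlib.sha256(priv).hexdigest()[:40] == addr
```

The one-way derivation is the whole security story: lose the secret and the funds are gone, leak it and they are stolen, exactly the failure mode Weaver's group hit.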
bitcoin's volatile "price", high transaction fees, low transaction throughput, and large proportion of failed transactions mean that almost no legal merchants accept payment in bitcoin or other cryptocurrency. thus one essential part of your system is one or more exchanges, at which the miners can sell their cryptocurrency rewards for the "fiat currency" they need to pay their bills. who is on the other side of those trades? the answer has to be speculators, betting that the "price" of the cryptocurrency will increase. thus a second essential part of your system is a general belief in the inevitable rise in "price" of the coins by which the miners are rewarded. if miners believe that the "price" will go down, they will sell their rewards immediately, a self-fulfilling prophecy. over time, permissionless blockchains require an inflow of speculative funds at an average rate greater than the current rate of mining rewards if the "price" is not to collapse. to maintain bitcoin's price at $ k would require an inflow of $ k/hour, or about $ b from now until the next reward halving around may th . [slide ] ether miners / / can we really say that the uncoordinated choice model is realistic when % of the bitcoin network’s mining power is well-coordinated enough to show up together at the same conference? the meaning of decentralization, vitalik buterin in order to spend enough to be secure, say $ k/hour, you need a lot of miners. it turns out that a third essential part of your system is a small number of “mining pools”. a year ago bitcoin had the equivalent of around m antminer s s, and a block time of minutes. each s , costing maybe $ k, could expect a reward about once every years. it would be obsolete in about a year, so only in would ever earn anything. to smooth out their income, miners join pools, contributing their mining power and receiving the corresponding fraction of the rewards earned by the pool.
these pools have strong economies of scale, so successful cryptocurrencies end up with a majority of their mining power in - pools. each of the big pools can expect a reward every hour or so. these blockchains aren’t decentralized, but centralized around a few large pools. at multiple times in one mining pool controlled more than % of the bitcoin mining power. at almost all times since - pools have controlled the majority of the bitcoin mining power. currently two of them, with . % of the power, are controlled by bitmain, the dominant supplier of mining asics. with the advent of mining-as-a-service, % attacks have become endemic among the smaller alt-coins. the security of a blockchain depends upon the assumption that these few pools are not conspiring together outside the blockchain; an assumption that is impossible to verify in the real world (and by murphy's law is therefore false). [slide ] since then there have been other catastrophic bugs in these smart contracts, the biggest one in the parity ethereum wallet software ... the first bug enabled the mass theft from "multisignature" wallets, which supposedly required multiple independent cryptographic signatures on transfers as a way to prevent theft. fortunately, that bug caused limited damage because a good thief stole most of the money and then returned it to the victims. yet, the good news was limited as a subsequent bug rendered all of the new multisignature wallets permanently inaccessible, effectively destroying some $ m in notional value. this buggy code was largely written by gavin wood, the creator of the solidity programming language and one of the founders of ethereum. again, we have a situation where even an expert's efforts fell short. risks of cryptocurrencies, nicholas weaver, u.c. 
berkeley in practice the security of a blockchain depends not merely on the security of the protocol itself, but on the security of both the core software, and the wallets and exchanges used to store and trade its cryptocurrency. this ancillary software has bugs, such as last september's major vulnerability in bitcoin core, the parity wallet fiasco, the routine heists using vulnerabilities in exchange software, and the wallet that was sending users' pass-phrases to the google spell-checker over http. who doesn't need their pass-phrase spell-checked? recent game-theoretic analysis suggests that there are strong economic limits to the security of cryptocurrency-based blockchains. to guarantee safety, the total value of transactions in a block needs to be less than the value of the block reward, which kind of spoils the whole idea. your system needs an append-only data structure to which records of the transactions, files, reviews, archival content, whatever are appended. it would be bad if the miners could vote to re-write history, undoing these records. in the jargon, the system needs to be immutable (another misnomer). [slide ] merkle tree the necessary data structure for this purpose was published by stuart haber and w. scott stornetta in . a company using their technique has been providing a centralized service of securely time-stamping documents for nearly a quarter of a century. it is a form of merkle or hash tree, published by ralph merkle in . for blockchains it is a linear chain to which fixed-size blocks are added at regular intervals. each block contains the hash of its predecessor; a chain of blocks. the blockchain is mutable; it is just rather hard to mutate it without being detected, because of the merkle tree’s hashes, and easy to recover, because there are lots of copies keeping stuff safe. but this is a double-edged sword.
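The chain-of-hashes structure just described, and why mutating it without detection is hard, can be sketched in a few lines (a toy model: real blocks carry timestamps, Merkle roots of many transactions, and proof-of-work, none of which is needed to show the tamper-evidence property):

```python
import hashlib
import json

# Toy hash chain: each block commits to its predecessor's hash,
# so rewriting an old block invalidates every later link.
def block_hash(block):
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()

def append(chain, payload):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload})

def verify(chain):
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
for tx in ["a->b 5", "b->c 2", "c->a 1"]:
    append(chain, tx)
print(verify(chain))              # → True
chain[0]["payload"] = "a->b 500"  # rewrite history
print(verify(chain))              # → False
```

Detection is cheap, but note what the sketch does not give you: nothing stops someone from rewriting a block and then recomputing every later hash, which is why "immutable" is a misnomer without many independent copies.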
immutability makes systems incompatible with the gdpr, and immutable systems to which anyone can post information will be suppressed by governments. [slide ] btc transaction fees cryptokitties’ popularity exploded in early december and had the ethereum network gasping for air. ... ethereum has historically made bold claims that it is able to handle unlimited decentralized applications ... the crypto-kittie app has shown itself to have the power to place all network processing into congestion. ... at its peak [cryptokitties] likely only had about , daily users. neopets, a game to which cryptokitties is often compared, once had as many as million users. how crypto-kitties disrupted the ethereum network, open trading network a user of your system wanting to perform a transaction, store a file, record a review, whatever, needs to persuade miners to include their transaction in a block. miners are coin-operated; you need to pay them to do so. how much do you need to pay them? that question reveals another economic problem, fixed supply and variable demand, which equals variable "price". each block is in effect a blind auction among the pending transactions. so let's talk about cryptokitties, a game that brought the ethereum blockchain to its knees despite the bold claims that it could handle unlimited decentralized applications. how many users did it take to cripple the network? it was far fewer than non-blockchain apps can handle with ease; cryptokitties peaked at about k users. neopets, a similar centralized game, peaked at about , times as many. cryptokitties' average "price" per transaction spiked % between november and december as the game got popular, a major reason why it stopped being popular. the same phenomenon happened during bitcoin's price spike around the same time. cryptocurrency transactions are affordable only if no-one wants to transact; when everyone does they immediately become un-affordable.
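The blind auction for block space can be sketched as greedy fee-rate selection, which is roughly how miners fill blocks in practice (the transaction IDs, fees, and sizes below are invented):

```python
# Block space is fixed; demand is not. Miners fill the limited
# capacity with the highest-paying pending transactions, so the
# clearing fee floats with demand.
def select_transactions(mempool, capacity):
    """Greedy selection by fee rate (fee per unit of size)."""
    ranked = sorted(
        mempool, key=lambda tx: tx["fee"] / tx["size"], reverse=True
    )
    block, used = [], 0
    for tx in ranked:
        if used + tx["size"] <= capacity:
            block.append(tx["id"])
            used += tx["size"]
    return block

mempool = [
    {"id": "t1", "fee": 50, "size": 250},
    {"id": "t2", "fee": 5,  "size": 250},
    {"id": "t3", "fee": 40, "size": 500},
]
# With capacity 500, t3 pays more than t2 but doesn't fit
# alongside t1, so the cheap t2 squeezes in instead.
print(select_transactions(mempool, 500))  # → ['t1', 't2']
```

When demand surges (CryptoKitties, the price spike), everyone bids against everyone else for the same fixed capacity, which is exactly the fixed-supply/variable-demand problem the talk identifies.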
nakamoto's bitcoin blockchain was designed only to support recording transactions. it can be abused for other purposes, such as storing illegal content. but it is likely that you need additional functionality, which is where ethereum's "smart contracts" come in. these are fully functional programs, written in a javascript-like language, embedded in ethereum's blockchain. they are mainly used to implement ponzi schemes, but they can also be used to implement initial coin offerings, games such as cryptokitties, and gambling parlors. further, in on-chain vote buying and the rise of dark daos philip daian and co-authors show that "smart contracts" also provide for untraceable on-chain collusion in which the parties are mutually pseudonymous. [slide ] ico returns the first big smart contract, the dao or decentralized autonomous organization, sought to create a democratic mutual fund where investors could invest their ethereum and then vote on possible investments. approximately % of all ethereum ended up in the dao before someone discovered a reentrancy bug that enabled the attacker to effectively steal all the ethereum. the only reason this bug and theft did not result in global losses is that ethereum developers released a new version of the system that effectively undid the theft by altering the supposedly immutable blockchain. risks of cryptocurrencies, nicholas weaver, u.c. berkeley "smart contracts" are programs, and programs have bugs. some of the bugs are exploitable vulnerabilities. research has shown that the rate at which vulnerabilities in programs are discovered increases with the age of the program. the problems caused by making vulnerable software immutable were revealed by the first major "smart contract". the decentralized autonomous organization (the dao) was released on th april , but on th may dino mark, vlad zamfir, and emin gün sirer posted a call for a temporary moratorium on the dao, pointing out some of its vulnerabilities; it was ignored. 
three weeks later, when the dao contained about % of all the ether in circulation, a combination of these vulnerabilities was used to steal its contents. the loot was restored by a "hard fork", the blockchain's version of mutability. since then it has become the norm for "smart contract" authors to make them "upgradeable", so that bugs can be fixed. "upgradeable" is another way of saying "immutable in name only". [slide ] security researchers from srlabs revealed that a large chunk of the ethereum client software that runs on ethereum nodes has not yet received a patch for a critical security flaw the company discovered earlier this year. "according to our collected data, only two thirds of nodes have been patched so far," said karsten nohl, ... "the parity ethereum has an automated update process - but it suffers from high complexity and some updates are left out," nohl said. all of these issues put all ethereum users at risk, and not just the nodes running unpatched versions. the number of unpatched nodes may not be enough to carry out a direct % attack, but these vulnerable nodes can be crashed to reduce the cost of a % attack on ethereum, currently estimated at around $ , per hour. ... "the patch gap signals a deep-rooted mistrust in central authority, including any authority that can automatically update software on your computer." a large chunk of ethereum clients remain unpatched, catalin cimpanu it isn't just the "smart contracts" that need to be upgradeable, it is the core software for the blockchain. bugs and vulnerabilities are inevitable. if you trust a central authority to update your software automatically, or if you don't but you think others do, what is the point of a permissionless blockchain? [slide ] permissionless systems trust: the core developers of the blockchain software not to write bugs. the developers of your wallet software not to write bugs. the developers of the exchanges not to write bugs.
the operators of the exchanges not to manipulate the markets or to commit fraud. the developers of your upgradeable "smart contracts" not to write bugs. the owners of the smart contracts to keep their secret key secret. the owners of the upgradeable smart contracts to avoid losing their secret key. the owners and operators of the dominant mining pools not to collude. the operators of miners to apply patches in a timely manner. the speculators to provide the funds needed to keep the “price” going up. users' ability to keep their secret key secret. users’ ability to avoid losing their secret key. other users not to transact when you want to. so, this is the list of people your permissionless system has to trust if it is going to work as advertised over the long term. you started out to build a trustless, decentralized system but you have ended up with: a trustless system that trusts a lot of people you have every reason not to trust. a decentralized system that is centralized around a few large mining pools that you have no way of knowing aren’t conspiring together. an immutable system that either has bugs you cannot fix, or is not immutable. a system whose security depends on it being expensive to run, and which is thus dependent upon a continuing inflow of funds from speculators. a system whose coins are convertible into large amounts of "fiat currency" via irreversible pseudonymous transactions, which is thus an irresistible target for crime. if the “price” keeps going up, the temptation for your trust to be violated is considerable. if the "price" starts going down, the temptation to cheat to recover losses is even greater. maybe it is time for a re-think. suppose you give up on the idea that anyone can take part and accept that you have to trust a central authority to decide who can and who can’t vote. you will have a permissioned system.
the first thing that happens is that it is no longer possible to mount a sybil attack, so there is no reason running a node need be expensive. you can use bft to establish consensus, as ibm’s hyperledger, the canonical permissioned blockchain system, plans to. you need many fewer nodes in the network, and running a node just got way cheaper. overall, the aggregated cost of the system got orders of magnitude cheaper. now that there is a central authority, it can collect “fiat currency” for network services and use it to pay the nodes. no need for cryptocurrency, exchanges, pools, speculators, or wallets, so much less temptation for bad behavior. [slide ] permissioned systems trust: the central authority. the software developers. the owners and operators of the nodes. the secrecy of a few private keys. this is now the list of entities you trust. trusting a central authority to determine the voter roll has eliminated the need to trust a whole lot of other entities. the permissioned system is more trustless and, since there is no need for pools, the network is more decentralized despite having fewer nodes. [slide ] faults vs. replicas a byzantine quorum system of size could achieve better decentralization than proof-of-work mining at a much lower resource cost. decentralization in bitcoin and ethereum networks, adem efe gencer, soumya basu, ittay eyal, robbert van renesse and emin gün sirer how many nodes does your permissioned blockchain need? the rule for bft is that 3f + 1 nodes can survive f simultaneous failures. that's an awful lot fewer than you need for a permissionless proof-of-work blockchain. what you get from bft is a system that, unless it encounters more than f simultaneous failures, remains available and operating normally. the problem with bft is that if it encounters more than f simultaneous failures, the state of the system is irrecoverable. if you want a system that can be relied upon for the long term you need a way to recover from disaster.
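The standard BFT sizing arithmetic is a one-liner, and it makes the contrast with proof-of-work concrete: tolerating f Byzantine failures takes 3f + 1 replicas, with quorums of 2f + 1, versus thousands of mining nodes.

```python
# Standard Byzantine fault tolerance sizing: n = 3f + 1 replicas
# tolerate f simultaneous Byzantine failures, with quorums of 2f + 1.
def bft_sizes(f):
    n = 3 * f + 1       # total replicas needed
    quorum = 2 * f + 1  # votes needed to commit
    return n, quorum

for f in (1, 2, 3):
    print(f, bft_sizes(f))
# f=1 → (4, 3); f=2 → (7, 5); f=3 → (10, 7)
```

So a permissioned system tolerating three simultaneous Byzantine failures needs only ten nodes, which is why the aggregated cost drops by orders of magnitude.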
successful permissionless blockchains have lots of copies keeping stuff safe, so recovering from a disaster that doesn't affect all of them is manageable. [slide ] so in addition to implementing bft you need to back up the state of the system each block time, ideally to write-once media so that the attacker can't change it. but if you're going to have an immutable backup of the system's state, and you don't need continuous uptime, you can rely on the backup to recover from failures. in that case you can get away with, say, replicas of the blockchain in conventional databases, saving even more money. i've shown that, whatever consensus mechanism they use, permissionless blockchains are not sustainable for very fundamental economic reasons. these include the need for speculative inflows and mining pools, security linear in cost, economies of scale, and fixed supply vs. variable demand. proof-of-work blockchains are also environmentally unsustainable. the top cryptocurrencies are estimated to use as much energy as the netherlands. this isn't to take away from nakamoto's ingenuity; proof-of-work is the only consensus system shown to work well for permissionless blockchains. the consensus mechanism works, but energy consumption and emergent behaviors at higher levels of the system make it unsustainable. [slide ] mentions in s&p quarterlies still new to nyc, but i met this really cool girl. energy sector analyst or some such. four dates in, she uncovers my love for bitcoin. completely ghosted. zack voell s&p companies are slowly figuring out that there is no there there in blockchains and cryptocurrencies, and they're not the only ones: so if both permissionless and permissioned blockchains are fatally flawed, and experts in both cryptography and economics have been saying so for many years, how come they are generally perceived as huge successes? the story starts in the early s with david chaum. his work on privacy was an early inspiration for the cypherpunks.
many of the cypherpunks were libertarians, so the idea of money not controlled by governments was attractive. but chaum's pioneering digicash was centralized, a fatal flaw in their eyes. it would be two decades before the search for a practical decentralized cryptocurrency culminated with nakamoto's bitcoin. [slide ] bitcoin failed at every one of nakamoto's aspirations here. the price is ridiculously volatile and has had multiple bubbles; the unregulated exchanges (with no central bank backing) front-run their customers, paint the tape to manipulate the price, and are hacked or just steal their user's funds; and transaction fees and the unreliability of transactions make micropayments completely unfeasible. david gerard a parallel but less ideological thread was the idea that the business model for the emerging internet was micropayments. this was among the features nakamoto touted for bitcoin in early , despite the idea having been debunked by clay shirky in and andrew odlyzko in . in fact, none of nakamoto's original goals worked out in practice. but nakamoto was not just extremely clever in the way he assembled the various component technologies into a cryptocurrency, he also had exceptionally good timing. his paper was posted on st october , and met three related needs: just days earlier, on th september lehman brothers had gone bankrupt, precipitating the global financial crisis (the gfc). the gfc greatly increased the demand for flight capital in china. mistaking pseudonymity for anonymity, vendors and customers on the dark web found bitcoin a reassuring means of exchange. a major reason bitcoin was attractive to the libertarian cypherpunks was that many were devotees of the austrian economics cult. because there would only ever be million bitcoin, they believed that, like gold, the price would inevitably increase. consider a currency whose price is doomed to increase. 
it is a mechanism for transferring wealth from later adopters, called suckers, to early adopters, called geniuses. and the cypherpunks were nothing if not early adopters of technology. sure enough, a few of the geniuses turned into "whales", hodl-ing the vast majority of the bitcoin. the gini coefficient of cryptocurrencies is an interesting research question; it is huge but probably less than nouriel roubini's claim of . . the whales needed to turn large amounts of cryptocurrency in their wallets into large numbers in a bank account denominated in "fiat currency". to do this they needed to use an exchange to sell cryptocurrency to a sucker for dollars, and then transfer the dollars from the exchange's bank account into their bank account. [slide ] we’ve had banking hiccups in the past, we’ve just always been able to route around it or deal with it, open up new accounts, or what have you … shift to a new corporate entity, lots of cat and mouse tricks. phil potter of the bitfinex exchange. fowler opened numerous u.s.-based business bank accounts at several different banks, and in opening and using these accounts fowler and yosef falsely represented to those banks that the accounts would be primarily used for real estate investment transactions even though fowler and yosef knew that the accounts would be used, and were in fact used, by fowler, yosef and others to transmit funds on behalf of an unlicensed money transmitting business related to the operation of cryptocurrency exchanges. us vs. reginald fowler and ravid yosef for the exchange to have a bank account, it had to either conform to or evade the "know your customer/anti-money laundering" laws. the whole point of cryptocurrencies is to avoid dealing with banks and laws such as kyc/aml, so most exchanges chose to evade kyc/aml by a cat-and-mouse game of fraudulent accounts. once the banks caught on to the cat-and-mouse game, most exchanges could not trade cryptocurrency for fiat currency. 
To continue, they needed a "stablecoin", a cryptocurrency pegged to the US dollar, as a substitute for actual dollars. The guys behind Bitfinex, one of the sketchier exchanges, invented Tether. They claimed their USDT was backed one-for-one by USD, promising an audit would confirm this. But before an audit appeared they fired their auditors. Earlier this year, after the New York Attorney General sued them, they admitted that USDT was only about 74% backed by USD (except when they accidentally created billions of unbacked USDT), and revealed an $850M hole in Bitfinex's accounts.

[Slide]

"approximately 95% of this volume is fake and/or non-economic in nature, and that the real market for bitcoin is significantly smaller, more orderly, and more regulated than commonly understood."
Bitwise Asset Management's detailed comments to the SEC about BTC/USDT trading on unregulated exchanges

According to blockchain.info, about $ m worth of bitcoin was traded on Friday on the main dollar-based exchanges. Which sounds decent until you notice that about $ bn worth of Tether was traded on Friday, according to CoinMarketCap.
Jemima Kelly, FT Alphaville

There were many USDT exchanges, and competition was intense. Customers wanted the exchange with the highest volume for their trades, so these exchanges created huge volumes of wash trades to inflate their volume; around 95% of all cryptocurrency trades are fake.

[Slide]

"An upset Mt. Gox creditor analyses the data from the bankruptcy trustee’s sale of bitcoins. He thinks he’s demonstrated incompetent dumping by the trustee — but actually shows that a “market cap” made of million BTC can be crashed by selling , BTC, over months, at market prices, which suggests there is no market."
David Gerard

Because there was so little real trading between cryptocurrencies and USD, trades of the size the whales needed would crash the price. It was thus necessary to pump the price before even part of their hodlings could be dumped on the suckers.
[Slide]

P&Ds have dramatic short-term impacts on the prices and volumes of most of the pumped tokens. In the first seconds after the start of a P&D, the price increases by % on average, trading volume increases times, and the average -second absolute return reaches %. A quick reversal begins seconds after the start of the P&D. ... For an average P&D, investors make one Bitcoin (about $ , ) in profit, approximately one-third of a token’s daily trading volume. The trading volume during the minutes before the pump is % of the total volume during the minutes after the pump. This implies that an average trade in the first minutes after a pump has a % chance of trading against these insiders, and on average they lose more than %.
Cryptocurrency Pump-and-Dump Schemes, Tao Li, Donghwa Shin and Baolian Wang

Off-chain collusion among cryptocurrency traders allows for extremely profitable pump-and-dump schemes, especially given the thin trading in "alt-coins". But the major pumps, such as the one currently under way, come from the creation of huge volumes of USDT, in this case about one billion USDT per month.

[Slide]

Issuance of USDT

[In April] there were about $ billion worth of Tethers on the market. Since then, Tether has gone on a frenzied issuance spree. In the month of May, the stablecoin company pumped out $ billion worth of Tethers into the market. And this month, it is on track for another $ billion. Currently, there are roughly $ . billion worth of Tethers sloshing around in the Bitcoin markets. Whether this money is backed by real dollars is anyone's guess.
Amy Castor

Who would believe that pushing a billion "dollars" a month that can only be used to buy cryptocurrency into the market might cause people to buy cryptocurrency and drive the price up? If we believe Bitfinex that the portion of USDT that isn't backed by USD is held in cryptocurrencies, that might provide a motive for a massive pump to recover, say, $850M in losses.
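The spike-and-reversal signature described in the quoted study lends itself to a trivially simple detector. A toy sketch, where the window, threshold, and price series are all made up for illustration:

```python
def flag_pumps(prices, window=5, threshold=0.20):
    """Flag indices where the price rose more than `threshold` within `window` ticks,
    the signature of a pump-and-dump's initial spike (toy detector, invented data)."""
    flags = []
    for i in range(window, len(prices)):
        ret = prices[i] / prices[i - window] - 1.0
        if ret > threshold:
            flags.append(i)
    return flags

# Hypothetical alt-coin tick prices: flat, then a coordinated pump, then the dump
prices = [1.00] * 10 + [1.05, 1.15, 1.30, 1.28, 1.10, 1.02]
print(flag_pumps(prices))  # → [12, 13]
```

Real P&D studies work from timestamped exchange feeds and control for volume, but the underlying pattern they detect is this simple: a large short-window return followed by a quick reversal.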
I want to end by talking about a technology with important implications for software supply chain security that looks like, but isn't, a blockchain.

[Slide]

A green padlock (with or without an organization name) indicates that:
- You are definitely connected to the website whose address is shown in the address bar; the connection has not been intercepted.
- The connection between Firefox and the website is encrypted to prevent eavesdropping.
How do I tell if my connection to a website is secure?, Mozilla

How do I know that I'm talking to the right web site? Because there's a closed padlock icon in the URL bar, right? The padlock icon appears when the browser has verified that the connection to the URL in the URL bar supplied a certificate for the site in question carrying a signature chain ending in one of the root certificates the browser trusts. Browsers come with a default list of root certificates from Certificate Authorities (CAs). My current Firefox browser trusts root certificates from a large number of unique organizations, among them foreign governments but not the US government. Some of the organizations whose root certificates my browser trusts are known to have abused this trust, allowing miscreants to impersonate sites, spy on users, and sign malware so it appears to be coming from, for example, Microsoft or Apple.

[Slide]

A crucial technical property of the HTTPS authentication model is that any CA can sign certificates for any domain name. In other words, literally anyone can request a certificate for a Google domain at any CA anywhere in the world, even when Google itself has contracted one particular CA to sign its certificate.
Security Collapse in the HTTPS Market, Axel Arnbak et al

For example, Google discovered that "Symantec CAs have improperly issued more than 30,000 certificates". But browsers still trust Symantec CAs; their market share is so large the web would collapse if they didn't.
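The structural weakness Arnbak et al describe can be made concrete in a few lines. This toy model of browser validation (CA names and the dict-based "certificate" are invented for illustration, not any real API) shows that validation asks only "does the chain end at some trusted root?", never "is this the CA the domain owner actually chose?":

```python
# The browser's bundled root store (hypothetical names)
TRUSTED_ROOTS = {"HonestRoot CA", "SketchyRoot CA"}

def validate(cert):
    """Toy validation: accept any cert whose issuer is in the root store."""
    return cert["issuer"] in TRUSTED_ROOTS

# The cert Google actually contracted for, and a mis-issued one from another root
legitimate = {"domain": "google.com", "issuer": "HonestRoot CA"}
mis_issued = {"domain": "google.com", "issuer": "SketchyRoot CA"}

print(validate(legitimate))  # True
print(validate(mis_issued))  # True — the browser cannot tell the difference
```

Real validation checks signatures, expiry, and name constraints, but none of that changes the core problem: any one of the trusted roots can vouch for any domain.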
As things stand, clients have no way of knowing whether the root of trust for a certificate, say for the Library of Congress, is the one the Library intended, or a spoof from some CA in Turkey or China. Google started work on an approach based on Ronald Reagan's "trust but verify" paradigm, called Certificate Transparency (CT). The basic idea is to accompany the certificate with a hash of the certificate signed by a trusted third party, attesting that the certificate holder told the third party that the certificate with that hash was current. Thus in order to spoof a service, an attacker would have to both obtain a fraudulent certificate from a CA, and somehow persuade the third party to sign a statement that the service had told them the fraudulent certificate was current. Clearly this is:
- more secure than the current situation, which requires only compromising a CA, and
- more effective than client-only approaches, which can detect that a certificate has changed but not whether the change was authorized.

Clients now need two lists of trusted third parties, the CAs and the sources of CT attestations. The need for these trusted third parties is where the blockchain enthusiasts would jump in and claim (falsely) that using a blockchain would eliminate the need for trust. In the real world it isn't feasible to solve the problem of untrustworthy CAs by eliminating the need for trust. CT's approach instead is to provide a mechanism by which breaches of trust, both by the CAs and by the attestors, can be rapidly and unambiguously detected. This can be done because:
- certificate owners obtain attestations from multiple sources, who are motivated not to conspire;
- clients can verify these multiple attestations;
- the attestors publish Merkle trees of their attestations, which can be verified by their competitors.

[Slide]

Each log operates independently. Each log gets its content directly from the CAs, not via replication from other logs.
Each log contains a subset of the total information content of the system. There is no consensus mechanism operating between the logs, so it cannot be abused by, for example, a 51% attack. Monitoring and auditing is asynchronous to web content delivery, so denial of service against the monitors and auditors cannot prevent clients obtaining service.
Certificate Transparency, David S. H. Rosenthal

How do I know I'm running the right software, and that no-one has implanted a backdoor? Right now there is no equivalent of CT for the signatures that purport to verify software downloads, and this is one reason for the rash of software supply chain attacks. The open source community has a long-standing effort to use CT-like techniques not merely to enhance the reliability of the signatures on downloads, but more importantly to verify that a binary download was compiled from the exact source code it claims to represent. The reason this project is taking a long time is that it is woefully under-funded, and it is a huge amount of work. It depends on ensuring that the build process for each package is reproducible, so that given the source code and the build specification, anyone can run the build and generate bit-for-bit identical results.

To give you some idea of how hard this is, the UK government has been working with Huawei to make their router software builds reproducible, so that they know the binaries running in the UK's routers match the source code Huawei disclosed. Huawei expects the program to take several more years. With a few million dollars in funding, in a couple of years the open source community could finish making the major Linux distributions reproducible and implement CT-like assurance that the software you were running matched the source code in the repositories, with no hidden backdoors. I would think this would be something the DoD would be interested in.

Thank you for your attention, I'm ready for questions.

Posted by David. Labels: bitcoin

Comments:

David.
said... the topics of the questions i remember were:
1) use of cryptocurrency for money laundering and terrorism funding.
2) enforcement actions by governments.
3) use of blockchain technology by major corporations.
4) pr for libertarian politics by cryptocurrency hodlers.
5) relative security of decentralized vs. centralized blockchains.
i will shortly add comments addressing them, with links to sources. july , at : am

david. said... 1) use of cryptocurrency for money laundering and terrorism funding. in general it is a bad idea to commit crimes using an immutable public blockchain. pseudonymous blockchains such as bitcoin's require extremely careful op-sec if the pseudonym is not to be linked to web trackers and cookies (see, for example, when the cookie meets the blockchain: privacy risks of web payments via cryptocurrencies by steven goldfeder et al). there are cryptocurrencies with stronger privacy features, such as zcash and monero. these are more popular among malefactors than bitcoin. but turning cryptocurrencies into fiat currency with which to buy your lamborghini while remaining anonymous faces difficulties. users of exchanges that observe kyc/aml, such as coinbase, will need to explain the source of funds to the tax authorities. the irs recently sent letters to coinbase users reminding them of their obligation to report the gains and losses on every single transaction. north korea is reputed to be very active in stealing cryptocurrency via exchange hacks and other techniques. 2) enforcement actions by governments. see my post regulating cryptocurrencies and the comments to it. 3) use of blockchain technology by major corporations. see, for example, blockchain for international development: using a learning agenda to address knowledge gaps by john burg, christine murphy, & jean paul pétraud. and this, from david gerard: "bundesbank and deutsche boerse try settlements on the blockchain. you’ll be amazed to hear that it was slower and more expensive.
“despite numerous tests of blockchain-based prototypes, a real breakthrough in application is missing so far.” but at least it “in principle fulfilled all basic regulatory features for financial transactions.” july , at : pm david. said... 4) pr for libertarian politics by cryptocurrency hodlers. john mcafee is running for us president. see also laurie penny's must-read four days trapped at sea with crypto’s nouveau riche. 5) relative security of decentralized vs. centralized blockchains. as i described above, at scale anything claiming to be a "decentralized blockchain" isn't going to be decentralized. economic forces will have centralized it around a small number of mining pools. see decentralization in bitcoin and ethereum networks by adem efe gencer, soumya basu, ittay eyal, robbert van renesse and emin gün sirer. its security will depend upon those pools not conspiring together, among many other things (slide ). centralized, permissioned blockchains have fewer vulnerabilities, but their central authority is a single point of failure. iirc the questioner used the phrase "100% secure". no networked computer system is ever 100% secure. 6) i seem to remember also a question on pump-and-dump schemes. the current pump is via tether. social capital has a series explaining tether and the "stablecoin" scam: * pumps, spoofs and boiler rooms * tether, part one: the stablecoin dream * tether, part two: pokedex * tether, part three: crypto island july , at : pm david. said... north korea took $2 billion in cyberattacks to fund weapons program: u.n. report by michelle nichols reports that: "north korea has generated an estimated $2 billion for its weapons of mass destruction programs using “widespread and increasingly sophisticated” cyberattacks to steal from banks and cryptocurrency exchanges, according to a confidential u.n. report seen by reuters on monday." august , at : pm david. said... timothy b.
lee debunks the idea of bitcoin for purchases in i tried to pay with bitcoin at a mexico city bar—it didn’t go well: "so we gave up and paid with a conventional credit card. after leaving the bar, i sent off an email to the support address listed on my receipt. the next morning, i got a response: "transactions under [ , pesos] are taking a day to two, in the course of today they will reach the wallet." i finally got my bitcoins around pm." the bar is called bitcoin embassy: "does bitcoin embassy pay its employees in bitcoin? "i always tell them i can pay you in bitcoin if you want to, but they don't want to," ortiz says." august , at : am david. said... clare duffy reports that the fed is getting into the real-time payments business: "the fed announced monday that it will develop a real-time payment service called "fednow" to help move money around the economy more quickly. it's the kind of government service that companies and consumers have been requesting for years — one that already exists in other countries. the service could also compete with solutions already developed in the private sector by big banks and tech companies. the fed itself is not setting up a consumer bank, but it has always played a behind-the-scenes role facilitating the movement of money between banks and helping to verify transactions. this new system would help cut down on the amount of time between when money is deposited into an account and when it is available for use. fednow would operate all hours and days of the week, with an aim to launch in or ." "real-time payments" are something that enthusiasts see as a competitive edge for cryptocurrencies against fiat currencies. this is strange, for two reasons: 1) in most countries except the us, instantaneous inter-bank transfers have been routine for years. but the enthusiasts are so ignorant of the way the world outside the us works that they don't know this.
similar ignorance was evident in the way facebook thought that libra would "bank the unbanked" in the third world. 2) cryptocurrency transfers are not in practice real-time. bitcoin users are advised to wait 6 block times (one hour) before treating a transaction as confirmed. august , at : pm david. said... jemima kelly's when bitcoin bros talk cryptography provides an excellent example of the hype surrounding cryptocurrencies. anthony pompliano, a "crypto fund manager" who "has over half his net worth in bitcoin" was talking (his book) to cnbc and: "when one of the cnbc journalists put it to pomp that just because bitcoin is scarce that doesn’t necessarily make it valuable, as “there are a lot of things that are scarce that nobody cares about”, pomp said: of course. look, if you don’t believe in bitcoin, you’re essentially saying you don’t believe in cryptography. have a watch for yourself here (and count the seconds it takes for the others to recover from his comment, around the . mark):" the video is here. august , at : pm david. said... more on the fed's real-time payment proposal in the fed is going to revamp how americans pay for things. big banks aren’t happy from mit technology review. august , at : am david. said... trail of bits has released: "findings from the full final reports for twenty-three paid security audits of smart contract code we performed, five of which have been kept private. the public audit reports are available online, and make informative reading. we categorized all smart-contract related findings from these reports" the bottom line is that smart contracts are programs, and programs have bugs. using current automated tools can find some but not all of them. august , at : pm david. said... brenna smith's the evolution of bitcoin in terrorist financing makes interesting and somewhat scary reading: "terrorists’ early attempts at using cryptocurrencies were filled with false starts and mistakes.
however, terrorists are nothing if not tenacious, and through these mistakes, they’ve grown to have a highly sophisticated understanding of blockchain technology. this investigation outlines the evolution of terrorists’ public bitcoin funding campaigns starting from the beginning and ending with the innovative solutions various groups have cooked up to make the technology work in their favor." august , at : pm david. said... larry cermak has a twitter thread that starts: "it’s now obvious that icos were a massive bubble that's unlikely to ever see a recovery. the median ico return in terms of usd is - % and constantly dropping. let's look at some data!" hat tip to david gerard. august , at : pm david. said... the abstract for the european central bank's in search for stability in crypto-assets: are stablecoins the solution? reads: "stablecoins claim to stabilise the value of major currencies in the volatile crypto-asset market. this paper describes the often complex functioning of different types of stablecoins and proposes a taxonomy of stablecoin initiatives. to this end it relies on a novel framework for their classification, based on the key dimensions that matter for crypto-assets, namely: (i) accountability of issuer, (ii) decentralisation of responsibilities, and (iii) what underpins the value of the asset. the analysis of different types of stablecoins shows a trade-off between the novelty of the stabilisation mechanism used in an initiative (from mirroring the traditional electronic money approach to the alleged introduction of an “algorithmic central bank”) and its capacity to maintain a stable market value. while relatively less innovative stablecoins could provide a solution to users seeking a stable store of value, especially if legitimised by the adherence to standards that are typical of payment services, the jury is still out on the potential future role of more innovative stablecoins outside their core user base." august , at : am david. said... 
david gerard writes: "tethers as erc- tokens on the ethereum blockchain are so popular that they’re flooding ethereum with transactions, and clogging the blockchain — “yesterday i had to wait and half hours for a standard transfer to go through.” ethereum is the world computer, as long as you don’t try to use it for any sort of real application. another million tethers were also printed today." september , at : pm david. said... david gerard has been researching libra, and has two posts up on the topic. today's is switzerland’s guidance on stablecoins — what it means for facebook’s libra: "libra will need to register as a bank and as a payment provider (a money transmitter). it probably won’t need to register as a collective investment scheme for retail investors. finma notes explicitly: “the highest international anti-money laundering standards would need to be ensured throughout the entire ecosystem of the project” — and that libra in particular requires an “internationally coordinated approach.” so the effective consequence is that libra will be a coin for well-documented end users in highly regulated rich countries, and not so available in poorer ones." yesterday's was your questions about facebook libra — as best as we can answer them as yet (my emphasis): "as i write this, calibra.com, the big splash page for calibra, doesn’t work in firefox — only in chrome. this is how companies behave toward products they don’t really take seriously. facebook also forgot to buy the obvious typo, colibra.com — which is a domain squatter holding page." facebook is under mounting anti-trust pressure, both in the us and elsewhere, and it is starting to look like cost-of-doing-business fines are no longer the worst that can happen. my take on libra is that facebook is floating it as a bargaining chip - in the inevitable negotiations on enforcement measures facebook can sacrifice libra to protect more valuable assets. september , at : pm david. said... 
claire jones and izabella kaminska's libra is imperialism by stealth points out that, in practice, currency-backed stablecoins like libra and ( % of) tether are tied to the us dollar. argentina and zimbabwe are just two examples showing how bad an idea dollarizing your economy is: "a common criticism against dollarisation (and currency blocs) is that they are a form of neocolonialism, handing global powers -- whether they are states or tech behemoths -- another means of exercising control over more vulnerable players. stablecoins backed by dollar assets are part of the same problem, which is why we believe their adoption in places like argentina would constitute imperialism by stealth." september , at : am david. said... dan goodin writes about a statement from the us treasury announcing sanctions against north korean hacking groups: "north korean hacking operations have also targeted virtual asset providers and cryptocurrency exchanges, possibly in an attempt to obfuscate revenue streams used to support the countries weapons programs. the statement also cited industry reports saying that the three north korean groups likely stole about $ million in cryptocurrency from five exchanges in asia between january and september . news agencies including reuters have cited a united nations report from last month that estimated north korean hacking has generated $ billion for the country’s weapons of mass destruction programs." september , at : pm david. said... tether slammed as “part-fraud, part-pump-and-dump, and part-money laundering” by jemima kelly suggests some forthcoming increase in transparency about tether: "a class-action lawsuit was filed against tether, bitfinex (a sister crypto exchange), and a handful of others. the suit was made public on monday, having been filed on saturday in court of the southern district of new york by vel freedman and kyle roche. 
notably, they are the same lawyers who recently (and successfully) sued craig wright on behalf of ira kleiman." october , at : am david. said... the abstract of cryptodamages: monetary value estimates of the air pollution and human health impacts of cryptocurrency mining by goodkind et al reads: "cryptocurrency mining uses significant amounts of energy as part of the proof-of-work time-stamping scheme to add new blocks to the chain. expanding upon previously calculated energy use patterns for mining four prominent cryptocurrencies (bitcoin, ethereum, litecoin, and monero), we estimate the per coin economic damages of air pollution emissions and associated human mortality and climate impacts of mining these cryptocurrencies in the us and china. results indicate that in , each $ of bitcoin value created was responsible for $ . in health and climate damages in the us and $ . in china. the similar value in china relative to the us occurs despite the extremely large disparity between the value of a statistical life estimate for the us relative to that of china. further, with each cryptocurrency, the rising electricity requirements to produce a single coin can lead to an almost inevitable cliff of negative net social benefits, absent perpetual price increases. for example, in december , our results illustrate a case (for bitcoin) where the health and climate change “cryptodamages” roughly match each $ of coin value created. we close with discussion of policy implications." october , at : am david. said... ian allison's foreign exchange giant cls admits: no, we don’t need a blockchain for that starts: "blockchain technology is nice to have, but it’s hardly a must for rewiring the global financial markets. so says alan marquard, chief strategy and development officer at cls group, the global utility for settling foreign exchange trades, owned by the largest banks active in that market. 
nearly a year ago, it went live with clsnet, touted as “the first global fx market enterprise application running on blockchain in production,” with megabanks goldman sachs, morgan stanley, and bank of china (hong kong) on board. clsnet was built on hyperledger fabric, the enterprise blockchain platform developed by ibm. but a blockchain was not the obvious solution for netting down high volumes of fx trades in currencies, marquard said recently." october , at : pm david. said... preston byrne's fear and loathing on the blockchain: leibowitz et al. v. ifinex et al. is a must-read summary of the initial pleadings in the civil case just filed against tether and bitfinex. byrne explains how the risks for the defendants are different from earlier legal actions: "being a civil case, protections bitfinex might be able to rely on in other contexts, such as the fourth amendment in any criminal action, arguing that the martin act doesn't confer jurisdiction over bitfinex's activities, or arguing that an administrative subpoena served on it by the new york attorney general is overbroad, won't apply here. discovery has the potential to be broader and deeper than bitfinex has shown, to date, that it is comfortable with. the consequence of defaulting could be financially catastrophic. the burden of proof is lower, too, than it would be with a criminal case (balance of probabilities rather than beyond a reasonable doubt)." october , at : pm david. said... david gerard provides some good advice: "if you’re going to do crimes, don’t do them on a permanent immutable public ledger of all transactions — and especially, don’t do crimes reprehensible enough that everyone gets together to come after you." from the chainalysis blog: "today, the department of justice announced the shutdown of the largest ever child pornography site by amount of material stored, along with the arrest of its owner and operator. more than site users across countries have also been arrested so far. 
most importantly, as of today, at least minors were identified and rescued from their abusers as a result of this investigation. u.s. attorney jessie k. liu put it best: “children around the world are safer because of the actions taken by u.s. and foreign law enforcement to prosecute this case and recover funds for victims.” commenting on the investigation itself, irs-criminal investigations chief don fort mentioned the importance of the sophisticated tracing of bitcoin transactions in order to identify the administrator of the website. we’re proud to say that chainalysis products provided assistance in this area, helping investigators analyze the website’s cryptocurrency transactions that ultimately led to the arrests. ... when law enforcement shut down the site, they seized over terabytes of child pornography, making it one of the largest seizures of its kind. the site had . million bitcoin addresses registered. between and , the site received nearly $ , worth of bitcoin across thousands of individual transactions." october , at : pm david. said... tim swanson has updated his post from august entitled how much electricity is consumed by bitcoin, bitcoin cash, ethereum, litecoin, and monero? which concluded as much as the netherlands. in have pow blockchains become less resource intensive? he concludes that: "in aggregate, based on the numbers above, these five pow coins likely consume between . billion kwh and . billion kwh annually. that’s somewhere around switzerland on the low end to finland or pakistan near the upper end. it is likely much closer to the upper bound because the calculations above all assumed little energy loss ‘at the wall’ when in fact there is often % or more energy loss depending on the setup. this is a little lower than last year, where we used a similar method and found that these pow networks may consume as much resources as the netherlands. why the decline?
all of it is due to the large decline in coin prices over the preceding time period. again, miners will consume resources up to the value of a block reward wherein the marginal cost to mine equals the marginal value of the coin (mc=mv)." october , at : pm david. said... what's blockchain actually good for, anyway? for now, not much by gregory barber has a wealth of examples of blockchain hype fizzling but, being a journalist, he can't bring himself to reach an actual conclusion: "“decentralized” projects represent a tiny portion of corporate blockchain efforts, perhaps percent, says apolline blandin, a researcher at the cambridge centre for alternative finance. the rest take shortcuts. so-called permissioned blockchains borrow ideas and terms from bitcoin, but cut corners in the name of speed and simplicity. they retain central entities that control the data, doing away with the central innovation of blockchains. blandin has a name for those projects: “blockchain memes.” hype and lavish funding fueled many such projects. but often, the same applications could be built with less-splashy technology. as the buzzwords wear off, some have begun asking, what’s the point?" november , at : pm david. said... patrick mckenzie's tether: the story so far is now the one-stop go-to explainer for tether and bitfinex: "a friend of mine, who works in finance, asked me to explain what tether was. short version: tether is the internal accounting system for the largest fraud since madoff. read on for the long version." you need to follow his advice. november , at : pm david. said... a lone bitcoin whale likely fueled surge, study finds by matthew leising and matt robinson reports on an update to 's is bitcoin really un-tethered?: "one entity on the cryptocurrency exchange bitfinex appears capable of sending the price of bitcoin higher when it falls below certain thresholds, according to university of texas professor john griffin and ohio state university’s amin shams. 
griffin and shams, who have updated a paper they first published in 2018, say the transactions rely on tether, a widely used digital token that is meant to hold its value at $1." november , at : pm david. said... today's news emphasizes that using "trustless" systems requires trusting a lot more than just the core software. first, dan goodin's official monero website is hacked to deliver currency-stealing malware: "the supply-chain attack came to light on monday when a site user reported that the cryptographic hash for a command-line interface wallet downloaded from the site didn't match the hash listed on the page. over the next several hours, users discovered that the mismatching hash wasn't the result of an error. instead, it was an attack designed to infect getmonero users with malware. site officials later confirmed that finding." second, david gerard reports that canada’s einstein exchange shuts down — and all the money’s gone: "yet another canadian crypto exchange goes down — this time, the einstein exchange in vancouver, british columbia, canada, which was shut down by the british columbia securities commission on november. yesterday, november, the news came out that there’s nothing left at all — the money and cryptos are gone." november , at : pm david. said... jonathan syu tweets: "the whole decentralization experiment is really just an attempt to figure out what's the maximum amount of control i can maintain over a system without having any legal accountability over what happens to it." november , at : pm david. said... electronic voting machines mean you can't trust the result of the election, but there are even worse ways to elect politicians. one is internet voting. but among the very worst is to use (drum-roll) blockchain technology! to understand why it is so bad, read what we don’t know about the voatz “blockchain” internet voting system by david jefferson, duncan buell, kevin skoglund, joe kiniry and joshua greenbaum.
the list of unknowns covers ten pages, and every one should disqualify the system from use. november , at : pm david. said... nathaniel rich's ponzi schemes, private yachts, and a missing $ million in crypto: the strange tale of quadriga is a great illustration of the kind of people you have to trust to use trustless cryptocurrencies. the subhead reads: "when canadian blockchain whiz gerald cotten died unexpectedly last year, hundreds of millions of dollars in investor funds vanished into the crypto ether. but when the banks, the law, and the forces of reddit tried to track down the cash, it turned out the young mogul may not have been who he purported to be." it is a fascinating story - go read it. november , at : am david. said... among the people you have to trust to use a trustless system are the core developers. people like virgil griffith, a core ethereum developer. read about him in david gerard's virgil griffith arrested over north korea visit — engineer arrogance, but on the blockchain and see whether you think he's trustworthy. december , at : am david. said... i'm shocked, shocked to find illegal activities going on here! celia wan reports that: "there are over $ million worth of illicit activities conducted via xrp, and a large portion of these activities are scams and ponzi schemes, elliptic, a blockchain analytics startup, found in its research. the uk-based startup announced on wednesday that it can now track the transactions of xrp, marking the th digital asset the firm supports. according to elliptic co-founder tom robinson, the firm is the first to have transaction monitoring capacity for xrp and it has already identified several hundred xrp accounts related to illegal activities." december , at : pm david. said... jemima kelly's when is a blockchain startup not a blockchain startup?
recounts yet another company discovering that permissioned blockchains are just an inefficient way to implement things you can do with a regular database: "it’s awkward when you set up a business around a technology that you reckon is going to disrupt global finance so you name your business after said technology, send your ceo on speaking tours to evangelise about said technology, but then decide that said technology isn’t going to do anything useful for you, isn’t it? digital asset was one of the pioneers of blockchain-in-finance, with the idea that it could make clearing and trade settlement sexy again (well ok maybe not but at least make it faster and more efficient). its erstwhile ceo blythe masters, a glamorous former jp morgan executive who was credited with/blamed for pioneering credit-default swaps, trotted around the globe telling people that blockchain was “email for money”. ... what is unique about this “blockchain start-up” is that although it is designed to work with blockchain platforms, it can actually work with any old database." december , at : am david. said... more dents in the credibility of the btc "price" from yashu gola's how a whale crashed bitcoin to sub-$ , overnight: "bitcoin lost billions of dollars worth of valuation within a -minute timeframe as a chinese cryptocurrency scammer allegedly liquidated its steal via over-the-counter markets. the initial sell-off by plustoken caused a domino effect, causing mass liquidations. plustoken, a fraud scheme that duped investors of more than $ bn, dumped huge bitcoin stockpiles from its anonymous accounts, according to chainalysis." december , at : am david. said... plustoken scammers didn’t just steal $ + billion worth of cryptocurrency. they may also be driving down the price of bitcoin from chainalysis has the details of the plustoken scammers' use of the huobi exchange to cash out the winnings from their ponzi scheme.
and there is more where that came from: "they’ve cashed out at least , of that initial , eth, while the other , has been sitting untouched in a single ethereum wallet for months. the flow of the , stolen bitcoin is more complicated. so far, roughly , of it has been cashed out." december , at : pm david. said... in china electricity crackdown sparks concerns, paul muir reports on yet another way in which bitcoin mining is centralized: "a recent crackdown in china on bitcoin miners who were using electricity illegally – about , machines were seized in hebei and shanxi provinces – raises concerns about the danger of so much of the leading cryptocurrency’s hash rate being concentrated in the totalitarian country, according to a crypto industry observer. ... however, he pointed out that just four regions in china account for % of the world’s hash rate and sichuan alone is responsible for %. therefore, if china decides to shut down network access, it could be very problematic." december , at : am david. said... catalin cimpanu's chrome extension caught stealing crypto-wallet private keys is yet another example of the things you have to trust to work in the "trustless" world of cryptocurrencies: "a google chrome extension was caught injecting javascript code on web pages to steal passwords and private keys from cryptocurrency wallets and cryptocurrency portals. the extension is named shitcoin wallet (chrome extension id: ckkgmccefffnbbalkmbbgebbojjogffn), and was launched last month, on december ." january , at : am david. said... in blockchain, all over your face, jemima kelly writes: "enter the “blockchain creme” from cosmetique bio naturelle suisse (translation: swiss organic natural cosmetics). we thought it must be a joke when we first heard about it ... but yet here it is, being sold on the actual internet:" january , at : am david. said... adriana hamacher's is this the end of malta's reign as blockchain island?
reports on reality breaking in on malta's blockchain hype: "malta’s technicolor blockchain dream has turned an ominous shade of grey. last weekend, prime minister joseph muscat—chief architect of the tiny mediterranean island’s pioneering policies in blockchain, gaming and ai—was obliged to step down amid the crisis surrounding the murder of investigative journalist daphne caruana galizia." it appears that the government's response to her criticisms was to put a large bomb in her car: "anonymous maltese blogger bugm, one of many determined to bring caruana galizia’s killers to justice, believes that an aggressively pro-blockchain policy was seized upon by the government to distract attention from the high-profile murder investigation that ensued." the whole article is worth a read. january , at : am david. said... jill carlson's trust no one. not even a blockchain is a skeptical response to emily parker’s credulous the truth is all there is. but in focusing on "garbage in, garbage out" carlson is insufficiently skeptical about the immutability of blockchains. january , at : am david. said... david canellis reports that bitcoin gold hit by % attacks, $ k in cryptocurrency double-spent: "malicious cryptocurrency miners took control of bitcoin gold’s blockchain recently to double-spend $ , worth of btg. bad actors assumed a majority of the network’s processing power (hash rate) to re-organize the blockchain twice between thursday and friday last week: the first netted attackers , btg ($ , ), and the second roughly , btg ($ , ). cryptocurrency developer james lovejoy estimates the miners spent just $ , to perform each of the attacks, based on prices from hash rate marketplace nicehash. this marks the second and third times bitcoin gold has suffered such incidents in two years." mutating immutability can be profitable. investing $ . k to get $ k is a , % return in days. find me another investment with that rate of return! january , at : pm david. said...
john nugée's what libra means for money creation points out two big problems that libra, or any private stablecoin system, would have for the economy: "the introduction of libra threatens to split the banking sector’s currently unified balance sheet, by moving a significant proportion of customer deposits (that is, the banking system’s liabilities) to the digital currency issuers, while leaving customer loans and overdrafts (the banking system’s assets) with the banks. the inevitable result of this would be to force the banks to reduce the asset side of their balance sheet to match the reduced liability side – in other words reduce their loans. this would almost certainly lead to a major credit squeeze, which would be highly damaging to economic activity." and: "it is by no means clear that such a private sector payment system would be cheaper to operate than the existing bank-based system even on the narrow point of cost per transaction, particularly if, as seems probable, one digital currency soon becomes dominant to the exclusion of competitors. but there is the wider issue of whether society is advantaged by big tech creaming off yet more money from the economy into an unaccountable, untaxable and often overseas behemoth." february , at : am david. said... yet another company doing real stuff that started out enthusiastic finds out they don't need a blockchain: "we have run several pocs integrating blockchain technology but we so far decided to run our core services without blockchain technology. meaning, the solutions that we are already providing are working fine without dlt." february , at : pm david. said... drug dealer loses codes for € . m bitcoin accounts by conor lally reports that btc are frozen in wallets for which the codes have been lost. presumably there is a fairly constant average background rate at which btc are frozen in this way, in effect being destroyed. btc are being created by the block reward, which is decreasing. 
eventually, the rate of creation will fall below the rate of freezing and the universe of usable btc will shrink, driving the value of the remaining btc "to the moon". february , at : am david. said... david gerard summarizes libra: "libra can only work if libra can evade regulation — and simultaneously, that no system that libra’s competing with can evade regulation. and regulators would have to let libra do this, for some reason. i’m not convinced." march , at : am david. said... trolly mctrollface's theory about the backing for tether is worth considering: "we all know miners need real cash to pay their electricity bills. they could sell their rewards on exchanges - which someone (cough tether cough) would have to buy, to prevent $btc from crashing. but when everyone knows that everyone knows, something different happens. tether has real money, because bitfinex has real money from shearing muppets dumb enough to trade on its exchange. but why would they buy miners' bitcoins, when they can loan them the money instead, and get a death grip on their balls? tether is secured by these loans, not cash. in any case, tether would be secured by loans, not cash. nobody keeps $ b in cash in a bank account, especially not the kind of bank that would accept tether's pedo laundromat money. too much credit risk. ... what if, ... you called up bitcoin miners, and offered them a lifeline, promising them to pay their electricity bills, in exchange for a small favour - a promise they won't sell their bitcoin for a while? let's put these bitcoins in escrow, or, in finance words, let's offer miners a loan secured by their bitcoins. miners are happy because they can pay their bills without going through all the trouble of selling their rewards, while tether is happy because, well, when someone owes you a lot of money, you have a metaphorical gun to his head." hat tip to david gerard, who writes: "in just six weeks, tether’s circulation has doubled to billion usdt!
gosh, those stablecoins sure are popular! ... don’t believe those tether conspiracy theorists who think that . billion tethers being pumped into the crypto markets since march has anything to do with keeping the bitcoin price pumped" may , at : am david. said... jemima kelly's goldman sachs betrays bitcoin reports that goldman has seen the light: "we believe that a security whose appreciation is primarily dependent on whether someone else is willing to pay a higher price for it is not a suitable investment for our clients." may , at : am david. said... crimes on a public immutable ledger are risky, as three alleged perpetrators of the twitter hack discovered: "three individuals have been charged today for their alleged roles in the twitter hack that occurred on july , , the us department of justice has announced. ... the cyber crimes unit “analyzed the blockchain and de-anonymized bitcoin transactions allowing for the identification of two different hackers. this case serves as a great example of how following the money, international collaboration, and public-private partnerships can work to successfully take down a perceived anonymous criminal enterprise,” agent jackson said." july , at : pm david. said... implausibly good opsec is necessary if you're committing crimes on an immutable public ledger. tim cushing's fbi used information from an online forum hacking to track down one of the hackers behind the massive twitter attack reveals how mason john sheppard was exposed as one of the perpetrators when his purchase of a video game username sent bitcoin to address zsdvpv rkdiqn v v w fdqvk pdf . the fbi found this in a public database resulting from the compromise of an on-line forum: "available on various websites since approximately april . on or about april , , the fbi obtained a copy of this database. 
the fbi found that the database included all public forum postings, private messages between users, ip addresses, email addresses, and additional user information. also included for each user was a list of the ip addresses that user used to log into the service along with a corresponding date and timestamp." august , at : am david. said... alex de vries' bitcoin’s energy consumption is underestimated: a market dynamics approach shows that: "most of the currently used methods to estimate bitcoin’s energy demand are still prone to providing optimistic estimates. this happens because they apply static assumptions in defining both market circumstances (e.g. the price of available electricity) as well as the subsequent behavior of market participants. in reality, market circumstances are dynamic, and this should be expected to affect the preferences of those participating in the bitcoin mining industry. the various choices market participants make ultimately determines the amount of resources consumed by the bitcoin network. it will be shown that, when starting to properly consider the previous dynamics, even a conservative estimate of the bitcoin network’s energy consumption per september ( ) would be around . twh annually (comparable to a country like belgium)" tip of the hat to david gerard. august , at : pm david. said... in the tether whitepaper and you, cas piancey goes through the tether "white paper" with a fine-tooth comb: "since many tether defenders are intent on making arguments about the actual promises tether has committed to (ie; “tether isn’t a cryptocurrency!” “tether doesn’t need to be fully backed!”), it felt like the right time to run through as much of the tether whitepaper as possible. hopefully, through a long-form analysis of the whitepaper (which hasn’t been updated), we can come to conclusions about what promises tether has kept, and what promises tether has broken." one guess as to how many it has kept! hat tip to david gerard. 
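de vries' estimate above can be reproduced with the standard back-of-envelope formula: network power is hash rate times fleet efficiency (joules per hash). a minimal sketch, with purely illustrative inputs (the actual hash rate and efficiency figures are assumptions, not de vries' published numbers):

```python
# Back-of-envelope network energy estimate: power = hash rate x efficiency.
# The inputs below are illustrative assumptions, not measured figures.

def annual_twh(hashrate_ehs: float, efficiency_j_per_th: float) -> float:
    """Annual energy use (TWh) for a network running at `hashrate_ehs`
    exahashes/second with fleet-average efficiency `efficiency_j_per_th`
    joules per terahash."""
    watts = hashrate_ehs * 1e6 * efficiency_j_per_th  # 1 EH/s = 1e6 TH/s
    joules_per_year = watts * 3600 * 24 * 365         # watts x seconds/year
    return joules_per_year / 3.6e15                   # 1 TWh = 3.6e15 J

# e.g. a hypothetical 100 EH/s network averaging 50 J/TH:
print(round(annual_twh(100, 50), 1))  # → 43.8 TWh/year
```

de vries' point is that the efficiency input is not static: when the coin price rises, older, less efficient hardware becomes profitable again, so the fleet-average joules per terahash (and hence the estimate) goes up with it.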
october , at : pm david. said... in ibm blockchain is a shell of its former self after revenue misses, job cuts: sources, ian allison reports that reality has dawned at ibm: "ibm has cut its blockchain team down to almost nothing, according to four people familiar with the situation. job losses at ibm escalated as the company failed to meet its revenue targets for the once-fêted technology by % this year, according to one of the sources. “ibm is doing a major reorganization,” said a source at a startup that has been interviewing former ibm blockchain staffers. “there is not really going to be a blockchain team any longer. most of the blockchain people at ibm have left.” ibm’s blockchain unit missed its revenue targets by a wide margin for two years in a row, said a second source. expectations for enterprise blockchain were too high, they said, adding that ibm “didn’t really manage to execute, despite doing a lot of announcements.” a spokesperson for ibm denied the claims." february , at : pm

blog rules: posts and comments are copyright of their respective authors who, by posting or commenting, license their work under a creative commons attribution-share alike . united states license. off-topic or unsuitable comments will be deleted.
lockss system has permission to collect, preserve, and serve this archival unit.
learning (lib)tech – stories from my life as a technologist

reflection: my third year at gitlab and becoming a non-manager leader

wow, years at gitlab. since i left teaching, because almost all my jobs were contracts, i haven’t been anywhere for more than years, so i find it interesting that my longest term is not only post-librarian-positions, but at a startup! year was full on pandemic year and it was a busy one. due to the travel restrictions, i took less vacation than previous years, and i’ll be trying to make up for that a little by taking this week off.

when i started

it’s only been years, but the company has grown considerably. when i started, there were people (with team members currently who started before me), and gitlab now has over team members. july at @gitlab is starting off with some exciting news 🎉. the gitlab team is now + strong. to put that into perspective, that is % growth since feb and % growth from yr ago 😍📈 — nadiavatalidis (@nadiavat) july , with time, i learnt what it meant to live by the gitlab values, and even though the company has grown and there are "growing pains", the company hasn’t changed a lot. i’m thankful to all the people who have had a major influence on what i think it means to "live the values" though many are no longer at gitlab.
i have to say @lkozloff who pushed me to be the best that i could be, giving me what i needed to always keep moving forward. also those who lead by example and whom i’ve learnt a lot about #gitlab values from: @sytses @_victorwu_ @remotejeremy @clemmakesapps @mvremmerden — cynthia "arty" ng (@therealarty) march , 

despite all the changes around me and the pandemic, i’ve only ever had two managers while at gitlab, which helped me settle in, grow, and influence others in turn. if you want to learn more about my first two years at gitlab, please check out my previous reflections: my first year at gitlab and becoming senior, and my second year at gitlab and on becoming senior again.

working on (o)krs

after my promotion to senior last year, i started being more involved in okrs (objectives and key results). the okrs are owned by managers and they’re accountable for them, but as managers are supposed to do, much of the work itself can be delegated to others. i was responsible for tracking progress on a joint docs okr last year, and for running the retrospective on how it went in support. i also helped revamp support onboarding by:
- merging the support agent (deprecated position) onboarding with the outdated .com engineer onboarding,
- creating a console training module,
- reorganizing onboarding content to remove redundancies, filling in missing content, and making tasks more consistent for new team members,
- creating a documentation training module,
- creating a code contribution training module,
and probably more that i’ve forgotten.

more recently, i’ve started working with my manager to come up with krs that are (not okrs for the team, but) goals for me to fulfill within a specified time span (typically a quarter). for example, as part of our efforts to "coalesce" the team so that everyone works on tickets in both products (saas and self-managed), i have an epic to ensure everyone is cross-trained.
training was considered out of scope of the areas of focus workgroup that came up with the changes that are coming, but it’s definitely a requirement for fully implementing the changes we want to make. so, i took on the epic as something i would lead this quarter. it’s a bit stalled at the moment waiting on managers, but i’m sure i’ll get it done within the timeline that the managers expect. while looking at the level of work i was doing, shortly after my re-promotion to senior, my manager suggested that we start working on the next promotion document.

on not getting promoted to staff

earlier this year, i wrote about choosing not to pursue the management track, at least for now. the main reason was that going into management means shifting your focus to your direct reports. while i believe i can be successful as a manager (having been a manager before), i know that shifting back to a technical individual contributor role might be difficult for various reasons. as a result, i began to pursue a promotion to staff support engineer. discussions on #careerdevelopment are much more open @gitlab than anywhere else i've been, largely thanks to our #transparency value. if you want a glimpse into becoming a staff #support team member, the video of my discussion with @lkozloff is public! https://t.co/x hzwabd q — cynthia "arty" ng (@therealarty) january , 

after submitting my promotion document, it got rejected. it hit me fairly hard at the time. despite being told i was doing great work, getting the promotion rejected was a blow to my confidence. i also got the news while my manager was away, so i’m grateful to izzy fee, another support manager, who stepped in to have the "hard" conversation with me. in a follow-up meeting with my senior manager, it turned out we weren’t aligned on what we believed the company’s needs for support to be. so, it wasn’t surprising that it got rejected.
partly, it was my manager’s and my own fault for not writing the document to talk about how my promotion would fill a company need. so, pro-tip: if a promotion is reliant on not just merit, but also a "company need", then talk to your manager’s manager before writing your promotion document, not after. it was a really good discussion nonetheless. we agreed that while i wouldn’t be fulfilling the biggest need that we currently have (a staff-level engineer who is an expert and can lead training and ramp-up of others for supporting kubernetes and highly-available self-managed instances), i could potentially fill a less urgent but still important need around increasing efficiency. the idea is to help answer the question of: how could we handle double the support tickets with the same number of team members we have now? i’ve been working on a number of "support efficiency" pieces, and helping my manager with some of his okrs that focus on that as well. the best example of increased efficiency, especially with low effort, is when i figured out that limiting self-serve deletion had driven up our account deletion requests. by cutting those requests down by at least half, i’ve saved the company an estimated support engineer’s salary’s worth. at the moment, i haven’t gone back to revising my promotion document. instead, i’ve been focused on moving ahead with training, onboarding and training others, and working on the project-type work. i’m also hoping that some of the work i’m doing this quarter will add to the document. the more i discuss the possible promotion to staff though, the more i wonder if i really want it. i’m happy doing what i do now, and i don’t want the time i’m not working on tickets and mentoring others to be solely focused on the company need that i’m supposed to be fulfilling. so, we’ll see. we’ve always said that there’s nothing wrong with staying an intermediate, so i simply need to remind myself that there’s nothing wrong with staying a senior.
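the "double the tickets" question above is, at bottom, a capacity calculation. a minimal sketch, with purely hypothetical numbers (the post gives no actual team or ticket figures):

```python
# Hypothetical capacity model: how much handling time is available per
# ticket for a fixed team. All numbers are invented for illustration.

def hours_per_ticket(engineers: int, tickets_per_week: int,
                     hours_per_week: float = 40.0) -> float:
    """Average engineer-hours available per ticket."""
    return engineers * hours_per_week / tickets_per_week

baseline = hours_per_ticket(engineers=50, tickets_per_week=1000)
doubled = hours_per_ticket(engineers=50, tickets_per_week=2000)
print(baseline, doubled)  # → 2.0 1.0
```

with the same headcount, doubling ticket volume halves the time available per ticket, which is why the answer has to come from efficiency work (better docs, training, self-service) rather than from simply working longer.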
becoming a non-manager leader

i believe my main accomplishment of the last year is becoming a leader (without having moved to the managerial level). at gitlab, we have a list of leadership competencies, along with how they apply at the various individual contributor levels within support. i don’t review the list regularly, so it’s not like i consider it a list where i check off the boxes to say i’m doing those things. i like to think that instead, i’ve internalized a lot of the gitlab values and became a leader through helping, mentoring, and coaching others; thinking about and acting on improvements; and most of all, leading by example. as usual, i continued to help improve training in general, such as creating self-assessment quizzes for saml training and scim training, and doing a scim deep dive session. more than that though, i take opportunities to help others identify when they can take what they’ve learnt and spread the knowledge as well. primarily, i do this by helping others make documentation and handbook contributions, and i often do documentation reviews, especially for newer team members. of course, that’s just one example. i also regularly meet with other team members to talk about how they’re doing, and particularly about career path and growth. while team members are generally talking to their managers about their career path, most of the support managers we currently have started within the last . years, so they don’t necessarily have the knowledge of learning opportunities, promotions, and career paths within gitlab that i’ve gained. for example, many at gitlab don’t know that we have an internship for learning program where you can "intern" for another team. being an individual contributor can also be a very different experience for some, because we are expected to be a "manager of one", and many aren’t used to self-leadership and, very importantly, to making their work known to their manager, especially at performance review time.
i’m sure i could go on, but i’ll move on instead.

being a leader outside of the "work"

it’s important to contribute and be a leader outside of the expected work as well, which in support means outside of answering tickets, process improvements, and contributing to the product. i strive to be one of the people who are influential and make an impact, as others have done for me, like those i mentioned near the beginning of this post (who, apart from my manager, were not part of my team). this year, i became a member of the women’s team member resource group’s learning and development subcommittee, helping to facilitate the learning pathways we’ve created. there’s no official list of members, but i started attending the meetings and help with the initiatives we decide on. while i hesitated at first since i wasn’t sure i could make it, i’ve signed up to be a contribute ambassador for our next event in september. ambassadors are volunteers who do everything they can to help the organizers make the event a success. interestingly, one of the organizers messaged me on the last day for sign-ups that she was hoping to see me apply. it was certainly great to hear. i also wanted to do a fun, more social thing for the team, so at the end of last year, i organized a "secret santa" gift exchange for the support team. again, there’s probably more, and certainly some which i started in previous years that i’ve continued, but i decided to pull out just a small number of examples.

the feedback on being a non-manager leader

while it can be hard to gauge how much of an impact you truly have, i’ve gotten bits and pieces of feedback from both individual contributors and managers in the past few months. a number of the team members have expressed their appreciation in having a non-manager’s perspective on how to prioritize work, record work done, make work visible to their manager, and move forward in learning and career development.
i often have a "manager’s perspective" because i’ve worked as one, and i do my best to apply coaching skills as well. so, i sometimes have a fairly different perspective and approach to these discussions than other individual contributors on the team. a couple of the managers have occasionally shared stories with me as well on the impact that i’ve had on their direct reports. once it was how i strategized and provided options on a possible solution to a customer problem. another time it was how welcoming i made a new team member feel. of course, the best anecdotes are the ones from the individuals themselves. i’ve had a team member say they look up to me and aspire to work like me. my manager told me he learns a lot from me. someone from another team recently said they remember how welcoming i was to them when they started (more than a year ago). the most memorable is still this one: chatting with one of the new support team members and had them tell me their decision to join #gitlab was due to my blog posts. apparently they read all of them and decided "that looks like a great place to work" < https://t.co/epdrtitn b — cynthia "arty" ng (@therealarty) august , 

external contributions and podcasts

i have always made sure to partake in some external contributions, including conference volunteering and mentorship programs. somehow this past year ended up being one for podcasts as well:
- co-guest with amy qualls from the technical writing team on the writethedocs podcast
- the journey into tech series guest speaker on another gitlab team member’s podcast, brown tech bae
- sharing knowledge and training in support engineering on the customer support leaders podcast

i hope to continue being involved in the technical writing, library, and support communities as the years go on. here’s to year !
this year’s post is not as linear of a story, but i believe that’s a result of going from a new(er) team member still developing and growing in a role to someone who is more settled into the role, picking out highlights of the year. we’ll see what year ’s reflection is like to prove or disprove my theory. if you made it to the end, thanks for reading! hope to see you again next year! your prize is a happy quokka. =d author cynthia | posted on june , july | categories: update | tags: gitlab, reflection

ubc ischool career talk series: journey from libtech to tech
the ubc ischool reached out to me recently asking me to talk about my path from getting my library degree to ending up working in a tech company. below is the script for my portion of the talk, along with a transcription of the questions i answered. continue reading “ubc ischool career talk series: journey from libtech to tech” author cynthia | posted on march | categories: events, librarianship | tags: career growth, reflection

choosing not to go into management (again)
often, to move up and get higher pay, you have to become a manager, but not everyone is suited to becoming a manager, and sometimes, given the preference, it’s not what someone wants to do. thankfully at gitlab, in every engineering team including support, we have two tracks: technical (individual contributor) and management. continue reading “choosing not to go into management (again)” author cynthia | posted on february , march | categories: work culture | tags: career growth, management, reflection

prioritization in support: tickets, slack, issues, and more
i mentioned in my gitlab reflection that prioritization has been quite different working in support compared to other previous work i’ve done.
in most of my previous work, i’ve had to take “desk shifts” but those are discrete: you’re focused on providing customer service during that period of time and you can focus on other things the rest of the time. in support, we have to constantly balance all the different work that we have, especially in helping to ensure that tickets are responded to within the service level agreement (sla). it doesn’t always happen, but i ultimately try to reach inbox (with read-only items possibly left) and gitlab to-do by the end of every week. people often ask me how i manage to do that, so hopefully this provides a bit of insight. continue reading “prioritization in support: tickets, slack, issues, and more” author cynthia | posted on december | categories: methodology | tags: productivity

reflection part : my second year at gitlab and on becoming senior again
this reflection is a direct continuation of part of my time at gitlab so far. if you haven’t, please read the first part before beginning this one. continue reading “reflection part : my second year at gitlab and on becoming senior again” author cynthia | posted on june , january | categories: update, work culture | tags: gitlab, organizational culture, reflection

reflection part : my first year at gitlab and becoming senior
about a year ago, i wrote a reflection on summit and contribute, our all staff events, and later that year, wrote a series of posts on the gitlab values and culture from my own perspective. there is a lot that i mention in the blog post series and i’ll try not to repeat myself (too much), but i realize i never wrote a general reflection at year , so i’ve decided to write about both years now but split into parts.
continue reading “reflection part : my first year at gitlab and becoming senior” author cynthia | posted on june , january | categories: update, work culture | tags: gitlab, organizational culture, reflection

is blog reading dead?
there was a bit more context to the question, but a friend recently asked me: what do you think? is blogging dead? continue reading “is blog reading dead?” author cynthia | posted on may | categories: update | tags: reflection

working remotely at home as a remote worker during a pandemic
i’m glad that i still have a job, and that my life isn’t wholly impacted by the pandemic we’re in, but to say that nothing is different just because i was already a remote worker would be wrong. the effect the pandemic is having on everyone around you affects your life. it seems obvious to me, but apparently that fact is lost on a lot of people. i’d expect that’s not the case for those who read my blog, but i thought it’d be worth reflecting on anyway. continue reading “working remotely at home as a remote worker during a pandemic” author cynthia | posted on may | categories: work culture | tags: remote

code libbc lightning talk notes: day
code libbc day lightning talk notes! continue reading “code libbc lightning talk notes: day ” author cynthia | posted on november | categories: events | tags: authentication, big data, c lbc, code, code lib, digital collections, privacy, reference, teaching

code libbc lightning talk notes: day
code libbc day lightning talk notes!
continue reading “code libbc lightning talk notes: day ” author cynthia | posted on november | categories: events | tags: c lbc, digital collections, intranet, marc, metadata, teaching

zbw labs

data donation to wikidata, part : country/subject dossiers of the th century press archives
by joachim neubert. the world's largest public newspaper clippings archive comprises lots of material of great interest, particularly for authors and readers in the wikiverse. zbw has digitized the material from the first half of the last century, and has put all available metadata under a cc license. more than that, we are donating the data to wikidata by adding or enhancing items and providing ways to access the dossiers (called "folders") and clippings easily from there. tags: pressemappe . jahrhundert, wikidata

building the swib participants map
by joachim neubert. here we describe the process of building the interactive swib participants map, created by a query to wikidata.
the map was intended to support participants of swib in making contacts in the virtual conference space. however, in compliance with gdpr, we want to avoid publishing personal details. so we chose to publish a map of institutions to which the participants are affiliated. (obvious downside: the un-affiliated participants could not be represented on the map.) we suppose that the method can be applied to other conferences and other use cases - e.g., the downloaders of scientific software or the institutions subscribed to an academic journal. therefore, we describe the process in some detail. tags: wikidata for authorities, linked data

journal map: developing an open environment for accessing and analyzing performance indicators from journals in economics
by franz osorio and timo borst. introduction: bibliometrics, scientometrics, informetrics and webometrics have been both research topics and practical guidelines for publishing, reading, citing, measuring and acquiring published research for a while (hood ). citation databases and measures had been introduced in the s, becoming benchmarks both for the publishing industry and for academic libraries managing their holdings and journal acquisitions, which tend to be more selective given a growing number of journals on the one side and budget cuts on the other.
due to the open access movement triggering a transformation of traditional publishing models (schimmer ), and in the light of both global and distributed information infrastructures for publishing and communicating on the web that have yielded more diverse practices and communities, this situation has dramatically changed: while bibliometrics of research output in its core understanding is still highly relevant to stakeholders and the scientific community, the visibility, influence and impact of scientific results have shifted to locations on the world wide web that are commonly shared and quickly accessible not only by peers, but by the general public (thelwall ). this has several implications for different stakeholders who refer to metrics in dealing with scientific results: with the rise of social networks and platforms and their use by academics and research communities, the term 'metrics' itself has gained a broader meaning: while traditional citation indexes only track citations of literature published in (other) journals, 'mentions', 'reads' and 'tweets', albeit less formal, have become indicators and measures of (scientific) impact. altmetrics has influenced research performance, evaluation and measurement, which formerly had been exclusively associated with traditional bibliometrics. scientists are becoming aware of alternative publishing channels and of both the option and the need of 'self-advertising' their output. academic libraries in particular are forced to manage their journal subscriptions and holdings in the light of increasing scientific output on the one hand and stagnating budgets on the other. while editorial products from the publishing industry are exposed to a globally competing market requiring a 'brand' strategy, altmetrics may serve as additional scattered indicators of scientific awareness and value.
against this background, we took the opportunity to collect, process and display some impact or signal data with respect to literature in economics from different sources, such as 'traditional' citation databases, journal rankings and community platforms or altmetrics indicators:
- citec: the long-standing citation service maintained by the repec community provided a dump of both working papers (as part of series) and journal articles, the latter with significant information on classic impact measures such as the impact factor ( and years) and the h-index.
- rankings of journals in economics, including the scimago journal rank (sjr) and two german journal rankings that are regularly released and updated (vhb jourqual, handelsblatt ranking).
- usage data from altmetric.com that we collected for those articles that could be identified via their digital object identifier.
- usage data from the scientific community platform and reference manager mendeley.com, in particular the number of saves or bookmarks on an individual paper.

requirements
a major consideration for this project was finding an open environment in which to implement it. finding an open platform to use served a few purposes. as a member of the "leibniz research association," zbw has a commitment to open science, and in part that means making use of open technologies to as great an extent as possible (the zbw - open scienc...). this open system should allow direct access to the underlying data so that users are able to use it for their own investigations and purposes. additionally, if possible, the user should be able to manipulate the data within the system. the first instance of the project was created in tableau, which offers a variety of means to express data and create interfaces for the user to filter and manipulate data. it also can provide a way to work with the data and create visualizations without programming skills or knowledge.
tableau is one of the most popular tools to create and deliver data visualizations, in particular within academic libraries (murphy ). however, the software is proprietary and has a monthly fee to use and maintain, and it closes off the data, making only the final visualization available to users. it was able to provide a starting point for how we wanted the data to appear to the user, but it is in no way open.

challenges
the first technical challenge was to consolidate the data from the different sources, which had varying formats and organizations. broadly speaking, the bibliometric data (citec and journal rankings) existed as a spreadsheet with multiple pages, while the altmetrics and mendeley data came from database dumps with multiple tables that were presented as several csv files. in addition to these different formats, the data needed to be cleaned and gaps filled in. the sources also had very different scopes. the altmetrics and mendeley data covered only journals; the bibliometric data, on the other hand, had more than , journals. transitioning from tableau to an open platform was a big challenge. while there are many ways to create data visualizations and present them to users, the decision was made to use r to work with the data and shiny to present it. r is used widely to work with data and to present it (kläre ). the language has lots of support for these kinds of tasks over many libraries. the primary libraries used were r plotly and r shiny. plotly is a popular library for creating interactive visualizations. without too much work plotly can provide features including information popups while hovering over a chart and on-the-fly filtering. shiny provides a framework to create a web application to present the data without requiring a lot of work to create html and css. the transition required time spent getting to know r and its libraries, to learn how to create the kinds of charts and filters that would be useful for users.
while shiny alleviates the need to create html and css, it does have a specific set of requirements and structures in order to function. the final challenge was in making this project accessible to users such that they would be able to see what we had done, have access to the data, and have an environment in which they could explore the data without needing anything other than what we were providing. in order to achieve this we used binder as the platform. at its most basic, binder makes it possible to share a jupyter notebook stored in a github repository via a url, by running the jupyter notebook remotely and providing access through a browser with no requirements placed on the user. additionally, binder is able to run a web application using r and shiny. to move from a locally running instance of r shiny to one that can run in binder, instructions for the runtime environment need to be created and added to the repository. these include information on what version of the language to use, which packages and libraries to install for the language, and any additional requirements there might be to run everything.

solutions
given the disparate sources and formats for the data, there was work that needed to be done to prepare it for visualization. the largest dataset, the bibliographic data, had several identifiers for each journal but no journal names. having the journal names is important because, in general, the names are how users will know the journals. adding the names to the data would allow users to filter on specific journals or pull up two journals for a comparison. providing the names of the journals is also a benefit for anyone who may repurpose the data and saves them from having to look them up. in order to fill this gap, we used metadata available through research papers in economics (repec). repec is an organization that seeks to "enhance the dissemination of research in economics and related sciences".
it contains metadata for more than million papers available in different formats. the bibliographic data contained repec handles, which we used to look up the journal information as xml and then parse the xml to find the title of the journal. after writing a small python script to go through the repec data and find the missing names, there were only journals whose names were still missing. for the data that originated in a mysql database, the major work that needed to be done was to correct the formatting. the data was provided as csv files, but it was not formatted such that it could be used right away. some of the fields had double quotation marks, and when the csv file was created those quotes were put into other quotation marks, resulting in doubled quotation marks which made machine parsing difficult without intervention directly on the files. the work was to go through the files and quickly remove the doubled quotation marks. in addition to that, it was useful for some visualizations to provide a condensed version of the data. the data from the database was at the article level, which is useful for some things, but could be time consuming for other actions. for example, the altmetrics data covered only journals but had almost , rows. we could use the python library pandas to go through all those rows and condense the data down to the journal level, with each column holding the sum over that journal's article-level rows. in this way, there is a dataset that can be used to easily and quickly generate summaries on the journal level. shiny applications require a specific structure and files in order to do the work of creating html without needing to write the full html and css. at its most basic, there are two main parts to the shiny application. the first defines the user interface (ui) of the page. it says what goes where, what kind of elements to include, and how things are labeled.
this section defines what the user interacts with by creating inputs and also defining the layout of the output. the second part acts as a server that handles the computations and processing of the data that will be passed on to the ui for display. the two pieces work in tandem, passing information back and forth to create a visualization based on user input. using shiny allowed almost all of the time spent on creating the project to be concentrated on processing the data and creating the visualizations. the only difficulty in creating the frontend was making sure all the pieces of the ui and server were connected correctly. binder provided a solution for hosting the application, making the data available to users, and making it shareable all in an open environment. notebooks and applications hosted with binder are shareable in part because the source is often a repository like github. by passing a github repository to binder, say one that has a jupyter notebook in it, binder will build a docker image to run the notebook and then serve the result to the user without them needing to do anything. out of the box the docker image will contain only the most basic functions. the result is that if a notebook requires a library that isn't standard, it won't be possible to run all of the code in the notebook. in order to address this, binder allows for the inclusion in a repository of certain files that can define what extra elements should be included when building the docker image. this can be very specific such as what version of the language to use and listing various libraries that should be included to ensure that the notebook can be run smoothly. binder also has support for more advanced functionality in the docker images such as creating a postgres database and loading it with data. these kinds of activities require using different hooks that binder looks for during the creation of the docker image to run scripts. 
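as a concrete sketch of what such configuration can look like for an r/shiny repository: binder (via repo2docker) looks for a `runtime.txt` to pin the language version and an `install.R` to install packages while the docker image is built. the version, snapshot date and package list below are illustrative, not the journal map's actual setup:

```
# runtime.txt -- pins the r version and a package snapshot date
r-3.6-2019-11-01

# install.R -- executed once while the docker image is built
install.packages("shiny")
install.packages("plotly")
```

because these files live in the same github repository as the application code, the repository fully describes its own runtime, and binder rebuilds the image whenever the files change.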
results and evaluation
the final product has three main sections that divide the data categorically into altmetrics, bibliometrics, and data from mendeley. there are additionally some sections that exist as areas where something new could be tried out and refined without potentially causing issues with the three previously mentioned areas. each section has visualizations that are based on the data available. considering the requirements for the project, the result goes a long way toward meeting them. the most apparent way the journal map succeeds in its goals is in presenting the data that we have collected. the application serves as a dashboard for the data that can be explored by changing filters and journal selections. by presenting the data as a dashboard, the barrier to entry for users to explore the data is low. however, there exists a way to access the data directly and perform new calculations, or create new visualizations. this can be done through the application's access to an r-studio environment. access to r-studio provides two major features. first, it gives direct access to all the underlying code that creates the dashboard and the data used by it. second, it provides an r terminal so that users can work with the data directly. in r-studio, the user can also modify the existing files and then run them from r-studio to see the results. using binder and r as the backend of the application allows us to provide users with different ways to access and work with data without any extra requirements on the part of the user. however, anything changed in r-studio won't affect the dashboard view and won't persist between sessions. changes exist only in the current session. all the major pieces of this project were able to be done using open technologies: binder to serve the application, r to write the code, and github to host all the code.
using these technologies and leveraging their capabilities allows the project to support the open science paradigm that was part of the impetus for the project. the biggest drawback to the current implementation is that binder is a third-party host, so there are certain things that are out of our control. for example, binder can be slow to load. it takes on average + minutes for the docker image to load. there's not much, if anything, we can do to speed that up. the other issue is that if there is an update to the binder source code that breaks something, then the application will be inaccessible until the issue is resolved.

outlook and future work
the application, in its current state, has parts that are not finalized. as we receive feedback, we will make changes to the application to add or change visualizations. as mentioned previously, there are a few sections that were created to test different visualizations independently of the more complete sections; those can be finalized. in the future it may be possible to move from binderhub to a locally created and administered version of binder. there is support and documentation for creating local, self-hosted instances of binder. going in that direction would give more control, and may make it possible to get the docker image to load more quickly. while the application runs stand-alone, the data that is visualized may also be integrated in other contexts. one option we are already prototyping is integrating the data into our subject portal econbiz, so users would be able to judge the scientific impact of an article in terms of both bibliometric and altmetric indicators.

references
william w. hood, concepcion s. wilson. the literature of bibliometrics, scientometrics, and informetrics. scientometrics , – springer science and business media llc, . link
r. schimmer. disrupting the subscription journals’ business model for the necessary large-scale transformation to open access. ( ).
link
mike thelwall, stefanie haustein, vincent larivière, cassidy r. sugimoto. do altmetrics work? twitter and ten other social web services. plos one , e public library of science (plos), . link
the zbw - open science future. link
sarah anne murphy. data visualization and rapid analytics: applying tableau desktop to support library decision-making. journal of web librarianship , – informa uk limited, . link
christina kläre, timo borst. statistic packages and their use in research in economics | edawax - blog of the project ’european data watch extended’. edawax - european data watch extended ( ). link

journal map - binder application for displaying and analyzing metrics data about scientific journals

integrating altmetrics into a subject repository - econstor as a use case
by wolfgang riese. back in the zbw leibniz information center for economics (zbw) teamed up with the göttingen state and university library (sub), the service center of the göttingen library federation (vzg) and gesis leibniz institute for the social sciences in the *metrics project funded by the german research foundation (dfg). the aim of the project was: “… to develop a deeper understanding of *metrics, especially in terms of their general significance and their perception amongst stakeholders.” (*metrics project about). in the practical part of the project the following dspace-based repositories of the project partners participated as data sources for online publications and – in the case of econstor – also as implementers of the presentation of the social media signals: econstor - a subject repository for economics and business studies run by the zbw, currently (aug.
) containing around , downloadable files, goescholar - the publication server of the georg-august-universität göttingen run by the sub göttingen, offering approximately , publicly browsable items so far, and ssoar - the “social science open access repository” maintained by gesis, currently containing about , publicly available items. in the work package “technology analysis for the collection and provision of *metrics”, an analysis of currently available *metrics technologies and services was performed. as stated by [wilsdon ], current suppliers of altmetrics “remain too narrow (mainly considering research products with dois)”, which leads to problems in acquiring *metrics data for repositories like econstor, whose main content is working papers, since up to now it is unusual – at least in the social sciences and economics – to create dois for these kinds of documents. only the resulting final article published in a journal will receive a doi. based on the findings in this work package, a test implementation of the *metrics crawler was built. the crawler was actively deployed from early to spring at the vzg. for the aggregation of the *metrics data, the crawler was fed with persistent identifiers and metadata from the aforementioned repositories. at this stage of the project, the project partners still had the expectation that the persistent identifiers (e.g. handles, urns, …), or their local url counterparts, as used by the repositories could be harnessed to easily identify social media mentions of their documents, e.g.
for econstor:
- handle: “hdl: /…”
- handle.net resolver url: “http(s)://hdl.handle.net/ /…”
- econstor landing page url with handle: “http(s)://www.econstor.eu/handle/ /…”
- econstor bitstream (pdf) url with handle: “http(s)://www.econstor.eu/bitstream/ /…”
integrating altmetrics data into econstor

th century press archives: data donation to wikidata
by joachim neubert. zbw is donating a large open dataset from the th century press archives to wikidata, in order to make it better accessible to various scientific disciplines such as contemporary, economic and business history, media and information science, to journalists, teachers, students, and the general public. the th century press archives (pm ) is a large public newspaper clippings archive, extracted from more than different sources published in germany and all over the world, covering roughly a full century ( - ). the clippings are organized in thematic folders about persons, companies and institutions, general subjects, and wares. during a project originally funded by the german research foundation (dfg), the material up to has been digitized. , folders with more than two million pages up to are freely accessible online. the fine-grained thematic access and the public nature of the archives make it, to our best knowledge, unique across the world (more information on wikipedia) and an essential research data fund for some of the disciplines mentioned above. the data donation does not only mean that zbw has assigned a cc license to all pm metadata, which makes it compatible with wikidata. (due to intellectual property rights, only the metadata can be licensed by zbw - all legal rights on the press articles themselves remain with their original creators.)
the donation also includes investing a substantial amount of working time (as planned, two years) devoted to the integration of this data into wikidata. here we want to share our experiences regarding the integration of the persons archive metadata. tags: linked data, open data

zbw's contribution to "coding da vinci": dossiers about persons and companies from th century press archives
by joachim neubert. on the th and th of october, the kick-off for the "kultur-hackathon" coding da vinci is held in mainz, germany, organized this time by glam institutions from the rhein-main area: "for five weeks, devoted fans of culture and hacking alike will prototype, code and design to make open cultural data come alive." new software applications are enabled by free and open data. for the first time, zbw is among the data providers. it contributes the person and company dossiers of the th century press archive. for about a hundred years, the predecessor organizations of zbw in kiel and hamburg had collected press clippings, business reports and other material about a wide range of political, economic and social topics, about persons, organizations, wares, events and general subjects. during a project funded by the german research foundation (dfg), the documents published up to (about , million pages) had been digitized and are made publicly accessible with corresponding metadata, until recently solely in the "pressemappe . jahrhundert" (pm ) web application. additionally, the dossiers - for example about mahatma gandhi or the hamburg-bremer afrika linie - can be loaded into a web viewer. as a first step to open up this unique source of data for various communities, zbw has decided to put the complete pm metadata* under a cc-zero license, which allows free reuse in all contexts.
for our coding da vinci contribution, we have prepared all person and company dossiers which already contain documents. the dossiers are interlinked among each other. controlled vocabularies (for, e.g., "country" or "field of activity") provide multi-dimensional access to the data. most of the persons and a good share of the organizations were linked to gnd identifiers. as a starter, we had mapped dossiers to wikidata according to existing gnd ids. that makes it possible to run queries for pm dossiers completely on wikidata, making use of all the good stuff there. an example query shows the birth places of pm economists on a map, enriched with images from wikimedia commons. the initial mapping was much extended by fantastic semi-automatic and manual mapping efforts by the wikidata community. so currently more than % of the dossiers about - often rather prominent - pm persons are linked not only to wikidata, but also connected to wikipedia pages. that offers great opportunities for mash-ups with further data sources, and we are looking forward to what the "coding da vinci" crowd may make out of these opportunities. technically, the data has been converted from an internal intermediate format to still quite experimental rdf and loaded into a sparql endpoint. there it was enriched with data from wikidata and extracted with a construct query. we have decided to transform it to json-ld for publication (following practices recommended by our hbz colleagues). so developers can use the data as "plain old json", with the plethora of web tools available for this, while linked data enthusiasts can utilize sophisticated semantic web tools by applying the provided json-ld context. in order to make the dataset discoverable and reusable for future research, we published it persistently at zenodo.org. with it, we provide examples and data documentation. a github repository gives you additional code examples and a way to address issues and suggestions.
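the "plain old json" idea can be illustrated with a short python sketch. the record below is made up for illustration - the actual pm field names and context are documented in the zenodo dataset - but it shows how a json-ld document is still ordinary json: linked-data tools interpret the "@context" key, while everyone else can simply ignore it.

```python
import json

# a minimal, hypothetical json-ld record for a dossier.
# the "@context" maps plain keys to rdf property uris.
doc = """
{
  "@context": {
    "name": "http://schema.org/name",
    "country": "http://purl.org/dc/terms/coverage"
  },
  "@id": "http://example.org/folder/123",
  "name": "mahatma gandhi",
  "country": "india"
}
"""

# "plain old json" access: parse it and use the keys directly,
# without any semantic web tooling.
record = json.loads(doc)
print(record["name"])     # -> mahatma gandhi
print(record["country"])  # -> india
```

a linked data consumer would instead feed the same document, with its "@context", to a json-ld processor and obtain proper rdf triples; both audiences are served by one file.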
* For the scanned documents, legal regulations apply; ZBW cannot assign licenses here.

Wikidata as authority linking hub: connecting RePEc and GND researcher identifiers, by Joachim Neubert. In the EconBiz portal for publications in economics, we have data from different sources. In some of these sources, most notably ZBW's ECONIS bibliographic database, authors are disambiguated by identifiers of the Integrated Authority File (GND), in total more than , . Data stemming from Research Papers in Economics (RePEc) contains another identifier: RePEc authors can register themselves in the RePEc Author Service (RAS) and claim their papers. This data is used for various rankings of authors and, indirectly, of institutions in economics, which provides a big incentive for authors (about , have signed into RAS) to keep both their article claims and personal data up to date. While GND is well known and linked to many other authorities, RAS had no links to any other researcher identifier system. Thus, until recently, the author identifiers were disconnected, which precluded the possibility of displaying all publications of an author on a portal page. To overcome that limitation, colleagues at ZBW matched a good , authors with RAS and GND IDs by their publications (see details here). Making that pre-existing mapping maintainable and extensible, however, would have meant setting up some custom editing interface, would have required storage and operating resources, and wouldn't easily have been made publicly accessible. In a previous article, we described the opportunities offered by Wikidata. Now we have made use of it.
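The "linking hub" idea is simple: once a Wikidata item carries both a RAS ID and a GND ID, the two identifier systems become connected through that item, with no custom mapping infrastructure to maintain. A toy sketch of deriving a RAS-to-GND mapping from such items (the QIDs and identifier values below are invented, not real Wikidata data):

```ruby
# Each hash stands for a Wikidata item with (possibly partial) authority
# identifiers attached via "authority control" properties. All values
# here are hypothetical examples.
items = [
  { qid: "Q1", ras: "pab1", gnd: "118500001" },
  { qid: "Q2", ras: nil,    gnd: "118500002" },  # no RAS ID: cannot link
]

# Items carrying BOTH identifiers act as the hub between the two systems.
ras_to_gnd = items
  .select { |i| i[:ras] && i[:gnd] }
  .to_h   { |i| [i[:ras], i[:gnd]] }

puts ras_to_gnd.inspect
```

In practice such a mapping would be obtained with a SPARQL query against the Wikidata endpoint rather than from an in-memory array, but the join logic is the same.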
New version of multi-lingual JEL classification published in LOD, by Joachim Neubert. The Journal of Economic Literature classification scheme (JEL) was created and is maintained by the American Economic Association. The AEA provides this widely used resource freely for scholarly purposes. Thanks to André Davids (KU Leuven), who has translated the originally English-only labels of the classification into French, Spanish and German, we provide a multi-lingual version of JEL. Its latest version is published as RDFa and as RDF download files. These formats and translations are provided "as is" and are not authorized by the AEA. In order to make changes in JEL more easily traceable, we have created lists of inserted and removed JEL classes in the context of the skos-history project.

Economists in Wikidata: opportunities of authority linking, by Joachim Neubert. Wikidata is a large database which connects all of the various Wikipedia projects. Besides interlinking all Wikipedia pages in different languages about a specific item (e.g., a person), it also connects to many different sources of authority information. The linking is achieved by an "authority control" class of Wikidata properties. The values of these properties are identifiers which unambiguously identify the Wikidata item in external, web-accessible databases. Each property definition includes a URI pattern (called a "formatter URL"). When the identifier value is inserted into the URI pattern, the resulting URI can be used to look up the authority entry.
The resulting URI may point to a Linked Data resource, as is the case with the GND ID property. This, on the one hand, provides a light-weight and robust mechanism to create links in the web of data. On the other hand, these links can be exploited by every application driven by one of the authorities to provide additional data: links to Wikipedia pages in multiple languages, images, life dates, nationality and affiliations of the persons concerned, and much more. (Wikidata item for the Indian economist Bina Agarwal, visualized via the SQID browser.)

Integrating a research data repository with established research practices, by Timo Borst and Konstantin Ott. In recent years, repositories for managing research data have emerged which are supposed to help researchers to upload, describe, distribute and share their data. To promote and foster the distribution of research data in the light of paradigms like open science and open access, these repositories are normally implemented and hosted as stand-alone applications, meaning that they offer a web interface for manually uploading the data, and a presentation interface for browsing, searching and accessing the data. Sometimes the first component (the interface for uploading the data) is substituted or complemented by a submission interface from another application; e.g., in Dataverse or in CKAN, data is submitted from remote third-party applications by means of data deposit APIs.
However the upload of data is organized, and however it is eventually embedded in a publishing framework (data either as a supplement to a journal article, or as a stand-alone research output subject to review and release as part of a "data journal"), it definitely means that this data is supposed to be made publicly available, which is often reflected by policies and guidelines for data deposit.

DSHR's Blog: Mempool Flooding. I'm David Rosenthal, and this is a place to discuss the work I'm doing in digital preservation. Tuesday, June: Mempool Flooding. In Unstoppable Code? I discussed Joe Kelly's suggestion for how governments might make it impossible to transact Bitcoin by mounting a 51% attack using seized mining rigs. That's not the only way to achieve the same result, so below the fold I discuss an alternative approach that could be used alone or in combination with Kelly's concept.

The lifecycle of the transaction. The goal is to prevent transactions in a cryptocurrency based on a permissionless blockchain. We need to understand how transactions are supposed to work in order to establish their attack surface. Transactions transfer cryptocurrency between inputs and outputs identified by public keys (or, typically, hashes of the keys). The input creates a proposed transaction specifying the amount for each output, and a miner fee, then signs it with their private key.
The proposed transaction is broadcast to the mining pools, typically by what amounts to a gossip protocol. Mining pools validate the proposed transactions they receive and add them to a database of proposed transactions, typically called the "mempool". When a mining pool starts trying to mine a block, it chooses some of the transactions from its mempool to include in it. Typically, it chooses transactions (or sets of dependent transactions) that yield the highest fee should its block win. Once a transaction is included in a winning block, or more realistically in a sequence of winning blocks, it is final.

Attack surface. My analysis of the transaction lifecycle's attack surface may well not be complete, but here goes: The security of funds before a proposed transaction depends upon the private key remaining secret. The DarkSide ransomware group lost part of their takings from the Colonial Pipeline compromise because the FBI knew the private key of one of their wallets. The gossip protocol makes proposed transactions public. A public "order book" is necessary because the whole point of a permissionless blockchain is to avoid the need for trust between participants. This leads to the endemic front-running I discussed in The Order Flow, and which Naz automated (see How To Front-Run In Ethereum). The gossip protocol is identifiable traffic, which ISPs could be required to block. The limited block size and fixed block time limit the rate at which transactions can leave the mempool; thus when transaction demand exceeds this rate, the mempool will grow. Mining pools have limited storage for their mempools. When the limit is reached, mining pools will drop less-profitable transactions from their mempools. Like any network service backed by a limited resource, the mempool is vulnerable to a distributed denial of service (DDoS) attack. Each mining pool is free to choose transactions to include in the blocks it tries to mine at will.
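The selection step described above, miners filling a block with the most profitable transactions from their mempool, can be sketched as follows. The transactions, fees and the block capacity are invented toy values; real miners rank by fee per byte and must respect dependencies between transactions.

```ruby
# A transaction with an id and a fee (toy units).
Tx = Struct.new(:id, :fee)

mempool = [
  Tx.new("a", 5),
  Tx.new("b", 50),
  Tx.new("c", 20),
  Tx.new("d", 1),
]
block_capacity = 2  # hypothetical: room for two transactions per block

# Greedy selection: take the highest-fee transactions first.
chosen = mempool.sort_by { |tx| -tx.fee }.first(block_capacity)
puts chosen.map(&:id).join(",")
```

This greedy preference for high fees is exactly what the flooding attack later in the post exploits: transactions that outbid the victims' fees get chosen, and the rest wait or are dropped.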
Thus a transaction need not appear in the mempool to be included in a block. For example, mining pools' own transactions, or those of their friends, could avoid the mempool, the equivalent of "dark pools" in equity markets. Once a transaction is included in a mined block, it is vulnerable to a 51% attack.

Flooding the mempool. Let's focus on the idea of DDoS-ing the mempool. As John Lewis of the Bank of England wrote in The Seven Deadly Paradoxes of Cryptocurrency: "Bitcoin has an estimated maximum of transactions per second vs , for Visa. More transactions competing to get processed creates logjams and delays. Transaction fees have to rise in order to eliminate the excess demand. So bitcoin's high transaction cost problem gets worse, not better, as transaction demand expands." Worse, pending transactions are in a blind auction to be included in the next block. Because users don't know how much to bid to be included, they either overpay, suffer a long delay, or possibly fail completely. The graph shows this effect in practice: as the price of Bitcoin crashed on May th and HODL-ers rushed to sell, the average fee per transaction spiked to over $ . The goal of the attack is to make victims' transactions rare, slow and extremely expensive by flooding the mempool with attackers' transactions. Cryptocurrencies have no intrinsic value; their value is determined by what the greater fool will pay. If HODL-ers find it difficult and expensive to unload their holdings, and traders find it difficult and expensive to trade, the "price" of the currency will decrease. This attack isn't theoretical; it has already been tried. For example, in June, Bitcoin Exchange Guide reported: "What appears to be happening is a bunch (possibly super spam) of satoshi transactions (smallest unit in bitcoin) which will put a decent stress test if sustained. Some are saying near , spam transactions and counting." This is obviously not an effective attack.
There is no incentive for the mining pools to prefer tiny unprofitable transactions over normal user transactions. Unless it were combined with a 51% attack, an effective flooding attack needs to incentivize mining pools who are not part of the attack to prefer the attackers' transactions to those of victims. The only way to do this is to make the attackers' transactions more profitable, which means they have to come with large fees. If a major government wanted to mount a flooding attack on, for example, Bitcoin, it would need a lot of Bitcoin as ammunition. Fortunately, at least the US government has seized hundreds of millions of dollars of cryptocurrencies: "Mr. Raimondi of the Justice Department said the Colonial Pipeline ransom seizure was the latest sting operation by federal prosecutors to recoup illicitly gained cryptocurrency. He said the department has made 'many seizures, in the hundreds of millions of dollars, from unhosted cryptocurrency wallets' used for criminal activity." If it needed more, it could always hack one of the numerous vulnerable exchanges. With this ammunition the government could generate huge numbers of one-time addresses and huge numbers of valid transactions among them, with fees large enough to give them priority. The result would be to bid up the Bitcoin fee necessary for victim transactions to get included in blocks. It would be hard for mining pools to identify the attackers' transactions, as they would be valid and between unidentifiable addresses. As the attack continued, this would ensure that: the minimum size of economically feasible transactions would increase, restricting trading to larger and larger HODL-ers, or to exchanges; the visible fact that Bitcoin was under sustained, powerful attack would cause HODL-ers to sell for fiat or other cryptocurrencies; and this would depress the "price" of Bitcoin, as the exchanges would understand the risk that the attack would continue and further depress the price.
Mining pools, despite receiving their normal rewards plus increased fees in Bitcoin, would suffer a reduction of their income in fiat terms. Further, the mining pools need transactions to convert their rewards and fees to fiat to pay for power, etc. With transactions scarce and expensive, and reduced fiat income, the hash rate would decline, making a 51% attack easier.

How feasible are flooding attacks? Back on May th, as the Bitcoin "price" crashed to $ K, its blockchain was congested and average fees spiked to $ . Clearly, the distribution of fees would have been very skewed, with a few fees well above $ and most well below; the median fee was around $ . Fees are measured in satoshi, 10⁻⁸ of a BTC, so the average fee at that time corresponds to a large multiple of satoshi. Let's assume that ensuring no transaction with a smaller fee succeeds is enough to keep the blockchain congested. Let's also assume that when the Feds claim to have seized "hundreds of millions of dollars" of cryptocurrencies, that sum, converted to satoshi, would be enough to pay such a fee for a very large number of transactions. At Bitcoin's maximum transaction rate, that would keep the blockchain congested for a period measured in months. In practice, the attack would last much longer, since the attackers could dynamically adjust the fees they paid to keep the blockchain congested as, inevitably, the demand for transactions from victims declined as they realised it was futile. Ensuring that almost no victim transactions succeeded for months would definitely greatly reduce the BTC "price". Thus the , BTC the mining pools would have earned in fees, plus the , BTC they would have earned in block rewards during that time, would be worth much less than the $ . B they would represent at a $ K "price". Funding the mining pools is a downside of this attack, but the increment is only about % in BTC terms, so it is likely to be swamped by the decrease in fiat terms.
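The feasibility argument above is a budget calculation: attack duration equals seized funds divided by the fee needed per transaction, divided by the chain's transaction rate. The post's own figures did not survive extraction, so the numbers below are purely hypothetical stand-ins used only to show the shape of the arithmetic.

```ruby
# Hypothetical inputs (NOT the post's figures):
budget_usd     = 300_000_000.0  # assumed seized cryptocurrency, in USD
fee_per_tx_usd = 60.0           # assumed fee needed to crowd out victims
tx_per_second  = 7.0            # often-quoted rough Bitcoin throughput

# Duration = (budget / fee per tx) / transaction rate.
transactions_funded = budget_usd / fee_per_tx_usd
duration_seconds    = transactions_funded / tx_per_second
duration_days       = duration_seconds / 86_400.0

puts format("attack funds ~%d transactions, ~%.1f days of congestion",
            transactions_funded, duration_days)
```

As the post notes, the real attack would run longer than this naive figure, because the attacker can throttle fees downward as victim demand collapses.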
Potential defenses. Blockchain advocates argue that one of the benefits of the decentralization they claim for the technology is "censorship resistance". This is a problem for them, because defending against a mempool flooding attack requires censorship: the mining pools need to identify and censor (i.e. drop) the attackers' transactions. Fortunately for the advocates, the technology is not actually decentralized (a handful of mining pools have dominated the hash rate for years), so it does not actually provide "censorship resistance". The pools could easily conspire to implement the necessary censorship. Unfortunately for the advocates, the attackers would be flooding with valid transactions offering large fees, so the pools would find it hard, and would not be motivated, to selectively drop them. , BTC is only about half of Tesla's holdings, so it would be possible for a whale, or a group of whales, to attempt to raise the cost of the attack, or equivalently reduce its duration, by mounting a simultaneous flood themselves. The attackers would respond by reducing their flood, since the whales would be doing their job for them. This would be expensive for the whales and wouldn't be an effective defense. Since it is possible for mining pools to include transactions in blocks they mine without them ever appearing in the mempool, and the attack would render the mempool effectively useless, one result of the attack would be to force exchanges and whales to establish "dark pool"-type direct connections to the mining pools, allowing the pools to ignore the mempool and process transactions only from trusted addresses. This would destroy the "decentralized" myth, completing the transition of the blockchain into a permissioned one run by the pools, and make legal attacks on the exchanges an effective weapon. Also, the mining pools would be vulnerable to government-controlled "trojan horse" exchanges, as the bad guys were to ANOM encrypted messaging.
Conclusion. If my analysis is correct, it would be feasible for a major government to mount a mempool flooding attack that would seriously disrupt, but not totally destroy, Bitcoin and, by extension, other cryptocurrencies. The attack would amplify the effect of using seized mining power as I discussed in Unstoppable Code? Interestingly, the mempool flooding attack is effective irrespective of the consensus mechanism underlying the cryptocurrency; it depends only upon a public means of submitting transactions. Posted by David. Labels: bitcoin.
Bitcoin price today, BTC live marketcap, chart, and info | CoinMarketCap
What is Bitcoin (BTC)? Bitcoin is a decentralized cryptocurrency originally described in a whitepaper by a person, or group of people, using the alias Satoshi Nakamoto. It was launched soon after, in January 2009.
Bitcoin is a peer-to-peer online currency, meaning that all transactions happen directly between equal, independent network participants, without the need for any intermediary to permit or facilitate them. Bitcoin was created, according to Nakamoto's own words, to allow "online payments to be sent directly from one party to another without going through a financial institution." Some concepts for a similar type of decentralized electronic currency precede BTC, but Bitcoin holds the distinction of being the first-ever cryptocurrency to come into actual use.

Who are the founders of Bitcoin? Bitcoin's original inventor is known under the pseudonym Satoshi Nakamoto. The true identity of the person (or organization) behind the alias remains unknown. On October 31, 2008, Nakamoto published Bitcoin's whitepaper, which described in detail how a peer-to-peer online currency could be implemented. They proposed to use a decentralized ledger of transactions packaged in batches (called "blocks") and secured by cryptographic algorithms; the whole system would later be dubbed "blockchain." Just two months later, on January 3, 2009, Nakamoto mined the first block on the Bitcoin network, known as the genesis block, thus launching the world's first cryptocurrency. However, while Nakamoto was the original inventor of Bitcoin, as well as the author of its very first implementation, over the years a large number of people have contributed to improving the cryptocurrency's software by patching vulnerabilities and adding new features. Bitcoin's source code repository on GitHub lists hundreds of contributors, with some of the key ones being Wladimir J. van der Laan, Marco Falke, Pieter Wuille, Gavin Andresen, Jonas Schnelli and others.

What makes Bitcoin unique? Bitcoin's most unique advantage comes from the fact that it was the very first cryptocurrency to appear on the market.
It has managed to create a global community and give birth to an entirely new industry of millions of enthusiasts who create, invest in, trade and use Bitcoin and other cryptocurrencies in their everyday lives. The emergence of the first cryptocurrency created a conceptual and technological basis that subsequently inspired the development of thousands of competing projects. The entire cryptocurrency market, now worth hundreds of billions of dollars, is based on the idea realized by Bitcoin: money that can be sent and received by anyone, anywhere in the world, without reliance on trusted intermediaries such as banks and financial services companies. Thanks to its pioneering nature, BTC remains at the top of this energetic market after over a decade of existence. Even after Bitcoin lost its undisputed dominance, it remains the largest cryptocurrency, owing in large part to the ubiquitousness of platforms that provide use-cases for BTC: wallets, exchanges, payment services, online games and more. Related pages: looking for market and blockchain data for BTC? Visit our block explorer. Want to buy Bitcoin? Use CoinMarketCap's guide. Should you buy Bitcoin with PayPal? What is Wrapped Bitcoin? Will Bitcoin volatility ever reduce? How to use a Bitcoin ATM.

How much Bitcoin is in circulation? Bitcoin's total supply is limited by its software and will never exceed 21,000,000 coins. New coins are created during the process known as "mining": as transactions are relayed across the network, they get picked up by miners and packaged into blocks, which are in turn protected by complex cryptographic calculations. As compensation for spending their computational resources, the miners receive rewards for every block that they successfully add to the blockchain.
at the moment of bitcoin’s launch, the reward was bitcoins per block: this number gets halved with every , new blocks mined — which takes the network roughly four years. as of , the block reward has been halved three times and comprises . bitcoins. bitcoin has not been premined, meaning that no coins have been mined and/or distributed between the founders before it became available to the public. however, during the first few years of btc’s existence, the competition between miners was relatively low, allowing the earliest network participants to accumulate significant amounts of coins via regular mining: satoshi nakamoto alone is believed to own over a million bitcoin. mining bitcoins can be very profitable for miners, depending on the current hash rate and the price of bitcoin. while the process of mining bitcoins is complex, we discuss how long it takes to mine one bitcoin on cmc alexandria — as we wrote above, mining bitcoin is best understood as how long it takes to mine one block, as opposed to one bitcoin. how is the bitcoin network secured? bitcoin is secured with the sha- algorithm, which belongs to the sha- family of hashing algorithms, which is also used by its fork bitcoin cash (bch), as well as several other cryptocurrencies. what is bitcoin’s role as a store of value? bitcoin is the first decentralized, peer-to-peer digital currency. one of its most important functions is that it is used as a decentralized store of value. in other words, it provides for ownership rights as a physical asset or as a unit of account. however, the latter store-of-value function has been debated. many crypto enthusiasts and economists believe that high-scale adoption of the top currency will lead us to a new modern financial world where transaction amounts will be denominated in smaller units. the top crypto is considered a store of value, like gold, for many — rather than a currency. 
This idea of the first cryptocurrency as a store of value, instead of a payment method, means that many people buy the crypto and hold onto it long-term (or HODL) rather than spending it on items like you would typically spend a dollar, treating it as digital gold.

Crypto wallets. Cryptocurrency wallets divide into hot wallets and cold wallets. Hot wallets can be connected to the web, while cold wallets are used for keeping large amounts of coins offline. Some of the top crypto cold wallets are Trezor, Ledger and CoolBitX; some of the top crypto hot wallets include Exodus, Electrum and Mycelium.

How is Bitcoin's technology upgraded? A hard fork is a radical change to the protocol that makes previously invalid blocks/transactions valid, and therefore requires all users to upgrade. For example, if users A and B are disagreeing on whether an incoming transaction is valid, a hard fork could make the transaction valid to users A and B, but not to user C. A hard fork is a protocol upgrade that is not backward compatible. This means every node (a computer connected to the Bitcoin network using a client that performs the task of validating and relaying transactions) needs to upgrade before the new blockchain with the hard fork activates and rejects any blocks or transactions from the old blockchain. The old blockchain will continue to exist and will continue to accept transactions, although it may be incompatible with newer Bitcoin clients. A soft fork is a change to the Bitcoin protocol wherein only previously valid blocks/transactions are made invalid. Since old nodes will recognise the new blocks as valid, a soft fork is backward-compatible. This kind of fork requires only a majority of the miners upgrading to enforce the new rules.
Some examples of prominent cryptocurrencies that have undergone hard forks are the following: Bitcoin's hard fork that resulted in Bitcoin Cash, and Ethereum's hard fork that resulted in Ethereum Classic. Bitcoin Cash has itself been hard forked since its original forking, with the creation of Bitcoin SV (see https://coinmarketcap.com/alexandria/article/bitcoin-vs-bitcoin-cash-vs-bitcoin-sv).

What is the Lightning Network? The Lightning Network is an off-chain, layered payment protocol that operates bidirectional payment channels which allow instantaneous transfers with instant reconciliation. It enables private, high-volume and trustless transactions between any two parties. The Lightning Network scales transaction capacity without incurring the costs associated with transactions and interventions on the underlying blockchain.

How much is Bitcoin? The current valuation of Bitcoin is constantly moving, all day every day. It is a truly global asset. From a start of under one cent per coin, BTC has risen in price by thousands of percent. The prices of all cryptocurrencies are quite volatile, meaning that anyone's understanding of how much Bitcoin is worth will change by the minute. However, there are times when different countries and exchanges show different prices, so how much Bitcoin costs is also a function of a person's location.

Where can you buy Bitcoin (BTC)? Bitcoin is, in many regards, almost synonymous with cryptocurrency, which means that you can buy Bitcoin on virtually every crypto exchange, both for fiat money and other cryptocurrencies. Some of the main markets where BTC trading is available are: Binance, Coinbase Pro, OKEx, Kraken, Huobi Global and Bitfinex. If you are new to crypto, use CoinMarketCap's own easy guide to buying Bitcoin.
how regulations may affect the mining industry: a data perspective by intotheblock; who owns bitcoin? what's russia up to? a weekly russian crypto news recap.

rails adds activesupport::parameterfilter | saeloun blog
by romil mehta

there are cases when we do not want sensitive data like passwords, card details etc. in log files. rails provides filter_parameters to achieve this.
for example, if we have to filter the secret_code of a user, then we need to set filter_parameters in application.rb as below:

 config.filter_parameters += ["secret_code"]

after sending a request to the server, our request parameters will look like this:

 parameters: {"authenticity_token"=>"zkeyrytddqybjghm+zzicqvrku/ketthikmhsfq/ mq/egmijkelhypgvvabag or+fn ta qk prozdotaka==", "user"=>{"first_name"=>"first name", "last_name"=>"last name", "email"=>"abc@gmail.com", "password"=>"[filtered]", "password_confirmation"=>"[filtered]", "secret_code"=>"[filtered]"}, "commit"=>"create user"}

now if we do user.last then:

 > user.last #=> #

we can see that the secret_code of the user is not filtered and is visible. rails has moved parameterfilter from actiondispatch to activesupport to solve this security problem. in rails:

 > user.last #=> #

now we can see that secret_code is filtered. instead of defining filter_parameters, we can also define attributes as filter_attributes:

 > user.filter_attributes = [:secret_code, :password] #=> [:secret_code, :password]
 > user.last #=> #

rails also added support for filter_attributes or filter_parameters in regex or proc form:

 > user.filter_attributes = [/name/, :secret_code, :password] #=> [/name/, :secret_code, :password]
 > user.last #=> #
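as an aside, the masking behaviour itself is easy to picture. the following is an illustrative re-implementation in plain ruby, not the actual activesupport::parameterfilter code; FILTERS and filter_params are invented names:

```ruby
# Illustrative re-implementation of parameter filtering (NOT Rails'
# actual ActiveSupport::ParameterFilter): keys matching a string or
# regexp filter are replaced with "[FILTERED]".
FILTERS = [/password/, "secret_code"]

def filter_params(params)
  params.map do |key, value|
    masked = FILTERS.any? do |f|
      f.is_a?(Regexp) ? key.to_s.match?(f) : key.to_s == f
    end
    [key, masked ? "[FILTERED]" : value]
  end.to_h
end

filtered = filter_params(
  "email"                 => "abc@gmail.com",
  "password"              => "hunter2",
  "password_confirmation" => "hunter2",
  "secret_code"           => "1234"
)
puts filtered
# email is kept; password, password_confirmation and secret_code are masked
```

note how the regexp filter /password/ also catches password_confirmation, which matches the behaviour shown in the logged parameters above.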
    jakoblog - das weblog von jakob voß (http://jakoblog.de/feed/atom/)

    jakob <![cdata[data models age like parents]]> http://jakoblog.de/?p= denny vrandečić, employed as an ontologist at google, noticed that all six of six linked data applications linked to years ago (iwb, tabulator, disko, marbles, rdfbrowser, and zitgist) have disappeared or changed their calling syntax. this reminded me of a proverb about software and data:

    software ages like fish, data ages like wine.


    the original form of this saying seems to come from james governor (@monkchips), who derived it from an earlier phrase:

    hardware is like fish, operating systems are like wine.

    the analogy of fishy applications and delightful data has been repeated and explained and criticized several times. i fully agree with the part about software rot but i doubt that data actually ages like wine (i'd prefer whisky anyway). a more accurate simile may be "data ages like things you put into your crowded cellar and then forget about".

    thinking a lot about data i found that data is less interesting than the structures and rules that shape and restrict data: data models, ontologies, schemas, forms etc. how do they age compared with software and data? i soon realized:

    data models age like parents.

    first they guide you, give good advice, and support you as best they can. but at some point data begins to rebel against its models. sooner or later parents become uncool, disconnected from current trends, outdated or even embarrassing. eventually you have to accept their quaint peculiarities and live your own life. that's how standards proliferate. both ontologies and parents ultimately become weaker and need support. and in the end you have to let them go, sadly looking back.

    (the analogy could be extended further; for instance, data models might be frustrated when confronted by how actual data compares to their ideals, but that's another story)

    ]]>
    jakob <![cdata[wikidata documentation on the hackathon in vienna]]> http://jakoblog.de/?p= at the wikimedia hackathon, a couple of volunteers sat together to work on the help pages of wikidata. as part of that wikidata documentation sprint, ziko and i took a look at the wikidata glossary. we identified several shortcomings and made a list of rules for how the glossary should look. the result is the glossary guidelines. where the old glossary partly replicated wikidata:introduction, the new version aims to allow quick lookup of concepts. we have already rewritten some entries of the glossary according to these guidelines, but several entries are outdated and still need to be improved. we changed the structure of the glossary into a sortable table so it can be displayed as an alphabetical list in all languages. the entries can still be translated with the translation system (it took some time to get familiar with this feature).

    we also created some missing help pages such as help:wikimedia and help:wikibase to explain general concepts with regard to wikidata. some of these concepts are already explained elsewhere but wikidata needs at least short introductions especially written for wikidata users.

    image taken by andrew lih (cc-by-sa)

    ]]>
    jakob <![cdata[introduction to phabricator at wikimedia hackathon]]> http://jakoblog.de/?p= this weekend i am participating in the wikimedia hackathon in vienna. i mostly contribute to wikidata-related events and practice the phrase "long time no see", but i also look into some introductory talks.

    in the late afternoon of day one i attended an introduction to the phabricator project management tool given by andré klapper. phabricator was introduced at the wikimedia foundation about three years ago to replace and unify bugzilla and several other management tools.

    phabricator is much more than an issue tracker for software projects (although it is mainly used for this purpose by wikimedia developers). in summary there are tasks, projects, and teams. tasks can be tagged, assigned, followed, discussed, and organized with milestones and workboards. the latter are kanban boards like those i know from trello, waffle, and github project boards.

    phabricator is open source so you can self-host it and add your own user management without having to pay for each new user and feature (i am looking at you, jira). internally i would like to use phabricator, but for fully open projects i don't see enough benefit compared to using github.

    p.s.: wikimedia hackathon is also organized with phabricator. there is also a task for blogging about the event.

    ]]>
    jakob <![cdata[some thoughts on iiif and metadata]]> http://jakoblog.de/?p= yesterday at the dini ag kim workshop, martin baumgartner and stefanie rühle gave an introduction to the international image interoperability framework (iiif) with a focus on metadata. i already knew that iiif is a great technology for providing access to (especially large) images, but i had not had a detailed look yet. the main part of iiif is its image api and i hope that all major media repositories (i am looking at you, wikimedia commons) will implement it. in addition the iiif community has defined a "presentation api", a "search api", and an "authentication api". i understand the need for such additional apis within the iiif community, but i doubt that solving the underlying problems with their own standards (instead of reusing existing standards) is the right way to go. standards should better "do one thing and do it well" (unix philosophy). if images are the "one thing" of iiif, then search and authentication are a different matter.

    in the workshop we only looked at parts of the presentation api to see where metadata (creator, dates, places, provenance etc. and structural metadata such as lists and hierarchies) could be integrated into iiif. such metadata is already expressed in many other formats such as mets/mods and tei, so the question is not whether to use iiif or other metadata standards but how to connect iiif with existing metadata standards. a quick look at the presentation api surprised me: the metadata element is explicitly not intended for additional metadata but only "to be displayed to the user". the element contains an ordered list of key-value pairs that "might be used to convey the author of the work, information about its creation, a brief physical description, or ownership information, amongst other use cases". at the same time the standard emphasizes that "there are no semantics conveyed by this information". hello, mcfly? without semantics conveyed it isn't information! in particular there is no such thing as structured data (e.g. a list of key-value pairs) without semantics.

    i think the design of the metadata field in iiif is based on a common misconception about the nature of (meta)data, which i have already written about elsewhere (sorry, german article; some background is in my phd thesis and in work by ballsun-stanton).

    in a short discussion on twitter, rob sanderson (getty) pointed out that the data format of the iiif presentation api to describe intellectual works (called a manifest) is expressed in json-ld, so it can be extended by other rdf statements. for instance the field "license" is already defined with dcterms:rights. adding a field "author" for dcterms:creator only requires defining this field in the json-ld @context of a manifest. after some experimenting i found a possible way to connect the "meaningless" metadata field with json-ld fields:

     {
     "@context": [
     "http://iiif.io/api/presentation/ /context.json",
     { 
     "author": "http://purl.org/dc/terms/creator",
     "bibo": "http://purl.org/ontology/bibo/"
     }
     ],
     "@id": "http://example.org/iiif/book /manifest",
     "@type": ["sc:manifest", "bibo:book"],
     "metadata": [
     {
     "label": "author",
     "property": "http://purl.org/dc/terms/creator",
     "value": "allen smithee"
     },
     { 
     "label": "license",
     "property": "http://purl.org/dc/terms/license", 
     "value": "cc-by . " 
     }
     ],
     "license": "http://creativecommons.org/licenses/by/ . /",
     "author": {
     "@id": "http://www.wikidata.org/entity/q ",
     "label": "allen smithee"
     }
     }
     

    this solution requires an additional element property in the iiif specification to connect a metadata field with its meaning. iiif applications could then enrich the display of metadata fields, for instance with links or additional translations. in json-ld some names such as "cc-by . " and "allen smithee" need to be given twice, but this is ok because normal names (in contrast to field names such as "author" and "license") don't have semantics.
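to illustrate how a consumer could use such a property key, here is a small ruby sketch; the manifest fragment is abridged from the example above, and the property key follows my proposal rather than the official presentation api:

```ruby
require "json"

# Sketch: collect machine-readable statements from a IIIF-style
# metadata array that carries the proposed "property" key (this key
# is the proposal above, not part of the official Presentation API).
manifest = JSON.parse(<<~MANIFEST)
  {
    "metadata": [
      { "label": "author",
        "property": "http://purl.org/dc/terms/creator",
        "value": "allen smithee" },
      { "label": "note",
        "value": "display-only entry without a property" }
    ]
  }
MANIFEST

statements = manifest["metadata"]
  .select { |entry| entry["property"] }
  .map    { |entry| [entry["property"], entry["value"]] }
  .to_h

# Entries without "property" stay display-only and are skipped.
puts statements
```

an application could then look up each property uri to render links or translated labels, while plain display-only pairs keep working as before.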

    ]]>
    jakob <![cdata[abbreviated uris with rdfns]]> http://jakoblog.de/?p= working with rdf and uris can be annoying because uris such as "http://purl.org/dc/elements/1.1/title" are long and difficult to remember and type. most rdf serializations make use of namespace prefixes to abbreviate uris. for instance "dc" is frequently used to abbreviate "http://purl.org/dc/elements/1.1/", so "http://purl.org/dc/elements/1.1/title" can be written as the qualified name "dc:title". this simplifies working with uris, but someone still has to remember the mappings between prefixes and namespaces. luckily there is a registry of common mappings at prefix.cc.

    a few years ago i created the simple command line tool rdfns and a perl library to look up uri namespace/prefix mappings. meanwhile the program is also available as a debian and ubuntu package, librdf-ns-perl. the newest version (not included in debian yet) also supports reverse lookup to abbreviate a uri to a qualified name. features of rdfns include:

    look up namespaces (as rdf/turtle, rdf/xml, sparql…)

     $ rdfns foaf.ttl foaf.xmlns dbpedia.sparql foaf.json
     
     @prefix foaf:  .
     xmlns:foaf="http://xmlns.com/foaf/ . /"
     prefix dbpedia: 
     "foaf": "http://xmlns.com/foaf/ . /"
     

    expand a qualified name

     $ rdfns dc:title
     
     http://purl.org/dc/elements/ . /title
     

    lookup a preferred prefix

     $ rdfns http://www.w .org/ / /geo/wgs _pos#
     
     geo
     

    create a short qualified name of an url

     $ rdfns http://purl.org/dc/elements/ . /title
     
     dc:title
     

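the lookup logic behind these calls can be sketched in a few lines of ruby; the two-entry prefix map is a stand-in for the full prefix.cc registry, and expand/abbreviate are invented names, not the real tool's code:

```ruby
# Sketch of the expansion/abbreviation rdfns performs. The real tool
# uses the full prefix.cc registry; this two-entry map is a stand-in.
PREFIXES = {
  "dc"   => "http://purl.org/dc/elements/1.1/",
  "foaf" => "http://xmlns.com/foaf/0.1/",
}

# "dc:title" -> "http://purl.org/dc/elements/1.1/title"
def expand(qname)
  prefix, local = qname.split(":", 2)
  PREFIXES.fetch(prefix) + local
end

# Reverse lookup: full URI -> qualified name (or the URI unchanged
# when no registered namespace matches).
def abbreviate(uri)
  prefix, ns = PREFIXES.find { |_, n| uri.start_with?(n) } || []
  prefix ? "#{prefix}:#{uri.delete_prefix(ns)}" : uri
end

puts expand("dc:title")
puts abbreviate("http://xmlns.com/foaf/0.1/name")
```

the round trip abbreviate(expand("dc:title")) yields "dc:title" again, which is exactly the reverse lookup the newest rdfns version adds.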
    i use rdf-ns for all rdf processing to improve readability and to avoid typing long uris. for instance catmandu::rdf can be used to parse rdf into a very concise data structure:

     $ catmandu convert rdf --file rdfdata.ttl to yaml
     
    ]]>
    jakob <![cdata[testing command line apps with app::cmd]]> http://jakoblog.de/?p= this posting has also been published at blogs.perl.org.

    ricardo signes' app::cmd has been praised a lot, so i gave it a try for my recent command line app. in summary, the module is great although i missed some minor features and documentation (reminder to all: if you miss some feature in a cpan module, don't create yet another module but try to improve the existing one!). one feature i like a lot is how app::cmd facilitates writing tests for command line apps. after having written a short wrapper around app::cmd::tester, my formerly ugly unit tests look very simple and clean. have a look at this example:

     use test::more;
     use app::paia::tester;
     
     new_paia_test;
     
     paia qw(config);
     is stdout, "{}\n";
     is error, undef;
     
     paia qw(config -c x.json --verbose);
     is error, "failed to open config file x.json\n";
     ok exit_code; 
     
     paia qw(config --config x.json --verbose foo bar);
     is output, "# saved config file x.json\n";
     
     paia qw(config foo bar);
     paia qw(config base http://example.org/);
     is exit_code, ;
     is output, '';
     
     paia qw(config);
     is_deeply stdout_json, { 
     base => 'http://example.org/',
     foo => 'bar',
     }, "get full config"
     
     done_paia_test;
     

    the application is called paia: that's how it is called at the command line and that's how it is simply called as a function in the tests. the wrapper class (here: app::paia::tester) creates a singleton app::cmd::tester::result object and exports its methods (stdout, stderr, exit_code…). this alone makes the tests much more readable. the wrapper further exports two methods to set up a testing environment (new_paia_test) and to finish testing (done_paia_test). in my case the setup creates an empty temporary directory; other applications might clean up environment variables etc. depending on your application you might also add some handy functions like stdout_json to parse the app's output in a form that can better be tested.
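the same wrapper idea translates to other languages. here is a minimal ruby sketch (run_cmd and result are invented names, nothing to do with app::cmd itself) of running a command in-process while capturing its stdout and exit code for plain assertions:

```ruby
require "stringio"

# Minimal sketch of the test-wrapper idea in Ruby (run_cmd and Result
# are invented names): run a command block in-process, capturing its
# stdout and exit code so tests can make plain assertions on them.
Result = Struct.new(:stdout, :exit_code)

def run_cmd
  captured = StringIO.new
  original = $stdout
  $stdout  = captured
  status   = 0
  begin
    yield
  rescue SystemExit => e
    # `exit N` inside the command raises SystemExit; record the status.
    status = e.status
  ensure
    $stdout = original
  end
  Result.new(captured.string, status)
end

result = run_cmd { puts "{}" }
# result.stdout and result.exit_code can now be asserted directly,
# just like stdout/exit_code in the Perl tests above.
```

running commands in-process like this keeps the tests fast and lets setup/teardown helpers (the new_paia_test/done_paia_test pattern) manage temporary directories around them.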

    ]]>
    jakob <![cdata[my phd thesis about data]]> http://jakoblog.de/?p=

    i have finally received paper copies of my phd thesis "describing data patterns", published and printed via createspace. the full pdf has already been archived as cc-by-sa, but a paper print may still be nice and more handy (it's printed as a small paperback instead of the large a4 pdf). you can get a copy for . € or . € via amazon (isbn - - - ).

    i also set up a little website at aboutdata.org. the site contains an html view of the pattern language that i developed as one result of the thesis.

    i am sorry for not having written the thesis in pandoc markdown but in latex (source code available at github), so there is no epub/html version.

    ]]>
    jakob <![cdata[on the way to a library ontology]]> http://jakoblog.de/?p= i have been working for some years on the specification and implementation of several apis and exchange formats for data used in, and provided by, libraries. unfortunately most existing library standards are either fuzzy, complex, and misused (such as marc), or limited to bibliographic data or authority data, or both. libraries, however, are much more than bibliographic data: they involve library patrons, library buildings, library services, library holdings, library databases etc.

    during the work on formats and apis for these parts of the library world, the patrons account information api (paia) being the newest piece, i found myself more and more on the way to a whole library ontology. the idea of a library ontology started in (now moved to this location), but designing such a broad data model from the bottom up would surely have led to yet another complex, impractical and unused library standard. meanwhile there are several smaller ontologies for parts of the library world, to be combined and used as linked open data.

    in my opinion, ontologies, rdf, semantic web, linked data and all the buzz is overrated, but it includes some opportunities for clean data modeling and data integration, which one rarely finds in library data. for this reason i try to design all apis and formats to be at least compatible with rdf. for instance the document availability information api (daia), created in (and now being slightly redesigned for version . ), can be accessed in xml and in json format, and both can fully be mapped to rdf. other micro-ontologies include:

    • document service ontology (dso) defines typical document-related services such as loan, presentation, and digitization
    • simple service status ontology (ssso) defines a service instance as a kind of event that connects a service provider (e.g. a library) with a service consumer (e.g. a library patron). ssso further defines typical service statuses (e.g. reserved, prepared, executed…) and limitations of a service (e.g. a waiting queue or a delay)
    • patrons account information api (paia) will include a mapping to rdf to express basic patron information, fees, and a list of current services in a patron account, based on ssso and dso.
    • document availability information api (daia) includes a mapping to rdf to express the current availability of library holdings for selected services. see here for the current draft.
    • a holdings ontology should define properties to relate holdings (or parts of holdings) to abstract documents and editions and to holding institutions.
    • gbv ontology contains several concepts and relations used in gbv library network that do not fit into other ontologies (yet).
    • one might further create a database ontology to describe library databases with their provider, extent, apis etc.; right now we use the gbv ontology for this purpose. is there anything to reuse instead of creating just another ontology?!

    the next step will probably be the creation of a small holdings ontology that nicely fits the other micro-ontologies. this ontology should be aligned with or compatible with the bibframe initiative, other ontologies such as schema.org, and existing holding formats, without becoming too complex. the german initiative dini-kim has just launched a working group to define such a holding format or ontology.

    ]]>
    jakob <![cdata[dead end electronic resource citation (erc)]]> http://jakoblog.de/?p= tidying up my phd notes, i found this short rant about "electronic resource citation". i have not used it anywhere, so i publish it here, licensed under cc-by-sa.

    electronic resource citation (erc) was introduced by john kunze with a presentation at the international conference on dublin core and metadata applications and with a paper in the journal of digital information, vol. , no ( ). kunze cited his paper in a call for an erc interest group within the dublin core metadata initiative (dcmi) on the perl lib mailing list, giving the following example of an erc:

     erc: kunze, john a. | a metadata kernel for electronic permanence
     | | http://jodi.ecs.soton.ac.uk/articles/v /i /kunze/
     

    an erc is a minimal &# ;kernel&# ; metadata record that consist of four elements: who, what, when and where. in the given example they are:

     who: kunze, john a.
     what: a metadata kernel for electronic permanence
     when: 
     where: http://jodi.ecs.soton.ac.uk/articles/v /i /kunze/
     
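a kernel record in this form is trivially machine-readable; here is a quick ruby sketch (parse_erc is an invented helper, and the sample record below is made up, not kunze's) that splits the pipe-separated form into the four elements:

```ruby
# Sketch: split a pipe-separated ERC kernel record into its four
# elements. parse_erc is an invented helper (not part of any spec),
# and the sample record is made up for illustration.
def parse_erc(record)
  fields = record.sub(/\Aerc:\s*/, "")
                 .split("|")
                 .map(&:strip)
  %w[who what when where].zip(fields).to_h
end

erc = "erc: doe, jane | an example article | 2001 | http://example.org/a"
puts parse_erc(erc)
# => a hash mapping "who", "what", "when", "where" to the four values
```

the syntax is the easy part; as the rest of this rant argues, the real problems lie in what the four elements are allowed to contain.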

    ironically the given url is obsolete; the host "jodi.ecs.soton.ac.uk" does not even exist anymore. the erc is pretty useless if it just uses a fragile url to cite a resource. how about some value that does not change over time, e.g.:

     where: journal of digital information, volume issue 
     

    as erc is defined as "a location or machine-oriented identifier", one could also use stable identifiers:

     where: issn - , article no. 
     

    both issn and article numbers are much more stable identifiers than urls. citing a url is more like:

     where: at the desk in the little reading room of my library
     

    by the way, the current location is http://www.rice.edu/perl lib/archives/ - /msg .html, but who knows whether texas a&m university will still host the journal at this url in years?

    there are some interesting ideas in the original erc proposal (different kinds of missing values, temper date values, the four questions etc.), but its specification and implementation are just ridiculous and missing references to current technology (you know that you are doing something wrong in a specification if you start to define your own encodings for characters, dates etc. instead of concentrating on your core subject and referring to existing specifications for the rest). the current draft ( ) is a typical example of badly mixing modeling and encoding issues and of losing touch with existing, established data standards.

    in addition to problems at the "low level" of encoding, the "high level" of conceptual modeling lacks appropriate references. what about the relation of erc concepts to models such as frbr and cidoc-crm? why are "who", "when", "where", "what" the important metadata fields (in many cases the most interesting question is "why")? how about ranganathan's colon classification with personality, matter, energy, space, and time?

    in summary the motivation behind erc contains some good ideas, but its form is misdirected.

    ]]>
    jakob <![cdata[access to library accounts for better user experience]]> http://jakoblog.de/?p= i just stumbled upon readersfirst, a coalition of (public) libraries that calls for a better user experience for library patrons, especially for access to e-books. the libraries regret that

    the products currently offered by e-content distributors, the middlemen from whom libraries buy e-books, create a fragmented, disjointed and cumbersome user experience.

    one of the explicit goals of readersfirst is to urge providers of e-content and integrated library systems for systems that allow users to

    place holds, check-out items, view availability, manage fines and receive communications within individual library catalogs or in the venue the library believes will serve them best, without having to visit separate websites.

    in a summary of the first readersfirst meeting in january, the president of queens library (ny) is cited with the following request:

    the reader should be able to look at their library account and see what they have borrowed regardless of the vendor that supplied the ebook.

    this goal matches well with my activity at gbv: as part of a project to implement a mobile library app, i designed an api to access library accounts. the patrons account information api (paia) is currently being implemented and tested by two independent developers. it will also be used to provide a better user experience in vufind discovery interfaces.

    during the research for paia i was surprised by the lack of existing methods to access library patron accounts. some library systems do not even provide an internal api to connect to the loan system, not to speak of a public api that could directly be used by patrons and third parties. the only example i could find was york university libraries with a simple, xml-based, read-only api. this lack of public apis to library patron accounts is disappointing, given that it is almost ten years after the buzz around web 2.0, service-oriented architecture, and mashups. all major providers of web applications (google, twitter, facebook, stackexchange, github etc.) support access to user accounts via apis.

    the patrons account information api will hopefully fill this gap with defined methods to place holds and to view checked-out items and fines. paia is agnostic to specific library systems, aligned with similar apis as listed above, and designed with rdf in mind (without any need to bother with rdf, apart from the requirement to use uris as identifiers). feedback and implementations are very welcome!

    ]]>
blog.cbeer.info

• autoscaling aws elastic beanstalk worker tier based on sqs queue length
• ldpath in examples
• building a pivotal tracker irc bot with sinatra and cinch
• real-time statistics with graphite, statsd, and gdash
• icemelt: a stand-in for integration tests against aws glacier

google is killing its 'one today' donation app w/ only one week's notice
ben schoon, jan. | 9to5google

for quite some time, google's "one today" app has made it really easy to support nonprofit charities from a wide variety of causes. after a few years, though, google is set to kill off one today, and it's only giving users a single week's notice. not to be confused with google one, the storage hub for your account, one today has been available since . in an email to its users today (via android police), google explains that a decline in the use of one today over the years triggered the company to shut things down.
beyond just allowing for easy donations to nonprofits, the one today app also hosted projects with photos and videos showing how your donation is helping the cause. the app also provided users with a single receipt at the end of the year to be used for tax deductions. google one today will be shut down next week, on february . by that date, % of funds donated prior will be sent to the relevant nonprofit organizations. after that, the app will be turned off and all projects deleted.

hello, we have an important update to share with you. we launched google one today seven years ago to help people donate to causes they care about. in the last few years, we have seen donors choose other products to fundraise for their favorite nonprofits. as a result, we will shut down one today on february . new nonprofits will no longer be able to sign up for one today. the google one today app will be turned off, and any open projects will be deleted. we will ensure that % of funds donated on one today prior to february are disbursed to the relevant nonprofits. if you have any questions, please feel free to contact the one today team. thank you for your donations and partnership. the google one today team

google one today wasn't exactly a well-known product, so its loss probably won't be felt by a ton of people. still, it's a shame to see such a useful concept go to waste.

more on google graveyard:
• google made an actual product graveyard for halloween w/ g+, reader, more
• google shuts down translator toolkit this week after a decade
• as google+ goes dark, here's what the social network meant to the 9to5google team
cambridge bitcoin electricity consumption index (cbeci): bitcoin mining map

note: average monthly hashrate share by country and region for the selected period, based on geolocational mining pool data. updates are scheduled on a monthly basis subject to data availability (generally with a delay of one to three months). all changes and updates are listed in the change log. the data can be downloaded in csv format.

note: seasonal variance in renewable energy production causes a pattern where mining operations move between regions within china to benefit from cheap and abundant power. all information on this page is based on an exclusive sample of geolocational mining facility data collected in partnership with several bitcoin mining pools (please visit the methodology page for further information). we would like to thank btc.com, poolin, viabtc, and foundry for their contribution to this research project. if you are a mining pool operator and would like to contribute to this research, please get in touch.

[figures: mining provinces; mining countries]

cambridge centre for alternative finance

san francisco symphony - sibelius: symphony no.
in c major, opus program notes sibelius: symphony no. in c major, opus jean sibelius born: december , . tavastehus (hämeenlinna), finland died: september , . järvenpää composed: also begun in , completed march , world premiere: march , . the composer conducted the konsertförening orchestra, at the auditorium in stockholm, sweden sfs performances: first—january . thomas beecham conducted. most recent—february . michael tilson thomas conducted instrumentation: flutes, oboes, clarinets, bassoons, horns, trumpets, trombones, timpani, and strings duration: about minutes symphony no.
the backstory  in , sibelius was not only completing his fifth symphony and beginning his sixth; he was also getting underway with his seventh, which would ultimately bring his symphonic output to a close. work proceeded in parallel for some while, but not until sibelius signed off on his sixth could he focus on his seventh symphony without distraction, which he did for a further thirteen months. on may , , he wrote a letter to an unknown recipient that relates, in somewhat telegraphic style, the formative stage of his final three symphonies. this document provides one of the earliest glimpses into the seventh symphony as a work-in-progress:     my new works, partly sketched and planned. the fifth symphony is in a new form.…a spiritual intensification until the end. triumphal. the sixth symphony is wild and impassioned in character. somber, with pastoral contrasts.… the seventh symphony. joy of life and vitalité with appassionata passages. in three movements—the last a “hellenic rondo.” all this with due reservations...it looks as if i shall come out with all three of these symphonies at the same time.…as usual, the sculptural is more prominent in my music.…with regard to symphonies vi and vii, the plans may possibly be altered, depending on the way my musical ideas develop. as usual, i am a slave to my themes and submit to their demands.…these new symphonies of mine are more in the nature of professions of faith than my other works. sibelius makes clear that, when his seventh symphony was in its early stages, he sensed that it would comprise three separate movements. in the end, he brought everything together into a single movement lasting some twenty-two minutes. the form is not one traditionally associated with a symphony; in fact, sibelius intended to title the piece fantasia sinfonica. that’s what it was called when he conducted the premiere, and the first few ensuing performances. 
he changed his mind only shortly before the work’s publication, writing to the publisher wilhelm hansen, “best if its name is symphonie no. (in einem satze) [‘in one movement’],” thereby admitting it to the roster of his full-scale, “proper” symphonies. within that single, brief movement sibelius’s music passes through eleven discrete sections marked with differing tempos. some analysts, wanting to associate this piece more closely with traditional symphonic form, have linked some sections so as to suggest a four-movement piece. thus viewed, the groupings tend to fall something like this: i. adagio— ii. vivacissimo—adagio— iii. allegro molto moderato—allegro moderato— iv. vivace—presto—adagio—largamente molto— . . . with a coda consisting of affettuoso—tempo i other readings view the ultimate structure as an outgrowth of sibelius’s initial three-movement plan. and yet, it seems somehow unnecessary to explain away, and in a sense undermine, sibelius’s hard-won efforts to let this symphony flow according to its own will. the music commentator donald francis tovey compared the experience of listening to sibelius’s seventh to the sensation of flying in an aircraft. “an aeronaut carried with the wind,” he remarked, “has no sense of movement at all. . . . he moves in the air and can change his pace without breaking his movement.” the musicologist james hepokoski, author of the sibelius article in the new grove dictionary of music and musicians (second edition), is a partisan of the four-section foundation, but he seems to find that aspect less important than the cohesive flow of this unusual work.
“its ad hoc structure,” he writes, “emerges link-by-link from the transformational processes of the musical ideas themselves—a content-based form constantly in the process of becoming.” elsewhere in his article on the composer he proclaims the seventh symphony to be “surely sibelius’s most remarkable compositional achievement.” in this work, sibelius essentially created a bridge between the disparate worlds of the multi-movement symphony and the single-movement symphonic poem, and he did it with music that is in turn lofty, serene, dignified, and passionate, music of unearthly beauty—a summit achievement near the end of a productive career. the music  the piece opens with a triple-beat on the timpani—an echo of the way he had begun the second movement of his sixth symphony. it is a call-to-attention; and yet, marked piano, it is a whispered summons, a way for sibelius to advise us that, even when the ensuing symphony grows loud, we should be prepared for stillness to overtake the landscape without much warning. it is characteristic of sibelius to reverse himself in such ways—to reach a dynamic climax only to pull back unexpectedly in volume, to achieve a rapid tempo and suddenly back away into something slower, to arrive at a place of transcendent peacefulness that transforms into a pang of anguish. several themes play important roles in this symphony, one sometimes giving rise to another through related intervals or contours. they include a rising scale from the strings early on, and a descending response weaving gently through the woodwinds; a gleaming trombone solo of vast scope in the adagio (a recurring section, its recollection helping ground the somewhat free-form structure of this symphony); a fanfare-like theme in the winds for the allegro molto moderato section.
though unquestionably anchored in the key of c, sibelius’s narrative is decidedly not that of a classical symphony based on sonata forms and rondos and the like; the “hellenic rondo” he initially envisaged largely evaporated as he worked on the piece, just as had the “wild and impassioned” character he had planned for the sixth symphony. nonetheless, the unrolling of the seventh symphony is clear to follow thanks to the specific qualities of its themes, the distinct flavors of its orchestral textures, and the defined character of its episodes. —james m. keller james m. keller is program annotator of the san francisco symphony and the new york philharmonic. his book chamber music: a listener’s guide (oxford university press) is now also available as an e-book and as an oxford paperback. more about the music recordings: herbert blomstedt conducting the san francisco symphony (decca)  |  leif segerstam conducting the helsinki philharmonic (ondine)  |  neeme järvi conducting the gothenburg symphony (deutsche grammophon)  |  simon rattle conducting the city of birmingham symphony orchestra (emi) reading: jean sibelius, by erik tawaststjerna, in english translation by robert layton (faber & faber and university of california press; three volumes, out of print but peerless)  |  the music of jean sibelius, by burnett james (fairleigh dickinson university press)  |  sibelius, by andrew barnett (yale university press)  |  the cambridge companion to sibelius, edited by daniel m. grimley (cambridge) (june )
dshr's blog: chia network i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation. tuesday, september , chia network back in march i wrote proofs of space, analyzing bram cohen's fascinating ee talk. i've now learned more about chia network, the company that is implementing a network using his methods. below the fold i look into their prospects. chia network's blockchain first, it is important to observe that, although proofs of space and time are important for distributed storage networks such as filecoin's, this is not what chia network is using cohen's proofs of space and proofs of time (verifiable delay functions vdf) for. instead, they are using them as a replacement for proof of work in blockchains such as bitcoin's. here is the brief explanation of how they would be used i wrote in proofs of space: as i understand it, the proof of space technique in essence works by having the prover fill storage space with an array of pseudo-random points in [ , ] (a plot) via a time-consuming process. the verifier can then pose to the prover a question that can be answered either by a single storage access (fast) or by repeating the process of filling the storage (slow). by observing the time the prover takes the verifier can distinguish these two cases, and thus be assured that the prover has stored the (otherwise useless) data.
as i understand it, verifiable delay functions work by forcing the prover to perform a specified number of iterations to generate a value that the verifier can quickly show is valid. to use in a blockchain, each block is a proof of space followed by a proof of time which finalizes it. to find a proof of space, take the hash of the last proof of time, put it on a point in [ , ], find the closest proof of space you can to that. to find the number of iterations of the proof of time, multiply the difference between those two positions by the current work difficulty factor and round up to the next integer. the result of this is that the best proof of space will finish first, with the distribution of arrival times of finalizations the same as happens in a proof of work system if resources are fixed over time. the only discretion left on the part of farmers is whether to withhold their winning proofs of space. in other words, the winning verification of a block will be the one from the peer whose plot contains the closest point, because that distance controls how long it will take for the peer to return its verification via the verifiable delay function. the more storage a peer devotes to its plot, and thus the shorter the average distance between points, the more likely it is to be the winner because its proof of space will suffer the shortest delay. there are a number of additional points that need to be understood: the process of filling storage with pseudo-random points must be slow enough that a peer cannot simulate extra storage by re-filling the limited storage it has with a new array of points. the process is presumably linear in the size of the storage to be filled, in the speed of the cpu, and in the write bandwidth of the storage medium. but since it is done only once, the slowness doesn't impair the proof of space time. 
the delay imposed by the proof of time must be orders of magnitude longer than the read latency of the storage medium, so that slow storage is not penalized, and fast storage such as ssds is not advantaged. the process of filling storage has to be designed so that it is impossible to perform iterative refinement, in which a small amount of permanent storage is used to find the neighborhood of the target point, then a small amount of storage filled with just that neighborhood, and the process repeated to zero in on the target, hoping that the return in terms of reduced vdf time exceeds the cost of populating the small neighborhood. as i understand it, chia network's goals are that their blockchain would be more energy efficient, more asic-resistant, less centralized, and more secure than proof of work blockchains. given the appalling energy waste of proof of work blockchains, even if chia only managed the first of these it would be a really good thing. energy efficiency as regards energy efficiency, i believe a peer's duty cycle looks like this: compute hash, consuming infinitesimal energy in the cpu. access value from the disk by waking it from standby, doing a seek and a read, and dropping back into standby. this uses little energy. do a proof of time, which takes a long time with the cpu and ram using energy, and the disk in standby using perhaps w. so the overall energy consumption of posp/vdf depends entirely on the energy consumption of the proof of time; the proof of space is irrelevant. two recent papers on vdf are dan boneh et al's verifiable delay functions and a survey of two verifiable delay functions. the math is beyond me, but i believe their vdfs all involve iterating on a computation in which the output of each iteration is the input for the next, to prevent parallelization. if this is the case, during the proof of time one of the cpu's cores will be running flat-out and not using any ram.
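The proof-of-space / proof-of-time interplay described above can be sketched as a toy simulation. Everything here is hypothetical: the plot sizes, seeds, and difficulty are invented, and the "VDF" is plain iterated SHA-256, which captures the sequential, non-parallelizable delay but not the fast verification a real verifiable delay function provides.

```python
import hashlib

def H(data: bytes) -> float:
    """Map bytes to a point in [0, 1) via SHA-256."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") / 2**256

def make_plot(seed: bytes, size: int) -> list:
    """Toy 'plot': a table of pseudo-random points in [0, 1).
    (A real plot must be filled by a deliberately slow process.)"""
    return sorted(H(seed + i.to_bytes(4, "big")) for i in range(size))

def best_proof(plot: list, challenge: float) -> float:
    """The closest stored point to the challenge -- one 'storage access'."""
    return min(plot, key=lambda p: abs(p - challenge))

def vdf(seed: bytes, iterations: int) -> bytes:
    """Sequential iterated hashing: each step depends on the previous
    one, so the delay cannot be parallelized away."""
    out = seed
    for _ in range(iterations):
        out = hashlib.sha256(out).digest()
    return out

# A farmer with a bigger plot finds a closer point on average, so its
# proof of time needs fewer iterations and it tends to finish first.
difficulty = 1000
challenge = H(b"hash of the last proof of time")
for name, size in [("small farmer", 100), ("large farmer", 10_000)]:
    plot = make_plot(name.encode(), size)
    distance = abs(best_proof(plot, challenge) - challenge)
    iterations = int(distance * difficulty) + 1  # round up, per the post
    vdf(b"finalize", iterations)
    print(f"{name}: distance {distance:.5f}, {iterations} vdf iterations")
```

Run repeatedly with different challenge seeds and the larger plot wins most of the time, which is the whole point: expected reward is proportional to storage committed.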
thus i don't see that the energy demands of these vdfs are very different from the energy demands of hashing, as in proof of work. so if every peer ran a vdf the chia network would use as much energy as bitcoin. but, as i understand it, this isn't what chia is planning. instead, the idea is that there would be a small number of vdf services, to which peers would submit their proofs of space. in this way the energy used by the network would be vastly smaller than bitcoin's. asic resistance the math of vdfs is based on imposing a number of iterations, not on imposing a delay in wall clock time. there may be an advantage to executing the vdf faster than the next peer. as david vorick argues: at the end of the day, you will always be able to create custom hardware that can outperform general purpose hardware. i can’t stress enough that everyone i’ve talked to in favor of asic resistance has consistently and substantially underestimated the flexibility that hardware engineers have to design around specific problems, even under a constrained budget. for any algorithm, there will always be a path that custom hardware engineers can take to beat out general purpose hardware. it’s a fundamental limitation of general purpose hardware. thus one can expect that if the rewards are sufficient, asics will be built to speed up any time-consuming algorithm (i agree that cohen's proof of space is unlikely to attract asics, since the time it takes is negligible). i don't know what happens if a vdf service a with asics gets a point with distance d and, thanks to the asics, announces it significantly before vdf service b without asic assistance gets a point with distance d-ε and announces it. in the bitcoin blockchain, all possible solutions to a block are equally valid. a later, different solution to the block is just later, and thus inferior. but in the chia blockchain a later solution to the block could be unambiguously better (i.e. 
provably having performed fewer iterations) if it came from a slower peer. how does the network decide how long to wait for a potentially better solution? if the network doesn't wait long enough, and the rewards are enough, it will attract asics. i believe that chia hopes that the vdf services will all use asics, and thus be close in speed. but there won't be enough vdf servers to make a market for asics. tim swanson estimates that there are about . m antminer s s mining on the bitcoin blockchain. that's a market for asics, but a few s isn't. perhaps fpgas could be used instead. centralization proof of work blockchains become centralized through the emergence of mining pools: bitcoin gives out a block reward, btc as of today, every minutes on average. this means that a miner whose power is a small fraction of the total mining power is unlikely to win in a very long time. since this can take years, miners congregate in pools. a pool has a manager, and whenever a participant in the pool finds proof of work, the reward goes to the manager, which distributes it among the participants according to their contribution to the pool. but the emergence of pools isn't due to proof of work; it is a necessary feature of every successful blockchain. consider a blockchain with a -minute block time and k equal miners. a miner will receive a reward on average once in . years. if the average chia network miner's disk in such a network has a -year working life the probability that it will not receive any reward is . %. this is not a viable sales pitch to miners; they join pools to smooth out their income. thus, all other things being equal, a successful blockchain would have no more than say equal pools, generating on average a reward every hours. but all other things are not equal, and this is where brian arthur's increasing returns and path dependence in the economy comes in.
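The income-smoothing arithmetic above can be reproduced in a few lines. The figures in the example (100,000 equal farmers, 10-minute blocks, a 5-year disk working life) are hypothetical stand-ins for the numbers discussed in the text.

```python
def p_no_reward(n_miners: int, block_minutes: float, years: float) -> float:
    """Probability that a solo miner with 1/n_miners of the network's
    capacity wins no block reward at all over the given period."""
    blocks = years * 365.25 * 24 * 60 / block_minutes
    return (1 - 1 / n_miners) ** blocks

# Hypothetical stand-ins: 100,000 equal farmers, 10-minute blocks,
# a 5-year disk working life.
p = p_no_reward(100_000, 10, 5)
print(f"Chance of zero reward over the disk's life: {p:.1%}")

# Mean time to a first reward for such a miner, in years.
wait = 100_000 * 10 / (60 * 24 * 365.25)
print(f"Mean time to first reward: {wait:.1f} years")
```

With these placeholder numbers the mean wait for a first reward is around two years, so a meaningful fraction of disks would earn nothing at all before they wore out; that is why pools are inevitable.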
randomly, one pool will be bigger than the others, and will generate better returns through economies of scale. the better returns will attract miners from other pools, so it will get bigger and generate even better returns, attracting more miners. this feedback loop will continue, as it did with ghash.io. bitcoin's mining pools are much larger than needed to provide reasonable income smoothing, though the need to retain the appearance of decentralization means that it currently takes pools (two apparently operated by bitmain) to exceed % of the mining power. presumably economies of scale generating better return on investment account for the additional size. as regards chia network's centralization: just as with bitcoin's proof of work, farming pools would arise in posp/vdf to smooth out income. what would pools mining a successful chia network blockchain look like? let's assume initially that it is uneconomic to develop asics to speed up the vdf. there would appear to be three possible kinds of participants in a pool: individuals using the spare space in their desktop pc's disk. the storage for the proof of space is effectively "free", but unless these miners joined pools, they would be unlikely to get a reward in the life of the disk. individuals buying systems with cpu, ram and disk solely for mining. the disruption to the user's experience is gone, but now the whole cost of mining has to be covered by the rewards. to smooth out their income, these miners would join pools. investors in data-center scale mining pools. economies of scale would mean that these participants would see better profits for less hassle than the individuals buying systems, so these investor pools would come to dominate the network, replicating the bitcoin pool centralization. thus if chia's network were to become successful, mining would be dominated by a few large pools.
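Arthur's feedback loop can be sketched as a minimal simulation; the `scale_bonus` and `churn` parameters are invented for illustration, not drawn from any data. Each round, the biggest pool offers the best return per unit of capacity and attracts a fraction of every other pool's miners.

```python
def simulate_pools(sizes, rounds=50, scale_bonus=0.01, churn=0.05):
    """Toy increasing-returns dynamics: a pool's return per unit of
    capacity rises with its size (economies of scale), and each round
    a fraction of every other pool's miners migrates to the pool with
    the best return."""
    sizes = list(sizes)
    for _ in range(rounds):
        returns = [1 + scale_bonus * s for s in sizes]
        best = returns.index(max(returns))
        for i in range(len(sizes)):
            if i != best:
                moved = churn * sizes[i]
                sizes[i] -= moved
                sizes[best] += moved
    return sizes

# Ten pools, one starting marginally larger by chance.
pools = simulate_pools([10.0] * 9 + [10.5])
share = max(pools) / sum(pools)
print(f"Largest pool's share after 50 rounds: {share:.0%}")
```

A 5% initial size advantage snowballs into overwhelming dominance, which is the path-dependence argument in miniature: the outcome is driven by the feedback loop, not by the consensus mechanism.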
each pool would run a vdf server to which the pool's participants would submit their proofs of space, so that the pool manager could verify their contribution to the pool. the emergence of pools, and dominance of a small number of pools, has nothing to do with the particular consensus mechanism in use. thus i am skeptical that alternatives to proof of work will significantly reduce centralization of mining in blockchains generally, and in chia network's blockchain specifically. security like radia perlman, eric budish in the economic limits of bitcoin and the blockchain (commentary on it here) observes that: from a computer security perspective, the key thing to note about ( ) is that the security of the blockchain is linear in the amount of expenditure on mining power, ... in contrast, in many other contexts investments in computer security yield convex returns (e.g., traditional uses of cryptography) — analogously to how a lock on a door increases the security of a house by more than the cost of the lock. budish shows that the security of a blockchain against a % attack depends on the per-block mining reward being large relative to the maximum value of the transactions in a block. reducing the expenditure on mining makes the blockchain less secure. as i wrote in cryptocurrencies have limits: budish combines equations & to get equation : pblock > vattack⁄α this inequality expresses the honest equilibrium condition for deterring an outsider's % attack (to deter insiders, pblock has to be twice as big): the equilibrium per-block payment to miners for running the blockchain must be large relative to the one-off benefits of attacking it. equation ( ) places potentially serious economic constraints on the applicability of the nakamoto ( ) blockchain innovation. by analogy, imagine if users of the visa network had to pay fees to visa, every ten minutes, that were large relative to the value of a successful one-off attack on the visa network. 
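Budish's inequality is simple enough to encode directly. The dollar figures in the example are hypothetical, and alpha is left abstract, as in the quoted inequality.

```python
def min_block_reward(attack_value: float, alpha: float,
                     insider: bool = False) -> float:
    """Budish's honest-equilibrium bound: the per-block payment to
    miners must exceed v_attack / alpha to deter an outsider's 51%
    attack, and twice that to deter insiders."""
    bound = attack_value / alpha
    return 2 * bound if insider else bound

def is_secure(block_reward: float, attack_value: float,
              alpha: float) -> bool:
    return block_reward > min_block_reward(attack_value, alpha)

# Hypothetical numbers: a one-off attack worth $10M, alpha = 1, and a
# per-block reward of $125,000.
print(is_secure(block_reward=125_000, attack_value=10_000_000, alpha=1.0))
# False: the reward is far below the attack's one-off value.
```

The point of the Visa analogy follows directly: the bound ties the *recurring* per-block payment to the *one-off* attack value, which is an expensive way to buy security.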
this is why, especially with the advent of "mining as a service", % attacks on alt-coins have become endemic. there is nothing specific to proof of work in budish's analysis; the same limit will apply to chia network's blockchain. but cohen hopes to motivate the use of spare space in individuals' desktops by keeping mining rewards small. his first slide claims: as long as rewards are below depreciation value it will be unprofitable to buy storage just for farming. i believe that by depreciation value he means the depreciation of all the storage in the network. robert fontana and gary decad's numbers are vendor revenue of $ . /gb for hard disk. assuming -year straight-line depreciation ( , -minute blocks), the block reward and thus the maximum value of the transactions in a block must be less than $ . /pb. kryder's law means this limit will decrease with time. note also that it is possible at low cost to rent very large amounts of storage and computation for short periods of time in order to mount a % attack on a posp/vdf network in the same way that "mining as a service" enables % attacks on alt-coins. for example, a back-of-the-envelope computation of a hour-long petabyte attack at amazon would start by using about days of aws free tier to write the data to the sc version of elastic block storage. the ebs would cost $ /pb/hr, so the setup would cost $ . then the actual hour-long attack would cost $ , for a total of just under $ . i'm skeptical that the low-reward approach to maintaining decentralization is viable. burstcoin it turns out that for the past four years there has been a proof-of-space coin, burstcoin. its "market cap" spiked on launch and when bitcoin spiked, but it is currently under . k btc with a -hr volume just over btc. there are currently cryptocurrencies with greater -day volume than burst. so it hasn't been a great success. as expected, mining is dominated by a few pools.
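The depreciation bound above can be written as a small calculator. The network size, drive price, drive lifetime, and block time below are all hypothetical placeholders for the elided figures, chosen only to show the shape of the computation.

```python
def depreciation_per_block(network_pb: float, price_per_tb: float,
                           life_years: float, block_minutes: float) -> float:
    """Straight-line depreciation of the network's storage, allocated
    per block: under Cohen's criterion this caps the block reward if
    buying storage just to farm is to stay unprofitable."""
    capital = network_pb * 1000 * price_per_tb  # 1 PB = 1000 TB
    blocks = life_years * 365.25 * 24 * 60 / block_minutes
    return capital / blocks

# Hypothetical placeholders: 100 PB of plots at $25/TB, a 5-year drive
# life, and 10-minute blocks.
cap = depreciation_per_block(100, 25.0, 5, 10)
print(f"Block reward must stay below ${cap:.2f}")
```

Note the bound scales linearly with both network size and drive price, so as Kryder's law pushes the price per terabyte down, the maximum safe block reward, and hence (per Budish) the maximum safe transaction value per block, falls with it.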
despite the lack of success, burstcoin claims to have about pb of storage devoted to mining. let's take that at face value for now. using the same numbers as above, that's $ m in capital. with -year straight-line depreciation and a -minute block time ( k blocks) the depreciation is $ . per block. the reward is currently coins/block or at current "market price" $ . , so they are below bram cohen's criterion for not being worth investing just for mining. despite that, last year people were claiming to be mining with - tb, and earning ~ coins/day. that clearly isn't happening today. there are blocks/day so the daily mining reward is , coins, or $ . k. tb would be . e- of the claimed mining power, so should gain . coins or $ . /day. cheap tb drives at newegg would be around $ k, so the -year depreciation would be $ . /day for a loss of $ . /day before considering power, etc. and as the rewards decreased the loss would increase. suppose i have tb spare space in my desktop pc. i'd have e- of the claimed mining power so, before pool expenses, i could expect to earn about c/day, decreasing. why bother? mining burstcoin only makes sense if you have free access to large amounts of spare disk space for substantial periods. despite the rhetoric, it isn't viable to aggregate the unused space in people's desktops. deployment the history of burstcoin might be seen as a poor omen for chia. but bittorrent has given cohen a stellar reputation, and chia has funding from andreessen horowitz, so it is likely to fare better. nevertheless, chia mining isn't likely to be the province of spare space on people's desktops: there are fewer and fewer desktops, and the average size of their disks is decreasing as users prefer smaller, faster, more robust ssds to the larger, slower, more fragile . " drives that have mostly displaced the even larger . " drives in desktops. the only remaining market for large, .
" drives is data centers, and drive manufacturers are under considerable pressure to change the design of their drives to make them more efficient in this space. this will make them unsuitable for even those desktops that want large drives. so, if the rewards are low enough to prevent investment in storage dedicated to mining, where is the vast supply of free hard disk space to come from? i can see two possible sources: manufacturers could add to their hard drives' firmware the code to fill them with a plot during testing and then mine during burn-in before shipment. this is analogous to what butterfly labs did with their mining hardware, just legal. cloud data centers need spare space to prepare for unexpected spikes in demand. experience and "big data" techniques can reduce the amount, but enough uncertainty remains that their spare space is substantial. so it is likely that the chia blockchain would be dominated by a small number of large companies ( disk manufacturers, maybe clouds). arguably, these companies would be more trustworthy and more decentralized than the current bitcoin miners. posted by david. at : am labels: bitcoin, cloud economics, kryder's law, storage costs blog rules posts and comments are copyright of their respective authors who, by posting or commenting, license their work under a creative commons attribution-share alike . united states license. off-topic or unsuitable comments will be deleted.
dshr's blog: proof-of-stake in practice tuesday, march , proof-of-stake in practice at the most abstract level, the work of eric budish, raphael auer, joshua gans and neil gandal is obvious. a blockchain is secure only if the value to be gained by an attack is less than the cost of mounting it. these papers all assume that actors are "economically rational", driven by the immediate monetary bottom line, but this isn't always true in the real world. as i wrote when commenting on gans and gandal: as we see with bitcoin's lightning network, true members of the cryptocurrency cult are not concerned that the foregone interest on capital they devote to making the system work is vastly greater than the fees they receive for doing so. the reason is that, as david gerard writes, they believe that "number go up". in other words, they are convinced that the finite supply of their favorite coin guarantees that its value will in the future "go to the moon", providing capital gains that vastly outweigh the foregone interest. follow me below the fold for a discussion of a recent attack on a proof-of-stake blockchain that wasn't motivated by the immediate monetary bottom line.
steem was one of the efforts to decentralize the web discussed in the mit report, which pointed out that: right now, the distribution of sp across users in the system is very unequal -- more than % of sp tokens are held by less than % of account holders in the system. this immense disparity in voting power complicates steemit’s narrative around democratized content curation -- it means that a very small number of users are extremely influential and that the vast majority of users’ votes are virtually inconsequential. now this has proven true. david gerard reports that: distributed proof-of-stake leaves your blockchain open to takeover bids — such as when justin sun of tron tried to take over the steem blockchain, by enlisting exchanges such as binance to pledge their holdings to his efforts. gerard links to yulin cheng's tron takeover? steem community in uproar as crypto exchanges back reversal of blockchain governance soft fork, a detailed account of events. first: on feb. , steemit entered into a "strategic partnership" with tron that saw steemit's chairman declare on social media that he had sold steemit to [justin sun], referring to tron's founder. the result was that: concerns that tron might possess too much power over the network resulted in a move by the steem community on feb. to implement a soft fork. the soft fork deactivated the voting power of a large number of tokens owned by tron and steemit. that was soft fork . . one week later, on march nd, tron arranged for exchanges, including huobi, binance and poloniex, to stake tokens they held on behalf of their customers in a % attack: according to the list of accounts powered up on march , the three exchanges collectively put in over million steem power (sp). with an overwhelming amount of stake, the steemit team was then able to unilaterally implement hard fork .
to regain their stake and vote out all top community witnesses (server operators responsible for block production) using account @dev as a proxy. In the current list of Steem witnesses, Steemit and TRON's own witnesses took up the first slots.

Although this attack didn't provide TRON with an immediate monetary reward, the long-term value of retaining effective control of the blockchain was vastly greater than the cost of staking the tokens. I've been pointing out that the high Gini coefficients of cryptocurrencies mean that proof-of-stake centralizes control of the blockchain in the hands of the whales since Why Decentralize?, which quoted Vitalik Buterin pointing out that a realistic scenario was:

In a proof of stake blockchain, % of the coins at stake are held at one exchange.

Or, in this case, three exchanges cooperating. Apparently, the tokens that the soft fork blocked from voting were mined before the blockchain went live and retained by Steemit:

"The stake was essentially premined and was always said to be for on-boarding and community building. The witnesses decided to freeze it in an attempt to prevent a hostile takeover of the network," [@jeffjagoe] told The Block. "But they forgot Justin has a lot of money, and money buys buddies at the exchanges."

Vitalik Buterin commented:

"Apparently Steem DPOS got taken over by big exchanges voting with depositors' funds," he tweeted. "Seems like the first big instance of a 'de facto bribe attack' on coin voting (the bribe being exchanges giving holders convenience and taking their votes)."

As Buterin wrote in , proof-of-stake turned out to be non-trivial.

Posted by David. Labels: bitcoin, distributed web

Blog rules: Posts and comments are copyright of their respective authors who, by posting or commenting, license their work under a Creative Commons Attribution-Share Alike United States License.
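The arithmetic behind the takeover described above is simple majority-of-stake voting: if a few large holders (such as exchanges staking depositors' tokens) jointly control more than half the stake, everyone else's votes are irrelevant. A minimal sketch, with hypothetical stake figures:

```python
# Sketch of why concentrated stake enables a takeover: the colluding
# holders win any vote if their combined stake is a strict majority.
# All holder names and stake amounts are hypothetical.

def can_take_over(stakes: dict, colluding: list) -> bool:
    """True if the colluding holders control a strict majority of stake."""
    total = sum(stakes.values())
    return sum(stakes[h] for h in colluding) > total / 2

stakes = {
    "exchange_a": 30_000_000,  # staking depositors' tokens
    "exchange_b": 20_000_000,
    "exchange_c": 12_000_000,
    "community":  55_000_000,  # thousands of small holders combined
}

# No single exchange can do it alone...
print(can_take_over(stakes, ["exchange_a"]))  # False
# ...but three exchanges cooperating outvote the entire community.
print(can_take_over(stakes, ["exchange_a", "exchange_b", "exchange_c"]))  # True
```

This is exactly Buterin's "coins held at one exchange" scenario generalized to a handful of cooperating exchanges.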
Off-topic or unsuitable comments will be deleted.
Information Literacy Competency Standards for Higher Education (ALA Institutional Repository)

URI: http://hdl.handle.net/ /
Abstract: The Information Literacy Competency Standards for Higher Education (originally approved in January ) were rescinded by the ACRL Board of Directors on June , , at the ALA Annual Conference in Orlando, Florida, which means they are no longer in force.
A Survey on Long-Range Attacks for Proof of Stake Protocols | IEEE Journals & Magazine | IEEE Xplore

Baserow: open source no-code database and Airtable alternative

New: July release. Create your own online database without technical experience. Our user-friendly no-code tool gives you the powers of a developer without leaving your browser. Prefer to self-host?
Deploy Baserow to Heroku, Cloudron or Ubuntu.

A no-code platform that grows: Are your projects, ideas or business processes unorganized or unclear? Do you have many tools for one job? With Baserow you decide how you want to structure everything, whether you're managing customers, products, airplanes or all of them. If you know how a spreadsheet works, you know how Baserow works.

Flexible software: Software tailored to your needs instead of the other way around. Clear data, accessible by all your team members. No more unorganized projects, ideas and notes. One interface for everything. Easily integrate with other software. Collaborate in real time. Unlimited rows. Fast!

Developer friendly: Easily create custom plugins with our boilerplate or use third-party ones. Because Baserow is built with modern and proven frameworks it feels like a breeze for developers. Built with Django and Nuxt. Open source. Self-hosted. Headless and API-first. Works with PostgreSQL. Supports custom and third-party plugins.

Ready to bring structure to your organisation? We're in such an early phase that we can't yet offer you everything we want. Because we appreciate everyone who tries out Baserow, you may use the SaaS version for free! For now at least.

Hosted SaaS version: free for now. For people and companies that want to try out an early version. Early access to the latest features. Unlimited databases and tables. Support via the live contact form and email. Organize your workflow and projects. Receive the latest updates. No costs at this point.

Self-hosted open source: always free. For everyone who wants to self-host or develop custom plugins. Unlimited users, rows and databases. Easy install with our step-by-step guide. Will always be free. MIT license. Custom plugins.

Early premium: € per user / month. For companies with advanced needs. Can be self-hosted. Unlimited users, rows and databases. Admin dashboard. Role-based permissions.
Kanban and calendar views. Lots of other features.

Make your business future-proof: Spend time on running and innovating your business without worrying about software or lost data. Because of our open-source nature and dedicated development team, you decide it all. Short development cycles: frequent releases and fast bug fixes make sure you never fall behind. No vendor lock-in: our open-source core means that you can run Baserow independently on your own server. Blazingly fast: continuously tested with + rows per table while in development. Connect with software: Baserow is API-first, which means it is made to connect with other software.

Templates: Find inspiration or a starting point for your database in one of our templates, such as the project tracker, applicant tracker, personal task manager and feature roadmap.

Roadmap:
March: templates, search and performance. The ability to install templates, re-order field columns, additional date filters, searching in grid view, a phone field and a huge interface performance improvement.
April: order rows and user admin. Order rows by drag and drop, manage users as admin (premium), run Baserow on your own device locally.
May: exporting, importing and more admin. Exporting to CSV, JSON and XML, an admin dashboard and group management (premium), and importing additional formats like Excel and JSON.
June: trash and form view. Restore deleted items with trash functionality, form view, and re-ordering of applications, tables, views and select options.
July: form view, date fields and row comments. Improved form view, created-on field, last-modified field, link-to-table filters and row comments (premium).
August: advanced fields and Zapier. Formula field, multiple select field, lookup field and an integration with Zapier.
September: kanban view and webhooks. Kanban view (premium) to track progress, webhooks and configurable row height.
October: undo/redo and gallery view. Advanced undo/redo functionality and a gallery view to list your data in a more user-friendly and manageable way.
November: public view sharing and multiple copy/paste. Public grid view sharing, additional link-row filters, an n8n node and copy/pasting multiple values.
December: footer calculations and coloring of rows. Different types of footer calculations, coloring of rows (premium) and link-to-table field improvements.

Getting started locally:

    git clone https://gitlab.com/bramw/baserow
    cd baserow
    docker-compose up
    Starting db           ... done
    Starting backend      ... done
    Starting celery       ... done
    Starting web-frontend ... done
    echo "visit http://localhost: "

Open source: easily create plugins or contribute. You don't have to spend time reinventing the wheel. We use modern tools and frameworks like Docker, Django, Vue.js and Nuxt.js so you can easily write plugins or contribute. Use our boilerplate and documentation to jumpstart your plugin.

Early premium version: Does it sound good if you could export your data directly to Excel, XML or JSON, have role-based permissions and the ability to place comments at row level? Would you like to visualize your data using a kanban or calendar view? Then the premium version might be something for you. It also includes an admin panel, signup rules, SSO login and more.

Our blog:
Release, August , by Bram Wiepjes: July release of Baserow. The July release of Baserow contains new created-on / last-updated field types, a one-click Heroku install, row comments (premium), new templates and much more!
Info, March , by Bram Wiepjes: Best Excel alternatives.
Info, May , by Bram Wiepjes: Best Airtable alternatives.

Newsletter: stay up to date with the latest developments and releases by signing up for our newsletter. © Copyright Baserow. All rights reserved.

Bethany Nowviskie

Cultural memory and the peri-pandemic library: [Late last month, I was honored to deliver the annual James E. McLeod Memorial Lecture on Higher Education at Washington...
Foreword (to the past): Congratulations to Melissa Terras and Paul Gooding on the publication of an important new collection of essays entitled Electronic Legal...
A pledge: self-examination and concrete action in the JMU Libraries: "The beauty of anti-racism is that you don't have to pretend to be free of racism to be an anti-racist....
Change us, too: [The following is a brief talk I gave at the opening plenary of RBMS, a meeting of the rare...
From the grass roots: [This is a cleaned-up version of the text from which I spoke at the conference of Research Libraries UK,...
How the light gets in: I took a chance on a hackberry bowl at a farmer's market, blue-stained and turned like a drop of water. It's...
Reconstitute the world: [What follows is the text of a talk I gave in two different contexts last week, as "Reconstitute the World:...
Spectra for speculative knowledge design: [Last weekend, I joined the inspiring, interdisciplinary Ecotopian Toolkit gathering hosted by Penn's Program in Environmental Humanities. (How lucky was I? We even got...
We raise our voices: [Crossposted statement on US administration budget proposal from the "Director's Desk" at the Digital Library Federation blog.]
Last night, the...
IV. Coda: speculative computing: [Shannon Mattern's wry observation that "speculative now seems to be the universal prefix" got me thinking about time and unpredictability, and reminded me...
Inauguration Day: January 20th has inaugurated the worst and longest case of writer's block of my life. I hate to write, under...
Open invitations: [These are unedited remarks from the closing plenary of the DLF Forum, written about minutes before it began,...
Speculative collections: [This is the text of a talk I gave last week, as "Speculative Collections and the Emancipatory Library," to close...
Alternate futures/usable pasts: [While I'm cleaning up the text of a talk I gave at Harvard's Hazen Symposium last week (see #hazenatharvard or Merrilee's...
Everywhere, every when: This is the text of a presentation I made yesterday at a wonderful Columbia University symposium called Insuetude (still ongoing),...
Capacity through care: [This is the draft of an invited contribution to a forum on "care" that will appear in Debates in the...
Hallowmas: [Trigger warning: miscarriage.] Ten years ago today, I lost the baby that might have come after my son, and not...
On capacity and care: [This is the blended and edited text of two talks I gave last week. One, titled "On Capacity and Care,"...
Supporting practice in community: [Here's a cleaned-up version of brief remarks I made in a panel discussion on "Cultivating Digital Library Professionals," at Tuesday's...
A game nonetheless: [I recently had the pleasure of responding to a creative and beautifully grounded talk by Kevin Hamilton of the University...
Open and shut: I recently collaborated on a project a little outside the ordinary for me: a case study for a chapter in...
All at once: Thirteen years ago, I was a graduate student in English literature when the Twin Towers collapsed, a fireball erupted from...
Charter-ing a path: [Cross-posted from the Re:Thinking blog at CLIR, the Council on Library and Information Resources, where I'm honored to serve as...
Speculative computing & the centers to come: [This is a short talk I prepared for a panel discussion today with Brett Bobley, Ed Ayers, and Stephen Robertson,...
Johannes Factotum & the ends of expertise: [This, more or less, is the text of a keynote talk I delivered last week in Atlanta, at the DLF Forum:...
Neatline & visualization as interpretation: [This post is re-published from an invited response to a February MediaCommons question of the week: "How can we better...
A kit for hosting Speaking in Code: [Cross-posted from the Re:Thinking blog at CLIR, the Council on Library and Information Resources, where I'm honored to be serving...
Digital humanities in the anthropocene: [Update: I've made low-res versions of my slides and an audio reading available for download on Vimeo, Alex Gil has...
Anthropocene abstract: I am deeply honored to have been invited to give a plenary lecture at this year's Digital Humanities conference, planned...
Asking for it: A report published this week by OCLC Research asks the burning question of no one, nowhere: "Does every research...

DSHR's Blog: Mining is Money Transmission (updated)

Thursday, June

Mining is Money Transmission (updated)

In "How to Start Disrupting Cryptocurrencies: 'Mining' is Money Transmission", Nicholas Weaver makes an important point that seems to have been overlooked (my emphasis):

The mining process starts with a pile of unconfirmed digital checks, cryptographically signed by the accounts' corresponding private keys (in public key cryptography, only the private key can generate a signature but anyone can verify the signature with the public key). Each miner takes all the checks and decides which ones they are going to consider.
Miners first have to make sure that each check they consider is valid and that the sending account has sufficient funds. Miners then choose from the set of valid checks the ones they want to include and collect them together in a "block."

Below the fold, I look into the implications Weaver draws from this. The main implication is that miners are providing money transmission services under US law:

The term "money transmission services" means the acceptance of currency, funds, or other value that substitutes for currency from one person and the transmission of currency, funds, or other value that substitutes for currency to another location or person by any means.

Thus, in the US, they are required to follow the anti-money laundering/know your customer (AML/KYC) rules:

Not only do the miners have to make sure checks are valid, but they also have to make numerous choices beyond this, usually focused on maximizing revenue by selecting the checks that provide the highest fee to the miner. So a miner who creates a block is explicitly making decisions about which transactions to confirm. This successful miner ... is a money transmitter.

And these miners are transmitting a lot of value:

Let us examine a single Bitcoin block, the newest block when I wrote this paragraph. In this block the miner, "F2Pool," confirmed transactions representing a notional value of $ billion. Of course many of these transactions are simply noise (the Bitcoin blockchain is notorious for transactions that do not represent real transactions), but even the "small" transactions represent several hundred dollars moving between pseudonymous numbered accounts. And each and every one of them was processed, validated, selected and recorded by this one mining pool.
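The selection process Weaver describes (validate each "check", then fill a size-limited block with the highest-fee transactions first) can be sketched as follows. The data model here is a deliberately simplified assumption, not Bitcoin's actual transaction format:

```python
# Sketch of a miner's transaction selection, as described above:
# reject checks from accounts with insufficient funds, then greedily
# fill the block with the highest fee-per-byte transactions.
# Transaction and balance structures are hypothetical simplifications.

def build_block(mempool, balances, max_block_size):
    # Validity check: sender must cover amount plus fee.
    valid = [tx for tx in mempool
             if balances.get(tx["from"], 0) >= tx["amount"] + tx["fee"]]
    # Revenue maximization: highest fee per byte first.
    valid.sort(key=lambda tx: tx["fee"] / tx["size"], reverse=True)
    block, used = [], 0
    for tx in valid:
        if used + tx["size"] <= max_block_size:
            block.append(tx)
            used += tx["size"]
    return block

mempool = [
    {"from": "alice", "amount": 100, "fee": 5, "size": 250},
    {"from": "bob",   "amount": 999, "fee": 9, "size": 250},  # insufficient funds
    {"from": "carol", "amount": 50,  "fee": 1, "size": 250},
]
balances = {"alice": 200, "bob": 10, "carol": 100}
print([tx["from"] for tx in build_block(mempool, balances, 500)])  # ['alice', 'carol']
```

Weaver's legal point rests on exactly this discretion: the miner is actively choosing which payments to transmit, not passively relaying them.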
And there is an existence proof that miners can use their freedom to choose which transactions to include in the blocks they mine to exclude transactions from unknown parties:

There is proof that one can attempt to produce a "sanctions-compliant" mining pool. Marathon Digital Holdings is a small mining pool (roughly percent of the current mining rate). During the month of May, Marathon used a risk-scoring method to select transactions, intending to create Bitcoin blocks untainted by money laundering or other criminal activity. Yet they stopped doing this because the larger Bitcoin community objects to the idea of attempting to restrict Bitcoin to legal uses!

David Gerard comments:

Nicholas Weaver points out that this completely gives the game away: miners have always been able to comply with money transmission rules, they just got away with not doing it.

In the US the AML/KYC rules are enforced by the Financial Crimes Enforcement Network (FinCEN). Most countries follow FinCEN's lead because the penalty for not doing so can be loss of access to the Western world's banking system:

This basic observation — that cryptocurrency miners, no matter the cryptocurrency itself, are money transmitters and should be treated as such — would effectively outlaw Bitcoin, Ethereum and other cryptocurrency mining in most of the world.

And some nations that generally don't follow FinCEN's model, notably Iran and China, are cracking down on Bitcoin mining because it poses both a local money-laundering threat and an obscene waste of energy.

The Chinese government's moves to shut down cryptocurrency mining are already having a significant effect. David Gerard reports:

HashCow will no longer sell mining rigs in China. Sichuan Duo Technology put its machines up for sale on WeChat. BTC.TOP, which does % of all Bitcoin mining, is suspending operations in China, and plans to mine mainly in North America.
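What a "sanctions-compliant" pool like Marathon attempted can be illustrated with a trivial risk-scoring filter over the mempool. This is a hedged sketch of the general idea only; the addresses, the scoring rule, and the threshold are all invented for illustration and bear no relation to Marathon's actual (undisclosed) method:

```python
# Illustrative sketch of risk-scored transaction selection: exclude
# from the candidate block any transaction touching a flagged address.
# Addresses, scores and threshold are hypothetical.

SANCTIONED = {"addr_mixer_1", "addr_ransom_2"}

def risk_score(tx):
    """Crude score: 1.0 if either endpoint is flagged, else 0.0."""
    return 1.0 if {tx["from"], tx["to"]} & SANCTIONED else 0.0

def compliant_subset(mempool, threshold=0.5):
    """Keep only transactions scoring below the risk threshold."""
    return [tx for tx in mempool if risk_score(tx) < threshold]

mempool = [
    {"from": "addr_user_7",   "to": "addr_shop_3"},
    {"from": "addr_ransom_2", "to": "addr_mixer_1"},
]
print(len(compliant_subset(mempool)))  # 1
```

The technical ease of this filtering is Weaver's point: nothing stops miners from complying; they simply choose not to.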
[Time] Mining rigs are for sale at – % off. Chinese miners are looking to set up elsewhere. Some are looking to Kazakhstan. [Wired] Some have an eye on Texas, a state not entirely famous for its robust grid and ability to keep the lights on in bad weather. [CNBC]

Weaver points out the entrepreneurial opportunity a collapse of the hash rate opens up:

Additionally, Bitcoin and other proof-of-work cryptocurrencies have a security weakness: the system is secure only as long as there is a lot of continuously wasted effort. If the available mining drops precipitously, this enables attackers to rewrite history (a rewriting process that, if it only removes transactions, is arguably not a money transmitter). I'm certain ransomware victims and their insurers would pay $ million to a service that would undo a $ million payment.

He concludes:

It is time to seriously disrupt the cryptocurrency ecology. Directly attacking mining as incompatible with the Bank Secrecy Act is one potentially powerful tool.

The whole post is well worth reading.

Update July th: Three days after I posted this, Nicholas Weaver co-authored a follow-up article with Bruce Schneier entitled "How to Cut Down on Ransomware Attacks Without Banning Bitcoin", which is also well worth reading. They write:

Ransomware isn't new; the idea dates back to with the "Brain" computer virus. Now it's become the criminal business model of the internet, for two reasons. The first is the realization that no one values data more than its original owner, and it makes more sense to ransom it back to them, sometimes with the added extortion of threatening to make it public, than it does to sell it to anyone else. The second is a safe way of collecting ransoms: Bitcoin.

Alas, this is already out of date. When the DarkSide gang hit Colonial Pipeline:

Colonial Pipeline paid in Bitcoin, despite that option requiring an additional percent added to the ransom.
DarkSide made a mistake in handling the roughly BTC, and Dan Goodin reported in "US Seizes $ . Million Colonial Pipeline Paid to Ransomware Attackers":

"On Monday, the US Justice Department said it had traced . of the roughly bitcoins Colonial Pipeline paid to DarkSide."

The % additional ransom was for payment in Bitcoin rather than the more anonymous Monero. The ransomware industry has learned from this not to allow payment in Bitcoin. Lawrence Abrams reports in "REvil ransomware hits ,+ companies in MSP supply-chain attack":

The ransomware gang is demanding a $ , , ransom to receive a decryptor from one of the samples. The image of the demand shows that payment in Monero is now the only option.

Nevertheless, Weaver and Schneier's argument that the ransomware industry can be disrupted by targeting exchanges is plausible:

Criminals and their victims act differently. Victims are net buyers, turning millions of dollars into Bitcoin and never going the other way. Criminals are net sellers, only turning Bitcoin into currency. The only other net sellers are the cryptocurrency miners, and they are easy to identify. Any banked exchange that cares about enforcing money laundering laws must consider all significant net sellers of cryptocurrencies as potential criminals and report them to both in-country and U.S. financial authorities. Any exchange that doesn't should have its banking forcefully cut. The U.S. Treasury can ensure these exchanges are cut out of the banking system. By designating a rogue but banked exchange, the Treasury says that it is illegal not only to do business with the exchange but for U.S. banks to do business with the exchange's bank. As a consequence, the rogue exchange would quickly find its banking options eliminated.
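Weaver and Schneier's net-buyer/net-seller distinction quoted above is essentially a flow-accounting heuristic an exchange could run over its own books. A minimal sketch, with hypothetical account names and flows (real compliance screening would of course be far more involved):

```python
# Sketch of the heuristic quoted above: at a banked exchange, ransomware
# victims are net buyers of cryptocurrency while criminals (and miners)
# are net sellers. Account names, flows and threshold are hypothetical.

def net_sellers(flows: dict, threshold: float = 0.0) -> list:
    """Flag accounts whose fiat-denominated sales exceed purchases
    by more than `threshold`. `flows` maps account -> (bought, sold)."""
    return sorted(acct for acct, (bought, sold) in flows.items()
                  if sold - bought > threshold)

flows = {
    "victim_corp": (5_000_000, 0),          # bought BTC to pay a ransom
    "ransom_gang": (0, 4_900_000),          # only ever cashes out
    "day_trader":  (1_000_000, 1_050_000),  # roughly balanced
}
print(net_sellers(flows, threshold=100_000))  # ['ransom_gang']
```

The heuristic deliberately ignores balanced traders and flags only the asymmetric cash-out pattern, which is why Weaver and Schneier argue it catches criminals and miners while leaving ordinary customers alone.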
They also agree with my suspicion that Tether has a magic money pump when they write:

While most cryptocurrencies have values that fluctuate with demand, Tether is a "stablecoin" that is supposedly backed one-to-one with dollars. Of course, it probably isn't, as its claim to be the seventh largest holder of commercial paper (short-term loans to major businesses) is blatantly untrue. Instead, they appear part of a cycle where new Tether is issued, used to buy cryptocurrencies, and the resulting cryptocurrencies now "back" Tether and drive up the price. This behavior is clearly that of a "wildcat bank," a fraudulent banking style that has long been illegal. Tether also bears a striking similarity to Liberty Reserve, an online currency that the Department of Justice successfully prosecuted for money laundering.

Shutting down Tether would have the side effect of eliminating the value proposition for the exchanges that support chain swapping, since these exchanges need a "stable" value for the speculators to trade against. I would add that, while they are correct to write:

Banning cryptocurrencies like Bitcoin is an obvious solution. But while the solution is conceptually simple, it's also impossible because — despite its overwhelming problems — there are so many legitimate interests using cryptocurrencies, albeit largely for speculation and not for legal payments.

Bitcoin is almost impossible to use directly to pay for legal goods and services; both volatility and irreversibility mean that it has to be converted into fiat before the transaction. Monero has these problems in spades, plus the problem that no banked exchange could trade it, so it has to be traded into Bitcoin before being traded into fiat. Thus any exchange account buying or selling Monero (or one of the smaller anonymous cryptocurrencies such as Zcash) falls under suspicion of crimes such as ransomware or money laundering, and should be reported.

Posted by David.
Labels: bitcoin

Comments:

David said...
An indication that Western governments are not happy with cryptocurrencies is that neither the IMF: "adoption of Bitcoin as legal tender raises a number of macroeconomic, financial and legal issues that require very careful analysis", nor the World Bank: "while the government did approach us for assistance on Bitcoin, this is not something the World Bank can support given the environmental and transparency shortcomings", approve of El Salvador's scheme to convert dollar remittances to Tethers.

David said...
Around AM this morning BTC's "price" spiked % from around $ K to around $ K. The probable cause was unconfirmed reports that renowned HODL-er Mircea Popescu had drowned off Costa Rica. The death of a HODL-er is always good news for Bitcoin, as they are likely to have taken the keys to their HODL-ings with them, which in Popescu's case are thought to amount to around % of all the Bitcoin there will ever be. Less supply = higher price, according to the tenets of Austrian economics. Anthony "Pomp" Pompliano, in a now-deleted tweet, celebrated thus: 'Mircea Popescu, a Bitcoin OG, has passed away. He likely owned quite a bit of Bitcoin. We may never know how much or if they are lost forever, but reminds me that Satoshi said: "Lost coins only make everyone else's coins worth slightly more. Think of it as a donation to everyone."'

David said...
The need for regulation of cryptocurrencies is evident from Misyrlena Egkolfopoulou and Charlie Wells' "Crypto Scammers Rip Off Billions as Pump-and-Dump Schemes Go Digital":

"It might sound like a joke, given the crypto meltdowns of late, but serious money is at stake here. Billions — real billions — are getting pilfered annually through a variety of cryptocurrency scams. The way things are going, this will only get worse. ... Nowadays crypto hustlers and star-gazers like Titan Maxamus have established a weird symbiotic relationship.
It seems to capture everything that's gone wrong with money culture, from Reddit-fueled thrill-seeking to conspiracy theorizing to predatory wheeling-dealing. The rug pull is only one play. There's also the gentler soft rug, the crypto version of getting ghosted on Hinge. And the honey pot, which functions like a trap. Old-fashioned Ponzi schemes, newly crypto-denominated, have swindled people out of billions too."

David said...
And another reason for regulation, in Mike Peterson's "Fake Apple stocks are starting to trade on various blockchain platforms":

"Synthetic versions of popular technology stocks like Apple, Tesla, and Amazon have started trading on blockchains, joining a growing pool of various crypto assets. The digital assets are engineered to reflect the prices of the stocks that they track, but no actual trading of real stocks is involved. Although sales volumes are still just a tiny percentage of trades on actual exchanges, crypto enthusiasts are excited about the potential. For proponents, it's a way to trade stock-like assets without any of the restrictions. ... Traders can exchange the synthetic stocks anonymously, hours a day, and without restrictions like "know your client" rules or capital controls. ... Of course, unregulated finance options like the synthetic tokens could soon draw the attention of enforcement agencies like the Securities and Exchange Commission. Billionaire crypto investor Mike Novogratz, for example, recently said that decentralized finance companies should start abiding by some rules soon to avoid the ire of regulators."

The whole point of permissionless blockchains is that "abiding by some rules soon" is a bug.

David said...
In "The Oncoming Ransomware Storm" Stephen Diehl continues to point to suppressing the payment channel as the way to stop the dystopian ransomware future:

"Imagine a world in which every other month you're forced to bid for your personal data back from hackers who continuously rob you. And a world where all of this is so commonplace there are automated darknet marketplaces where others can bid on your data, and every detail of your personal life is up for sale to the highest bidder. Every private text, photo, email, and password is just a digital commodity to be traded on the market. Because that's what the market demands and that's what capitalism left unchecked will provide."
lockss system has permission to collect, preserve, and serve this archival unit.

dshr's blog: alternatives to proof-of-work

i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation.

tuesday, july ,

alternatives to proof-of-work

the designers of peer-to-peer consensus protocols such as those underlying cryptocurrencies face three distinct problems. they need to prevent:

- being swamped by a multitude of sybil peers under the control of an attacker. this requires making peer participation expensive, such as by proof-of-work (pow). pow is problematic because it has a catastrophic carbon footprint.
- a rational majority of peers from conspiring to obtain inappropriate benefits. this is thought to be achieved by decentralization, that is, a network of so many peers acting independently that a conspiracy among a majority of them is highly improbable. decentralization is problematic because in practice all successful cryptocurrencies are effectively centralized.
- a rational minority of peers from conspiring to obtain inappropriate benefits. this requirement is called incentive compatibility. this is problematic because it requires very careful design of the protocol.

in the rather long post below the fold i focus on some potential alternatives to pow, inspired by jeremiah wagstaff's subspace: a solution to the farmer's dilemma, the white paper for a new blockchain technology.
careful design of the economic mechanisms of the protocol can in theory ensure incentive compatibility, or as ittay eyal and emin gün sirer express it: the best strategy of a rational minority pool is to be honest, and a minority of colluding miners cannot earn disproportionate benefits by deviating from the protocol. they showed that the bitcoin protocol was not incentive-compatible, but this is in principle amenable to a technical fix. unfortunately, ensuring decentralization is a much harder problem.

decentralization

vitalik buterin, co-founder of ethereum, wrote in the meaning of decentralization: in the case of blockchain protocols, the mathematical and economic reasoning behind the safety of the consensus often relies crucially on the uncoordinated choice model, or the assumption that the game consists of many small actors that make decisions independently. the internet's basic protocols, tcp/ip, dns, smtp, http are all decentralized, and yet the actual internet is heavily centralized around a few large companies. centralization is an emergent behavior, driven not by technical but by economic forces. w. brian arthur described these forces before the web took off in his book increasing returns and path dependence in the economy. similarly, the blockchain protocols are decentralized, but ever since the bitcoin blockchain has been centralized around - large mining pools. buterin wrote: can we really say that the uncoordinated choice model is realistic when % of the bitcoin network's mining power is well-coordinated enough to show up together at the same conference? this is perhaps the greatest among the multiple failures of satoshi nakamoto's goals for bitcoin. the economic forces driving this centralization are the same as those that centralized other internet protocols. i explored how they act to centralize p2p systems in 's economies of scale in peer-to-peer networks. i argued that an incentive-compatible protocol wasn't adequate to prevent centralization.
the simplistic version of the argument was:

- the income to a participant in an incentive-compatible p2p network should be linear in their contribution of resources to the network.
- the costs a participant incurs by contributing resources to the network will be less than linear in their resource contribution, because of the economies of scale.
- thus the proportional profit margin a participant obtains will increase with increasing resource contribution.
- thus the effects described in brian arthur's increasing returns and path dependence in the economy will apply, and the network will be dominated by a few, perhaps just one, large participant.

and i wrote: the advantages of p2p networks arise from a diverse network of small, roughly equal resource contributors. thus it seems that p2p networks which have the characteristics needed to succeed (by being widely adopted) also inevitably carry the seeds of their own failure (by becoming effectively centralized). bitcoin is an example of this. my description of the fundamental problem was: the network has to arrange not just that the reward grows more slowly than the contribution, but that it grows more slowly than the cost of the contribution to any participant. if there is even one participant whose rewards outpace their costs, brian arthur's analysis shows they will end up dominating the network. herein lies the rub. the network does not know what an individual participant's costs, or even the average participant's costs, are, and how they grow as the participant scales up their contribution. so the network would have to err on the safe side, and make rewards grow very slowly with contribution, at least above a certain minimum size. doing so would mean few if any participants above the minimum contribution, making growth dependent entirely on recruiting new participants. this would be hard because their gains from participation would be limited to the minimum reward.
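the four-step argument above can be made concrete with a small simulation. this is my illustrative sketch, not from the post; the `simulate` function, the sub-linear cost model and all the numbers are assumptions chosen only to exhibit the dynamic:

```python
# sketch of the increasing-returns argument: income is linear in contributed
# resources, cost is sub-linear (economies of scale), so larger participants
# earn higher proportional margins and, by reinvesting, come to dominate.
# all numbers are illustrative assumptions, not data.

def simulate(shares, rounds=50, price=1.0, scale_exponent=0.9):
    """shares: initial resource contributions per participant.
    returns each participant's final fraction of the network."""
    for _ in range(rounds):
        total = sum(shares)
        new = []
        for s in shares:
            income = price * s / total                   # reward linear in share
            cost = 0.5 * (s / total) ** scale_exponent   # sub-linear cost
            profit = max(income - cost, 0.0)
            new.append(s + profit * total)               # profit reinvested
        shares = new
    total = sum(shares)
    return [s / total for s in shares]

final = simulate([10, 20, 40, 80])
# the largest initial contributor ends with a larger fraction than it started
# with, and the smallest with a smaller one
assert final[-1] > 80 / 150
assert final[0] < 10 / 150
```

the `scale_exponent` below one is what encodes "costs grow less than linearly"; setting it to 1.0 makes margins equal and the shares stable, which is the point of the argument.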
it is clear that mass participation in the bitcoin network was fuelled by the (unsustainable) prospect of large gains for a small investment. the result of limiting reward growth would be a blockchain with limited expenditure on mining which, as we see with the endemic % attacks against alt-coins, would not be secure. but without such limits, economies of scale mean that the blockchain would be dominated by a few large mining pools, so would not be decentralized and would be vulnerable to insider attacks. note that in june the ghash.io mining pool alone had more than % of the bitcoin mining power. but the major current problem for bitcoin, ethereum and cryptocurrencies in general is not vulnerability to % attacks. participants in these "trustless" systems trust that the mining pools are invested in their security and will not conspire to misbehave. events have shown that this trust is misplaced as applied to smaller alt-coins. trustlessness was one of nakamoto's goals; this is another of the failures. but as regards the major cryptocurrencies this trust is plausible; everyone is making enough golden eggs to preserve the life of the goose.

alternatives to proof-of-work

the major current problem for cryptocurrencies is that their catastrophic carbon footprint has attracted attention. david gerard writes: the bit where proof-of-work mining uses a country's worth of electricity to run the most inefficient payment system in human history is finally coming to public attention, and is probably bitcoin's biggest public relations problem. normal people think of bitcoin as this dumb nerd money that nerds rip each other off with — but when they hear about proof-of-work, they get angry. externalities turn out to matter. yang xiao et al's a survey of distributed consensus protocols for blockchain networks is very useful.
they: identify five core components of a blockchain consensus protocol, namely, block proposal, block validation, information propagation, block finalization, and incentive mechanism. a wide spectrum of blockchain consensus protocols are then carefully reviewed accompanied by algorithmic abstractions and vulnerability analyses. the surveyed consensus protocols are analyzed using the five-component framework and compared with respect to different performance metrics. their "wide spectrum" is comprehensive as regards the variety of pow protocols, and as regards the varieties of proof-of-stake (pos) protocols that are the leading alternatives to pow. their coverage of other consensus protocols is less thorough, and as regards the various protocols that defend against sybil attacks by wasting storage instead of computation it is minimal. the main approach to replacing pow with something equally good at preventing sybil attacks but less good at cooking the planet has been pos, but a recent entrant using proof-of-time-and-space (i'll use potas since the acronyms others use are confusing) to waste storage has attracted considerable attention. i will discuss pos in general terms and two specific systems, chia (potas) and subspace (a hybrid of potas and pos).

proof-of-stake

in pow as implemented by nakamoto, the probability of winning the next block is proportional to the number of otherwise useless hashes computed — nakamoto thought by individual cpus but now by giant mining pools driven by warehouses full of mining asics. the idea of pos is that the resource being wasted to deter sybil attacks is the cryptocurrency itself. in order to mount a % attack the attacker would have to control more of the cryptocurrency than the loyal peers. in vanilla pos the probability of winning the next block is proportional to the amount of the cryptocurrency "staked", i.e. effectively escrowed and placed at risk of being "slashed" if the majority concludes that the peer has misbehaved.
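the "probability proportional to stake" rule of vanilla pos can be sketched as follows. this is an illustrative toy, not any real chain's implementation; `select_proposer`, its seed scheme and the stake figures are my inventions:

```python
# toy stake-weighted leader election: the chance of proposing the next block
# is proportional to the amount staked. deterministic given a per-slot seed.

import hashlib

def select_proposer(stakes, seed):
    """stakes: dict peer -> staked amount. returns the slot's proposer."""
    total = sum(stakes.values())
    # derive a pseudo-random point in [0, total) from the seed
    digest = hashlib.sha256(seed.encode()).digest()
    point = int.from_bytes(digest, "big") % 10**9 / 10**9 * total
    cumulative = 0.0
    for peer, stake in sorted(stakes.items()):
        cumulative += stake
        if point < cumulative:
            return peer
    return peer  # numerical edge case: last peer

stakes = {"alice": 60, "bob": 30, "carol": 10}
wins = {p: 0 for p in stakes}
for slot in range(10_000):
    wins[select_proposer(stakes, f"slot-{slot}")] += 1
# alice, with 60% of the stake, wins roughly 60% of slots
assert wins["alice"] > wins["bob"] > wins["carol"]
```

note that in a real system the seed itself must come from consensus, which is exactly where the stake-grinding attacks discussed later get their leverage.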
it appears to have been first proposed in by bitcointalk user quantummechanic. the first cryptocurrency to use pos, albeit as a hybrid with pow, was peercoin in . there have been a number of pure pos cryptocurrencies since, including cardano from and algorand from but none have been very successful. ethereum, the second most important cryptocurrency, understood the need to replace pow in and started work in . but as vitalik buterin then wrote: over the last few months we have become more and more convinced that some inclusion of proof of stake is a necessary component for long-term sustainability; however, actually implementing a proof of stake algorithm that is effective is proving to be surprisingly complex. the fact that ethereum includes a turing-complete contracting system complicates things further, as it makes certain kinds of collusion much easier without requiring trust, and creates a large pool of stake in the hands of decentralized entities that have the incentive to vote with the stake to collect rewards, but which are too stupid to tell good blockchains from bad. buterin was right about making "certain kinds of collusion much easier without requiring trust". in on-chain vote buying and the rise of dark daos philip daian and co-authors show that "smart contracts" provide for untraceable on-chain collusion in which the parties are mutually pseudonymous. it is obviously much harder to prevent bad behavior in a turing-complete environment. seven years later ethereum is still working on the transition, which they currently don't expect to be complete for another months: shocked to see that the timeline for ethereum moving to eth and getting off proof-of-work mining has been put back to late … about months from now. this is mostly from delays in getting sharding to work properly. vitalik buterin says that this is because the ethereum team isn’t working well together. 
[tokenist] skepticism about the schedule for eth2 is well-warranted, as julia magas writes in when will ethereum . fully launch? roadmap promises speed, but history says otherwise: looking at how fast the relevant updates were implemented in the previous versions of ethereum roadmaps, it turns out that the planned and real release dates are about a year apart, at the very minimum. are there other reasons why pos is so hard to implement safely? bram cohen's talk at stanford included a critique of pos:

- its threat model is weaker than proof of work. just as proof of work is in practice centralized around large mining pools, proof of stake is centralized around large currency holdings (which were probably acquired much more cheaply than large mining installations).
- the choice of a quorum size is problematic. "too small and it's attackable. too large and nothing happens." and "unfortunately, those values are likely to be on the wrong side of each other in practice."
- incentivizing peers to put their holdings at stake creates a class of attacks in which peers "exaggerate one's own bonding and blocking it from others."
- slashing introduces a class of attacks in which peers cause others to be fraudulently slashed.
- the incentives need to be strong enough to overcome the risks of slashing, and of keeping their signing keys accessible and thus at risk of compromise. "defending against those attacks can lead to situations where the system gets wedged because a split happened and nobody wants to take one for the team"

cohen seriously under-played pos's centralization problem. it isn't just that the gini coefficients of cryptocurrencies are extremely high, but that this is a self-reinforcing problem. because the rewards for mining new blocks, and the fees for including transactions in blocks, flow to the hodl-ers in proportion to their hodl-ings, whatever gini coefficient the system starts out with will always increase.
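the ratchet can be demonstrated with a toy simulation. note one added assumption of mine, not from the post: holders below a hypothetical staking minimum earn nothing, which is one concrete mechanism by which the distribution skews. under that assumption the gini coefficient never decreases:

```python
# toy model of the pos wealth "ratchet": rewards flow pro-rata to staked
# holdings, and small holders below an (assumed) staking minimum earn
# nothing, so inequality ratchets upward. numbers are illustrative.

def gini(xs):
    """gini coefficient of a list of non-negative holdings."""
    xs = sorted(xs)
    n = len(xs)
    total = sum(xs)
    # standard formula with 1-indexed ranks of the sorted values
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n

holdings = [1.0, 2.0, 4.0, 8.0, 85.0]
MIN_STAKE = 4.0  # hypothetical staking minimum
g = gini(holdings)
for _ in range(100):
    staked = [h if h >= MIN_STAKE else 0.0 for h in holdings]
    reward = 0.05 * sum(holdings)  # new issuance, paid pro-rata to stakers
    total_staked = sum(staked)
    holdings = [h + reward * s / total_staked for h, s in zip(holdings, staked)]
    assert gini(holdings) >= g - 1e-12  # the gini coefficient never decreases
    g = gini(holdings)
```

with every holder staking everything, the distribution merely scales and the gini coefficient stays constant; any friction that keeps small holders out of staking turns "constant" into "increasing".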
as i wrote, cryptocurrencies are: a mechanism for transferring wealth from later adopters, called suckers, to early adopters, called geniuses. pos makes this "ratchet" mechanism much stronger than pow, and thus renders them much more vulnerable to insider % attacks. i discussed one such high-profile attack by justin sun of tron on the steemit blockchain in proof-of-stake in practice: one week later, on march nd, tron arranged for exchanges, including huobi, binance and poloniex, to stake tokens they held on behalf of their customers in a % attack: according to the list of accounts powered up on march. , the three exchanges collectively put in over million steem power (sp). with an overwhelming amount of stake, the steemit team was then able to unilaterally implement hard fork . to regain their stake and vote out all top community witnesses – server operators responsible for block production – using account @dev as a proxy. in the current list of steem witnesses, steemit and tron's own witnesses took up the first slots. although this attack didn't provide tron with an immediate monetary reward, the long-term value of retaining effective control of the blockchain was vastly greater than the cost of staking the tokens. i've been pointing out since 's why decentralize? that the high gini coefficients of cryptocurrencies mean proof-of-stake centralizes control of the blockchain in the hands of the whales. that post quoted vitalik buterin pointing out that a realistic scenario was: in a proof of stake blockchain, % of the coins at stake are held at one exchange. or in this case three exchanges cooperating. note that economic analyses of pos, such as more (or less) economic limits of the blockchain by joshua gans and neil gandal, assume economically rational actors care about the illiquidity of staked coins and the foregone interest. but true believers in "number go up" have a long-term perspective similar to sun's. the eventual progress of their coin "to the moon!"
means that temporary, short-term costs are irrelevant to long-term hodl-ers. jude c. nelson amplifies the centralization point: pow is open-membership, because the means of coin production are not tied to owning coins already. all you need to contribute is computing power, and you can start earning coins at a profit. pos is closed-membership with a veneer of open-membership, because the means of coin production are tied to owning a coin already. what this means in practice is that no rational coin-owner is going to sell you coins at a fast enough rate that you'll be able to increase your means of coin production. put another way, the price you'd pay for the increased means of coin production will meet or exceed the total expected revenue created by staking those coins over their lifetime. so unless you know something the seller doesn't, you won't be able to profit by buying your way into staking. overall, this makes pos less resilient and less egalitarian than pow. while both require an up-front capital expenditure, the expenditure for pos coin-production will meet or exceed the total expected revenue of those coins at the point of sale. so, the system is only as resilient as the nodes run by the people who bought in initially, and the only way to join later is to buy coins from people who want to exit (which would only be viable if these folks believed the coins are worth less than what you're buying them for, which doesn't bode well for you as the buyer). nelson continues: pow requires less proactive trust and coordination between community members than pos -- and thus is better able to recover from both liveness and safety failures -- precisely because it both (1) provides a computational method for ranking fork quality, and (2) allows anyone to participate in producing a fork at any time.
if the canonical chain is %-attacked, and the attack eventually subsides, then the canonical chain can eventually be re-established in-band by honest miners simply continuing to work on the non-attacker chain. in pos, block-producers have no such protocol -- such a protocol cannot exist because to the rest of the network, it looks like the honest nodes have been slashed for being dishonest. any recovery procedure necessarily includes block-producers having to go around and convince people out-of-band that they were totally not dishonest, and were slashed due to a "hack" (and, since there's lots of money on the line, who knows if they're being honest about this?). pos conforms to mark : : for he that hath, to him shall be given: and he that hath not, from him shall be taken even that which he hath. in section vi(e) yang xiao et al identify the following types of vulnerability in pos systems:

costless simulation: literally means any player can simulate any segment of blockchain history at the cost of no real work but speculation, as pos does not incur intensive computation while the blockchain records all staking history. this may give attackers shortcuts to fabricate an alternative blockchain. it is the basis for attacks through .

nothing at stake: unlike a pow miner, a pos minter needs little extra effort to validate transactions and generate blocks on multiple competing chains simultaneously. this "multi-bet" strategy makes economical sense to pos nodes because by doing so they can avoid the opportunity cost of sticking to any single chain. consequently if a significant fraction of nodes perform the "multi-bet" strategy, an attacker holding far less than % of tokens can mount a successful double spending attack. the defense against this attack is usually "slashing", forfeiting the stake of miners detected on multiple competing chains. but slashing, as cohen and nelson point out, is in itself a consensus problem.
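a toy expected-value model (my illustration, not from the survey) makes the "multi-bet" logic explicit: because pos block generation is costless, betting on every fork weakly dominates committing to one, while a pow miner must split finite hashpower:

```python
# toy model of nothing-at-stake: a costless pos minter can validate on
# every competing fork and collect the reward on whichever wins; a pow
# miner's reward share scales with the hashpower it put on the winner.
# probabilities and rewards are illustrative assumptions.

def pos_expected_reward(fork_probs, reward=1.0):
    # costless multi-bet: present on every fork, so always on the winner
    return reward * sum(fork_probs)  # = reward, since the probs sum to 1

def pow_expected_reward(fork_probs, hash_split, reward=1.0):
    # hashpower split across forks; only the share on the winner pays off
    return reward * sum(p * h for p, h in zip(fork_probs, hash_split))

forks = [0.7, 0.3]
assert pos_expected_reward(forks) == 1.0
# no split of hashpower matches the costless multi-bet payoff
best_pow = max(pow_expected_reward(forks, [h, 1 - h])
               for h in [i / 100 for i in range(101)])
assert best_pow < pos_expected_reward(forks)
```

this is exactly why the defense has to make multi-betting costly after the fact, i.e. slashing, with the attendant problems described above.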
posterior corruption: the key enabler of posterior corruption is the public availability of staking history on the blockchain, which includes stakeholder addresses and staking amounts. an attacker can attempt to corrupt the stakeholders who once possessed substantial stakes but little at present by promising them rewards after growing an alternative chain with altered transaction history (we call it a "malicious chain"). when there are enough stakeholders corrupted, the colluding group (attacker and corrupted once-rich stakeholders) could own a significant portion of tokens (possibly more than %) at some point in history, from which they are able to grow a malicious chain that will eventually surpass the current main chain. the defense is key-evolving cryptography, which ensures that past signatures cannot be forged by future private keys.

long-range attack: as introduced by buterin, this foresees that a small group of colluding attackers can regrow a longer valid chain that starts not long after the genesis block. because there were likely only a few stakeholders and a lack of competition at the nascent stage of the blockchain, the attackers can grow the malicious chain very fast and redo all the pos blocks (i.e. by costless simulation) while claiming all the historical block rewards. evangelos deirmentzoglou et al's a survey on long-range attacks for proof of stake protocols provides a useful review of these attacks. even if there are no block rewards, only fees, a variant long-range attack is possible as described in stake-bleeding attacks on proof-of-stake blockchains by peter gazi et al, and by shijie zhang and jong-hyouk lee in eclipse-based stake-bleeding attacks in pos blockchain systems.

stake-grinding attack: unlike pow in which pseudo-randomness is guaranteed by the brute-force use of a cryptographic hash function, pos's pseudo-randomness is influenced by extra blockchain information—the staking history.
malicious pos minters may take advantage of costless simulation and other staking-related mechanisms to bias the randomness of pos in their own favor, thus achieving higher winning probabilities compared to their stake amounts.

centralization risk: as discussed above, in pos the minters can lawfully reinvest their profits into staking perpetually, which allows the one with a large sum of unused tokens to become wealthier and eventually reach a monopoly status. when a player owns more than % of tokens in circulation, the consensus process will be dominated by this player and the system integrity will not be guaranteed. there are a number of papers on this problem, including staking pool centralization in proof-of-stake blockchain network by ping he et al, compounding of wealth in proof-of-stake cryptocurrencies by giulia fanti et al, and stake shift in major cryptocurrencies: an empirical study by rainer stütz et al. but to my mind none of them suggest a realistic mitigation.

these are not the only problems from which pos suffers. two more are:

checkpointing: long-range and related attacks are capable of rewriting almost the entire chain. to mitigate this, pos systems can arrange for consensus on checkpoints, blocks which are subsequently regarded as canonical, forcing any rewriting to start no earlier than the following block. winkle – decentralised checkpointing for proof-of-stake is: a decentralised checkpointing mechanism operated by coin holders, whose keys are harder to compromise than validators' as they are more numerous. by analogy, in bitcoin, taking control of one-third of the total supply of money would require at least keys, whereas only mining pools control more than half of the hash power. it is important that consensus on checkpoints is achieved through a different mechanism than consensus on blocks.
to over-simplify, winkle piggy-backs votes for checkpoints on transactions; a transaction votes for a block with the number of coins remaining in the sending account, and with the number sent to the receiving account. a checkpoint is final once a set proportion of the coins have voted for it. for the details, see winkle: foiling long-range attacks in proof-of-stake systems by sarah azouvi et al.

lending: in competitive equilibria between staking and on-chain lending, tarun chitra demonstrates that it is: possible for on-chain lending smart contracts to cannibalize network security in pos systems. when the yield provided by these contracts is more attractive than the inflation rate provided from staking, stakers will tend to remove their staked tokens and lend them out, thus reducing network security. ... our results illustrate that rational, non-adversarial actors can dramatically reduce pos network security if block rewards are not calibrated appropriately above the expected yields of on-chain lending. i believe this is part of a fundamental problem for pos. the token used to prevent a single attacker appearing as a multitude of independent peers can be lent, and thus the attacker can borrow a temporary majority of the stake cheaply, for only a short-term interest payment. preventing this increases implementation complexity significantly.

in summary, despite pos' potential for greatly reducing pow's environmental impact and cost of defending against sybil attacks, it has a major disadvantage: it is significantly more complex and thus its attack surface is much larger, especially when combined with a turing-complete execution environment such as ethereum's. it therefore needs more defense mechanisms, which increase complexity further. buterin and the ethereum developers realize the complexity of the implementation task they face, which is why their responsible approach is taking so long.
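returning to the lending point, a toy cost comparison (every figure below is a pure assumption of mine) shows why borrowable stake is so dangerous: the attacker pays only short-term interest on a majority stake, not its purchase price:

```python
# toy comparison of two ways to obtain a majority of the stake for an
# attack: buy it outright, or borrow it for the attack window and pay
# only the interest. all inputs are illustrative assumptions.

def cost_to_buy(total_stake, price):
    # acquiring just over half the stake outright
    return 0.51 * total_stake * price

def cost_to_borrow(total_stake, price, annual_rate, attack_days):
    # paying interest on the same stake for a short attack window
    principal = 0.51 * total_stake * price
    return principal * annual_rate * attack_days / 365

buy = cost_to_buy(total_stake=1_000_000, price=10.0)
borrow = cost_to_borrow(1_000_000, 10.0, annual_rate=0.10, attack_days=7)
assert borrow < buy / 100  # orders of magnitude cheaper than buying
```

the toy ignores the effect of such borrowing on the lending rate and the token price, but the gap is so large that the qualitative point survives.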
currently ethereum is the only realistic candidate to displace bitcoin, and thus reduce cryptocurrencies' carbon footprint, so the difficulty of an industrial-strength implementation of pos for ethereum . is a major problem.

proof-of-space-and-time

back in i wrote about bram cohen's potas system, chia, in proofs of space and chia network. instead of wasting computation to prevent sybil attacks, chia wastes storage. chia's "space farmers" create and store "plots" consisting of large amounts of otherwise useless data. the technical details are described in chia consensus. they are comprehensive and impressively well thought out. because, like bitcoin, chia is wasting a real resource to defend against sybil attacks it lacks many of pos' vulnerabilities. nevertheless, the chia protocol is significantly more complex than bitcoin and thus likely to possess additional vulnerabilities. for example, whereas in bitcoin there is only one role for participants, mining, the chia protocol involves three roles:

- farmer: "farmers are nodes which participate in the consensus algorithm by storing plots and checking them for proofs of space."
- timelord: "timelords are nodes which participate in the consensus algorithm by creating proofs of time".
- full node: which involves "broadcasting proofs of space and time, creating blocks, maintaining a mempool of pending transactions, storing the historical blockchain, and uploading blocks to other full nodes as well as wallets (light clients)."

another added complexity is that the chia protocol maintains three chains (challenge, reward and foliage), plus an evanescent chain during each "slot" (think bitcoin's block time), as shown in the document's figure . the document therefore includes a range of attacks and their mitigations which are of considerable technical interest.
cohen's praiseworthy objective for chia was to avoid the massive power waste of pow because: "you have this thing where mass storage medium you can set a bit and leave it there until the end of time and it's not costing you any more power. dram is costing you power when it's just sitting there doing nothing". alas, cohen was exaggerating: a state-of-the-art disk drive, such as seagate's tb barracuda pro, consumes about w spun-down in standby mode, about w spun-up idle and about w doing random k reads. which is what it would be doing much of the time while "space farming". clearly, potas uses energy, just much less than pow. reporting on cohen's talk at stanford i summarized: cohen's vision is of a posp/vdf network comprising large numbers of desktop pcs, continuously connected and powered up, each with one, or at most a few, half-empty hard drives. the drives would have been purchased at retail a few years ago. my main criticism in those posts was cohen's naiveté about storage technology, the storage market and economies of scale: there would appear to be three possible kinds of participants in a pool:

- individuals using the spare space in their desktop pc's disk. the storage for the proof of space is effectively "free", but unless these miners joined pools, they would be unlikely to get a reward in the life of the disk.
- individuals buying systems with cpu, ram and disk solely for mining. the disruption to the user's experience is gone, but now the whole cost of mining has to be covered by the rewards. to smooth out their income, these miners would join pools.
- investors in data-center scale mining pools. economies of scale would mean that these participants would see better profits for less hassle than the individuals buying systems, so these investor pools would come to dominate the network, replicating the bitcoin pool centralization.

thus if chia's network were to become successful, mining would be dominated by a few large pools.
each pool would run a vdf server to which the pool's participants would submit their proofs of space, so that the pool manager could verify their contribution to the pool. the emergence of pools, and dominance of a small number of pools, has nothing to do with the particular consensus mechanism in use. thus i am skeptical that alternatives to proof of work will significantly reduce centralization of mining in blockchains generally, and in chia network's blockchain specifically. as i was writing the first of these posts, techcrunch reported: chia has just raised a $ . million seed round led by angellist's naval ravikant and joined by andreessen horowitz, greylock and more. the money will help the startup build out its chia coin and blockchain powered by proofs of space and time instead of bitcoin's energy-sucking proofs of work, which it plans to launch in q . even in the naiveté persisted, as chia pitched the idea that space farming on a raspberry pi was a way to make money. it still persists, as chia's president reportedly claims that "recyclable hard drives are entering the marketplace". but when chia coin actually started trading in early may the reality was nothing like cohen's vision: as everyone predicted, the immediate effect was to create a massive shortage of the ssds needed to create plots, and the hard drives needed to store them. even gene hoffman, chia's ceo, admitted as much in bitcoin rival chia 'destroyed' hard disc supply chains, says its boss: chia, a cryptocurrency intended to be a "green" alternative to bitcoin has instead caused a global shortage of hard discs. gene hoffman, the president of chia network, the company behind the currency, admits that "we've kind of destroyed the short-term supply chain", but he denies it will become an environmental drain.
the result of the spike in storage prices was a rise in the vendors' stock: the share price of hard disc maker western digital has increased from $ at the start of the year to $ , while competitor seagate is up from $ to $ over the same period. to give you some idea of how rapidly chia has consumed storage in the two months since launch, it is around % of the rate at which the entire industry produced hard disks in .

chia pools

mining pools arose. as i write the network is storing . eb of otherwise useless data, of which one pool, ihpool.com, is managing . eb, or . %. unlike bitcoin, the next two pools are much smaller, but large enough that the top four pools have % of the space. the network is slightly more decentralized than bitcoin has been since , and for reasons discussed below is less vulnerable to an insider % attack.

chia "price"

the "price" of chia coin collapsed, from $ . at the start of trading to $ . sunday, before soaring to $ . as i write. each circulating xch corresponds to about tb. the investment in "space farming" hardware vastly outweighs, by nearly six times, the market cap of the cryptocurrency it is supporting. the "space farmers" are earning $ . m/day, or about $ /tb/year. a tb internal drive is currently about $ on amazon, so it will be about months before it earns a profit. the drive is only warranted for years. but note that the warranty is limited: supports up to tb/yr workload rate. workload rate is defined as the amount of user data transferred to or from the hard drive. using the drive for "space farming" would likely void the warranty and, just as pow does to gpus, burn out the drive long before its warranted life. if it lasts two years, the $ investment theoretically earns a % return before power and other costs. but the hard drive isn't the only cost of space farming. in order to become a "space farmer" in the first place you need to create plots containing many gigabytes of otherwise useless cryptographically-generated data.
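the payback arithmetic above can be sketched as follows. the drive price, capacity and $/tb/year revenue below are hypothetical placeholders of my own, not the elided figures from the post:

```ruby
# hedged sketch: payback time for a "space farming" hard drive.
# all three inputs are assumed placeholder values.
drive_price         = 300.0  # $ for an internal drive (assumed)
capacity_tb         = 18.0   # drive capacity in tb (assumed)
revenue_per_tb_year = 2.5    # farming revenue in $/tb/year (assumed)

annual_revenue = capacity_tb * revenue_per_tb_year
payback_months = drive_price / annual_revenue * 12.0
```

with these placeholders the drive takes several years to pay for itself, longer than a typical warranty period, which is the post's point: the warranty, and likely the drive, runs out before the profit arrives.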
you need lots of them; the probability of winning your share of the $ . m/day is proportional to the fraction of the nearly eb that you can generate and store. the eb is growing rapidly, so the quicker you can generate the plots, the better your chance in the near term. to do so in finite time you need, in addition to the hard drive, a large ssd at extra cost. using it for plotting will void its warranty and burn it out in as little as six weeks. and you need a powerful server running flat-out to do the cryptography, which both casts doubt on how much less power than pow chia really uses, and increases the payback time significantly. in my first chia post i predicted that "space farming" would be dominated by huge data centers such as amazon's. sure enough, wolfie zhao reported on may th that: technology giant amazon has rolled out a solution dedicated to chia crypto mining on its aws cloud computing platform. according to a campaign page on the amazon aws chinese site, the platform touts that users can deploy a cloud-based storage system in as quickly as five minutes in order to mine xch, the native cryptocurrency on the chia network. two weeks later david gerard reported that: the page disappeared in short order — but an archive exists. because chia mining trashes the drives, something else i pointed out in my first chia post, storage services are banning users who think that renting something is a license to destroy it. in any case, tb of amazon's s3 reduced redundancy storage costs $ . /day, so it would be hard to make ends meet. cheaper storage services, such as wasabi at $ . /day, are at considerable risk from chia. although this isn't an immediate effect, as david gerard writes, because creating chia plots wears out ssds, and chia farming wears out hard disks: chia produces vast quantities of e-waste—rare metals, assembled into expensive computing components, turned into toxic near-unrecyclable landfill within weeks.
miners are incentivized to join pools because they prefer a relatively predictable, frequent flow of small rewards to very infrequent large rewards. the way pools work in bitcoin and related protocols is that the pool decides what transactions are in the block it hopes to mine, and gets all the pool participants to work on that block. thus a pool, or a conspiracy among pools, that had % of the mining power would have effective control over the transactions that were finalized. because they make the decision as to which transactions happen, nicholas weaver argues that mining pools are money transmitters and thus subject to the aml/kyc rules. but in chia pools work differently: first and foremost, even when a winning farmer is using a pool, they themselves are the ones who make the transaction block - not the pool. the decentralization benefits of this policy are obvious. the potential future downside is that while bitcoin miners in a pool can argue that aml/kyc is the responsibility of the pool, chia farmers would be responsible for enforcing the aml/kyc rules and subject to bank-sized penalties for failing to do so. in bitcoin the winning pool receives and distributes both the block reward and the (currently much smaller) transaction fees. over time the bitcoin block reward is due to go to zero and the system is intended to survive on fees alone. alas, research has shown that a fee-only bitcoin system is insecure. chia does things differently in two ways. first: all the transaction fees generated by a block go to the farmer who found it and not to the pool. trying to split the transaction fees with the pool could result in transaction fees being paid ‘under the table’ either by making them go directly to the farmer or making an anyone can spend output which the farmer would then pay to themselves. circumventing the pool would take up space on the blockchain. 
it could also encourage the emergence of alternative pooling protocols where the pool makes the transaction block, which is a form of centralization we wish to avoid. the basic argument is that in bitcoin the % conspiracy is n pools where in chia it is m farmers (m ≫ n). chia are confident that this is safe: this ensures that even if a pool has % netspace, they would also need to control all of the farmer nodes (with the % netspace) to do any malicious activity. this will be very difficult unless all the farmers (with the % netspace) downloaded the same malicious chia client programmed by a bram-like level genius. i'm a bit less confident because, like ethereum, chia has a turing-complete programming environment. in on-chain vote buying and the rise of dark daos philip daian and co-authors showed that "smart contracts" provide for untraceable on-chain collusion in which the parties are mutually pseudonymous. although their conspiracies were much smaller, similar techniques might be the basis for larger attacks on blockchains with "smart contracts". second: this method has the downside of reducing the smoothing benefits of pools if transaction fees come to dominate fixed block rewards. that's never been a major issue in bitcoin and our block reward schedule is set to only halve three times and continue at a fixed amount forever after. there will always be block rewards to pay to the pool while transaction fees go to the individual farmers. so unlike the austrian economics of bitcoin, chia plans to reward farming by inflating the currency indefinitely, never depending wholly on fees. in bitcoin the pool takes the whole block reward, but in chia the way block rewards work is different too: fixed block rewards are set to go / to the pool and / to the farmer.
this seems to be a sweet spot where it doesn’t reduce smoothing all that much but also wipes out potential selfish mining attacks where someone joins a competing pool and takes their partials but doesn’t upload actual blocks when they find them. those sort of attacks can become profitable when the fraction of the split is smaller than the size of the pool relative to the whole system. last i checked ihpool.com had almost % of the total system. rational economics are not in play here. "space farming" makes sense only at scale or for the most dedicated believers in "number go up". others are less than happy: so i tested this chia thing overnight. gave it gb plot and two cpu threads. after hours it consumed gb temp space, didn’t sync yet, cpu usage is always %+. estimated reward time is months. this isn’t green, already being centralised on large waste producing servers. the problem for the "number go up" believers is that the "size go up" too, by about half-an-exabyte a day. as the network grows, the chance that your investment in hardware will earn a reward goes down because it represents a smaller proportion of the total. unless "number go up" much faster than "size go up", your investment is depreciating rapidly not just because you are burning it out but because its cost-effectiveness is decaying. and as we see, "size go up" rapidly but "number go down" rapidly. and economies of scale mean that return on investment in hardware will go up significantly with the proportion of the total the farmer has. so the little guy gets the short end of the stick even if they are in a pool. chia's technology is extremely clever, but the economics of the system that results in the real world don't pass the laugh test. 
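the profitability condition in the quote above can be written as a one-line predicate: withholding blocks from a competing pool pays when the farmer's fixed fraction of the reward split is smaller than the attacked pool's share of total netspace.

```ruby
# hedged sketch of the selfish-mining profitability condition quoted
# above. both arguments are fractions between 0 and 1.
def withholding_profitable?(farmer_split, pool_share_of_netspace)
  farmer_split < pool_share_of_netspace
end
```

for instance, if the farmer's split were one eighth (a placeholder, not the post's elided figure), withholding would pay against any pool holding more than an eighth of netspace, which is why a dominant pool like ihpool.com matters.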
chia is using nearly a billion dollars of equipment being paid for by inflating the currency at a rate of currently / billion dollars a year to process transactions at a rate around five billion dollars a year, a task that could probably be done using a conventional database and a raspberry pi. the only reason for this profligacy is to be able to claim that it is "decentralized". it is more decentralized than pow or pos systems, but over time economies of scale and free entry will drive the reward for farming in fiat terms down and mean that small-scale farmers will be squeezed out. the chia "price" chart suggests that it might have been a "list-and-dump" scheme, in which a16z and the other vcs incentivized the miners to mine and the exchanges to list the new cryptocurrency so that the vcs could dump their hodl-ings on the muppets seduced by the hype and escape with a profit. note that a16z just raised a $ . b fund dedicated to pouring money into similar schemes. this is enough to fund chia-sized ventures! (david gerard aptly calls andreessen horowitz "the softbank of crypto".) they wouldn't do that unless they were making big bucks from at least some of the ones they funded earlier. chia's sensitivity about their pr led them to hurl bogus legal threats at the leading chia community blog. neither is a good look.

subspace

as we see, the chia network has one huge pool and a number of relatively minuscule pools. in "subspace: a solution to the farmer's dilemma", wagstaff describes the "farmer's dilemma" thus: observe that in any poc blockchain a farmer is, by definition, incentivized to allocate as much of its scarce storage resources as possible towards consensus. contrast this with the desire for all full nodes to reserve storage for maintaining both the current state and history of the blockchain.
these competing requirements pose a challenge to farmers: do they adhere to the desired behavior, retaining the state and history, or do they seek to maximize their own rewards, instead dedicating all available space towards consensus? when faced with this farmer’s dilemma rational farmers will always choose the latter, effectively becoming light clients, while degrading both the security and decentralization of the network. this implies that any poc blockchain would eventually consolidate into a single large farming pool, with even greater speed than has been previously observed with pow and pos chains. subspace proposes to resolve this using a hybrid of pos and potas: we instead clearly distinguish between a permissionless farming mechanism for block production and permissioned staking mechanism for block finalization. wagstaff describes it thus: to prevent farmers from discarding the history, we construct a novel poc consensus protocol based on proofs-of-storage of the history of the blockchain itself, in which each farmer stores as many provably-unique replicas of the chain history as their disk space allows. to ensure the history remains available, farmers form a decentralized storage network, which allows the history to remain fully-recoverable, load-balanced, and efficiently-retrievable. to relieve farmers of the burden of maintaining the state and preforming [sic] redundant computation, we apply the classic technique in distributed systems of decoupling consensus and computation. farmers are then solely responsible for the ordering of transactions, while a separate class of executor nodes maintain the state and compute the transitions for each new block. to ensure executors remain accountable for their actions, we employ a system of staked deposits, verifiable computation, and non-interactive fraud proofs. separating consensus (potas) and computation (pos) has interesting effects: like chia, the only function of pools is to smooth out farmer's rewards. 
they do not compose the blocks. pools will compete on their fees. economies of scale mean that the larger the pool, the lower the fees it can charge. so, just like chia, subspace will end up with one, or only a few, large pools. like chia, if they can find a proof, farmers assemble transactions into a block which they can submit to executors for finalization. subspace shares with chia the property that a % attack requires m farmers not n pools (m ≫ n), assuming of course no supply chain attack or abuse of "smart contracts". subspace uses a lockss-like technique of electing a random subset of executors for each finalization. because any participant can unambiguously detect fraudulent execution, and thus that the finalization of a block is fraudulent, the opportunity for bad behavior by executors is highly constrained. a conspiracy of executors has to hope that no honest executor is elected. like chia, the technology is extremely clever but there are interesting economic aspects. as regards farmers, wagstaff writes: to ensure the history does not grow beyond total network storage capacity, we modify the transaction fee mechanism such that it dynamically adjusts in response to the replication factor. recall that in bitcoin, the base fee rate is a function of the size of the transaction in bytes, not the amount of btc being transferred. we extend this equation by including a multiplier, derived from the replication factor. this establishes a mandatory minimum fee for each transaction, which reflects its perpetual storage cost. the multiplier is recalculated each epoch, from the estimated network storage and the current size of the history. the higher the replication factor, the cheaper the cost of storage per byte. as the replication factor approaches one, the cost of storage asymptotically approaches infinity. as the replication factor decreases, transaction fees will rise, making farming more profitable, and in-turn attracting more capacity to the network.
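the paper does not (in the excerpt above) give the multiplier's formula. a sketch of one plausible functional form, my assumption and not wagstaff's, chosen only because it matches the two stated limits (the multiplier blows up as the replication factor approaches one, and falls as replication grows):

```ruby
# hedged sketch of a replication-factor fee multiplier.
# the form m(r) = r / (r - 1) is an assumption, not the paper's formula;
# it diverges as r -> 1 and approaches 1 as r grows large.
def fee_multiplier(replication_factor)
  raise ArgumentError, "replication factor must exceed 1" if replication_factor <= 1.0
  replication_factor / (replication_factor - 1.0)
end
```

any function with the same monotonicity and pole at one would exhibit the behavior the quote describes: fees rise as replication falls, attracting more capacity.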
this allows the cost of storage to reach an equilibrium price as a function of the supply of, and demand for, space. there are some issues here: the assumption that the market for fees can determine the "perpetual storage cost" is problematic. as i first showed back in , the endowment needed for "perpetual storage" depends very strongly on two factors that are inherently unpredictable: the future rate of decrease of media cost in $/byte (the kryder rate), and the future interest rate. the invisible hand of the market for transaction fees cannot know these; it only knows the current cost of storage. nor can subspace management know them, to set the "mandatory minimum fee". thus it is likely that fees will significantly under-estimate the "perpetual storage cost", leading to problems down the road. the assumption that those wishing to transact will be prepared to pay at least the "mandatory minimum fee" is suspect. cryptocurrency fees are notoriously volatile because they are based on a blind auction; when no-one wants to transact a "mandatory minimum fee" would be a deterrent, when everyone wants to transact fees are unaffordable. research has shown that if fees dominate block rewards systems become unstable. wagstaff's paper doesn't seem to describe how block rewards work; i assume that they go to the individual farmer or are shared via a pool for smoother cash flow. i couldn't see from the paper whether, like chia, subspace intends to avoid depending upon fees. as regards executors: for each new block, a small constant number of executors are chosen through a stake-weighted election. anyone may participate in execution by syncing the state and placing a small deposit. but the chance that they will be elected and gain the reward for finalizing a block and generating an execution receipt (er) depends upon how much they stake.
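the endowment argument can be made concrete with a sketch. the structure, a discounted sum of an annual storage cost stream that declines at the kryder rate, follows the argument above; the rates in the usage example are placeholders, and small changes to either rate move the required endowment a lot, which is the point:

```ruby
# hedged sketch of the "perpetual storage" endowment: the present value
# of an annual storage cost that declines at the kryder rate, discounted
# at the interest rate. a long horizon stands in for "perpetual".
def endowment(initial_annual_cost, kryder_rate, interest_rate, years = 100)
  (0...years).sum do |t|
    cost = initial_annual_cost * (1.0 - kryder_rate)**t
    cost / (1.0 + interest_rate)**t
  end
end
```

with a zero kryder rate and zero interest the endowment is just cost times years; raise the kryder rate and it shrinks sharply. since neither rate is knowable in advance, neither is the endowment, so a fee set today cannot reflect it.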
the mechanism for rewarding executors is: farmers split transaction fee rewards evenly with all executors, based on the expected number of ers for each block. for example, if executors are elected, the farmer will take half of all transaction fees, while each executor will take / . a farmer is incentivized to include all ers which finalize execution for its parent block because doing so will allow it to claim more of its share of the rewards for its own block. for example, if the farmer only includes out of expected ers, it will instead receive / (not / ) of total rewards, while each of the executors will still receive / . any remaining shares will then be escrowed within a treasury account under the control of the community of token holders, with the aim of incentivizing continued protocol development. although the role of executor demands significant resources, both in hardware and in staked coins, these rewards seem inadequate. every executor has to execute the state transitions in every block, but for each block only a small fraction of the executors receive, in the example above, only / of the fees. note also footnote : we use this rate for explanatory purposes, while noting that in order to minimize the plutocratic nature of pos, executor shares should be smaller in practice. so wagstaff expects that an executor will receive only a small fraction of a small fraction of / of the transaction fees. even supposing the stake distribution among executors was small and even, unlikely in practice, for the random election mechanism to be effective there need to be many times more executors than are elected. for example, if there are executors, and executors share / of the fees, each can expect around . % of the fees. bitcoin currently runs with fees less than % of the block rewards. if subspace had the same split, in my example executors as a class would expect around . % of the block rewards, with farmers as a class receiving % of the block rewards plus . % of the fees.
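the split described in the quote can be sketched directly. the farmer keeps half the fees scaled by the fraction of expected ers it includes, each included executor gets an equal slice of the other half, and the shares for omitted ers go to escrow. the concrete numbers in the usage note are hypothetical stand-ins for the elided figures:

```ruby
# hedged sketch of the subspace executor fee split described above.
# returns [farmer's share, each included executor's share, escrowed remainder]
# for a block's total fees, given the expected and actually-included er counts.
def split_fees(total_fees, expected_ers, included_ers)
  executor_share  = total_fees / (2.0 * expected_ers)          # per included er
  farmer_share    = (total_fees / 2.0) * included_ers / expected_ers
  executors_total = executor_share * included_ers
  escrow = total_fees - farmer_share - executors_total
  [farmer_share, executor_share, escrow]
end
```

with, say, 8 expected ers and only 4 included (placeholder numbers), the farmer's share halves from 1/2 to 1/4, each included executor still gets 1/16, and half the fees land in escrow, so the farmer maximizes income by including every er.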
there is another problem — the notorious volatility of transaction fees set against the constant cost of running an executor. much of the time there would be relatively low demand for transactions, so a block would contain relatively few transactions, each offering the mandatory minimum fee. unless the fees, and especially the mandatory minimum fee, are large relative to the block reward, it isn't clear why executors would participate. but fees that large would risk the instability of fee-only blockchains. there are two other roles in subspace, verifiers and full nodes. as regards incentivizing verifiers: we rely on the fact that all executors may act as verifiers at negligible additional cost, as they are already required to maintain the valid state transitions in order to propose new ers. if we further require them to reveal fraud in order to protect their own stake and claim their share of the rewards, in the event that they themselves are elected, then we can provide a more natural solution to the verifier's dilemma. as regards incentivizing full nodes, wagstaff isn't clear: in addition to executors, any full node may also monitor the network and generate fraud proofs, by virtue of the fact that no deposit is required to act as verifier. as i read the paper, full nodes have similar hardware requirements to executors but no income stream to support them unless they are executors too. overall, subspace is interesting, but the advantage from a farmer's point of view of subspace over chia is that their whole storage resource is devoted to farming. everything else is not really significant, and all this would be dominated by a fairly small difference in "price". add to that the fact that chia has already occupied the market niche for new potas systems, and has high visibility via bram cohen and a16z, and the prospects for subspace don't look good.
if subspace succeeds, economies of scale will have two effects: large pools will dominate small pools because they can charge smaller fees; large farmers will dominate small farmers because their rewards are linear in the resource they commit, but their costs are sub-linear, so their profit is super-linear. this will likely result in the most profitable, hassle-free way for smaller consumers to participate being investing in a pool rather than actually farming.

conclusion

the overall theme is that permissionless blockchains have to make participating in consensus expensive in some way to defend against sybils. thus if you are expending an expensive resource, economies of scale are an unavoidable part of sybil defense. if you want to be "decentralized" to avoid % attacks from insiders you have to have some really powerful mechanism pushing back against economies of scale. i see three possibilities. the blockchain protocol designers:
- don't understand why successful cryptocurrencies are centralized, so don't understand the need to push back on economies of scale.
- do understand the need to push back on economies of scale but can't figure out how to do it. it is true that figuring this out is incredibly difficult, but their response should be to say "if the blockchain is going to end up centralized, why bother wasting resources trying to be permissionless?" not to implement something they claim is decentralized when they know it won't be.
- don't care about decentralization, they just want to get rich quick, and are betting it will centralize around them.
in most cases, my money is on # . at least both chia and subspace have made efforts to defuse the worst aspects of centralization.

posted by david. at : am labels: bitcoin, p2p

comments:

david. said... chris dupres, who i should have acknowledged provided me with a key hint about chia, posted a very detailed post on post-proof-of-work cryptocurrencies, including chia, linking here. thanks, chris! july , at : pm

david.
said... brett scott's i, token is an excellent if lengthy explanation of why bitcoin and other currencies lack "moneyness". july , at : pm

blissex said... «being swamped by a multitude of sybil peers [...] a rational majority of peers from conspiring to obtain inappropriate benefits. [...] a rational minority of peers from conspiring to obtain inappropriate benefits.» this analysis is very interesting and seems well done to me, but it is orthogonal to what seems to be the economic appeal of bitcoin, which is not that it is fair, decentralized, trustworthy, but the perception that:
* it is pseudonymous and does not have "know my customer" rules.
* it is worldwide so it is not subject to capital export/import restrictions.
* there is a limited number of bitcoins, so it is "guaranteed" that as demand grows the worth of each bitcoin is going to grow a lot.
bitcoin in other words is perceived as a limited-edition, pseudonymous collectible; it is considered as the convenient alternative to a pouch of diamonds for "informal" payments. considering that international "informal" payments often involve a - % "cleanup" fee, users of bitcoin for "informal" payments don't worry too much about issues like decentralization or perfect fairness and perfect trustworthiness; as long as the bitcoin operators steal less than - % it is just a cost of doing business. july , at : am

david. said... this post wasn't about bitcoin, or the reasons people use it. it was about the design of consensus protocols for permissionless peer-to-peer systems. and, to your point, the use of btc for "payments" is negligible compared to its use for speculation. the use-case for btc hodl-ers is "to the moon", and the use-case for btc traders is its volatility and ability to be pumped-and-dumped. july , at : am

blissex said... «this post wasn't about bitcoin, or the reasons people use it.
it was about the design of consensus protocols for permissionless peer-to-peer systems.» indeed, as i wrote your post is “orthogonal to what seems to be the economic appeal of bitcoin”. but i think that they are relevant: the question is whether “consensus protocols for permissionless peer-to-peer systems” matter if the economic incentives are not aligned with them. your post also mentioned repeatedly “the multiple failures of satoshi nakamoto's goals for bitcoin”, which goals were economic, and accordingly you also mention as relevant “the economic forces” for those consensus protocols. i am sorry that i was not clear as to why i was writing about economic forces orthogonally to consensus protocol, given that your post also seemed to consider them relevant. «the use of btc for "payments" is negligible compared to its use for speculation. the use-case for btc hodl-ers is "to the moon", and the use-case for btc traders is its volatility and ability to be pumped-and-dumped» indeed currently, but again you referred several times to “satoshi nakamoto's goals for bitcoin”, and those were about a payment system. also the current minority of users of bitcoin and other "coins" who use it as a "don't know your customer" alternative to westernunion or "hawala" remittances are not irrelevant, because that use of "coins" could be long term. when doing "don't know your customer" transfers, they could be denominated in etruscan guineas or mayan jiaozis while in transit, and "coins" are just then a unit of account, and it is for that main purpose that a “consensus protocol” was designed by satoshi nakamoto (even if it turned out not to be that suitable). maybe you think that these considerations are very secondary, but i wrote about them because i think in time they will matter. 
july , at : am
logging uri query params with lograge – bibliographic wilderness

jrochkind · general · august ,

the lograge gem for taming rails logs will by default log the path component of the uri, but leave out the query string/query params. for instance, perhaps you have a url to your app /search?q=libraries. lograge will log something like:

method=get path=/search format=html…

the q=libraries part is completely left out of the log. i kinda want that part, it's important. the lograge readme provides instructions for "logging request parameters" by way of the params hash. i'm going to modify them slightly to use the more recent custom_payload config instead of custom_options. (i'm not certain why there are both, but i think mostly for legacy reasons, and the newer custom_payload is what you should reach for.) if we just put params in there, the log line picks up a bunch of ugly internal entries (controller, action and friends) that need filtering out.
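a minimal sketch of that filtering, along the lines of the lograge readme's example but using custom_payload; the exact keys worth stripping depend on your app's routing:

```ruby
# hedged sketch: log the request params minus rails' internal routing
# keys. the key list here is an assumption, not an exhaustive one.
config.lograge.custom_payload do |controller|
  {
    params: controller.request.params.except("controller", "action", "format")
  }
end
```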
the params hash isn’t exactly the same as the query string, it can include things not in the url query string (like controller and action, that we have to strip above, among others), and it can in some cases omit things that are in the query string. it just depends on your routing and other configuration and logic. the params hash itself is what default rails logs… but what if we just log the actual url query string instead? benefits: it’s easier to search the logs for actually an exact specific known url (which can get more complicated like /search?q=foo&range% byear_facet_isim% d% bbegin% d= &source=foo or something). which is something i sometimes want to do, say i got a url reported from an error tracking service and now i want to find that exact line in the log. i actually like having the exact actual url (well, starting from path) in the logs. it’s a lot simpler, we don’t need to filter out controller/action/format/id etc. it’s actually a bit more concise? and part of what i’m dealing with in general using lograge is trying to reduce my bytes of logfile for papertrail! drawbacks? if you had some kind of structured log search (i don’t at present, but i guess could with papertrail features by switching to json format?), it might be easier to do something like “find a /search with q=foo and source=ef without worrying about other params) to the extent that params hash can include things not in the actual url, is that important to log like that? ….? curious what other people think… am i crazy for wanting the actual url in there, not the params hash? at any rate, it’s pretty easy to do. note we use filtered_path rather than fullpath to again take account of rails parameter filtering, and thanks again /u/ezekg: config.lograge.custom_payload do |controller| { path: controller.request.filtered_path } end this is actually overwriting the default path to be one that has the query string too: method=get path=/search?q=libraries format=html ... 
you could of course add a different key, fullpath, instead, if you wanted to keep path as it is, perhaps for easier collation in some kind of log analyzing system that wants to group things by the same path invariant of query string.

i'm gonna try this out!

meanwhile, on lograge…

as long as we're talking about lograge…. based on commit history, the history of issues and pull requests, the fact that ci isn't currently running (travis.org grr) and doesn't even try to test on rails . + (although lograge seems to work fine there)… one might worry that lograge is currently un- or under-maintained. there's been no comment on a gh issue filed in may asking about project status. it still seems to be one of the more popular solutions for trying to tame rails' kind-of-out-of-control logs. it's mentioned for instance in docs from papertrail and honeybadger, and many many other blog posts. what will its future be?

looking around for other possibilities, i found semantic_logger (rails_semantic_logger). it's got similar features. it seems to be much more actively maintained. it's got a respectable number of github stars, although not nearly as many as lograge, and it's not featured in blogs and third-party platform docs nearly as much. it's also a bit more sophisticated and featureful, for better or worse. mainly i'm thinking of how it tries to improve app performance by moving logging to a background thread. this is neat… and also can lead to a whole new class of bug, mysterious warning, or configuration burden. for now i'm sticking to the more popular lograge, but i wish it had ci up that was testing with rails . , at least!

incidentally, trying to get rails to log more compactly like both lograge and rails_semantic_logger do… is somewhat more complicated than you might expect, as demonstrated by the code in both projects that does it! semantic_logger especially is hundreds of lines of somewhat baroque code split across several files. a refactor of logging around rails (i think?)
to use activesupport::logsubscriber made it possible to customize rails logging like this (although i think both lograge and rails_semantic_logger still do some monkey-patching too!), but in the end didn't make it all that easy or obvious or future-proof. this may be why there aren't too many other alternatives for the initial primary use case of both lograge and rails_semantic_logger — turn a rails action into one log line, with a structured format.

tagged ruby. published by jrochkind, august ,

bibliographic wilderness is a blog by jonathan rochkind about digital library services, ruby, and web development.
dshr's blog: falling research productivity

i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation.

tuesday, april ,

falling research productivity

are ideas getting harder to find?
by nicholas bloom et al looks at the history of investment in r&d and its effect on the product across several industries. their main example is moore's law, and they show that [page ]:

research effort has risen by a factor of since . this increase occurs while the growth rate of chip density is more or less stable: the constant exponential growth implied by moore's law has been achieved only by a massive increase in the amount of resources devoted to pushing the frontier forward. assuming a constant growth rate for moore's law, the implication is that research productivity has fallen by this same factor of , an average rate of . percent per year. if the null hypothesis of constant research productivity were correct, the growth rate underlying moore's law should have increased by a factor of as well. instead, it was remarkably stable. put differently, because of declining research productivity, it is around times harder today to generate the exponential growth behind moore's law than it was in .

below the fold, some commentary on this and other relevant research.

actually, of course, in recent years moore's law has slowed as the technology gets closer and closer to the physical limits. this slowing increases the rate at which research productivity falls. the implications of their finding are disturbing for the economy as a whole [page ]:

taking the aggregate economy number as a representative example, research productivity declines at an average rate of . percent per year, meaning that it takes around years for research productivity to fall by half. or put another way, the economy has to double its research efforts every years just to maintain the same overall rate of economic growth.

the annual rate of change in us r&d expenditure is shown in the graph above (source). it was below . % for of the years before .
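the arithmetic behind the paper's claim is simple to sketch: if research effort grows by some factor over a period while the growth rate it buys stays constant, measured research productivity (growth per unit of effort) falls by that same factor. the numbers below are made-up placeholders, not the paper's (its figures are elided in this copy):

```ruby
# illustrative arithmetic for the bloom et al. argument: effort up by
# `factor` over `years` with flat output growth means productivity ends
# at 1/factor of its starting level; convert that to an average annual
# rate of decline. inputs are invented for illustration.
def avg_annual_productivity_decline(factor, years)
  1.0 - (1.0 / factor)**(1.0 / years)
end

# a hypothetical 18x rise in effort over 45 years:
rate = avg_annual_productivity_decline(18.0, 45.0)
puts format("average decline: %.1f%% per year", rate * 100)
```

the same function also shows why the "doubling research effort just to stand still" framing follows: halving productivity over a period requires doubling effort over that period to keep the output growth rate constant.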
(source) it is therefore likely that inadequate r&d expenditure was a significant contributor to the approximate halving of the rate of growth of us gdp over the same period. the rate at which research productivity has fallen in semiconductors is significantly higher than in other areas of the economy ( . % vs. . %) [page ]:

research productivity for semiconductors falls so rapidly, not because that sector has the sharpest diminishing returns — the opposite is true. it is instead because research in that sector is growing more rapidly than in any other part of the economy, pushing research productivity down. a plausible explanation for the rapid research growth in this sector is the "general purpose" nature of information technology. demand for better computer chips is growing so fast that it is worth suffering the declines in research productivity there in order to achieve the gains associated with moore's law.

or even the smaller gains associated with growth significantly slower than moore's law. industry projections for the kryder rate of both ssds and hdds depend heavily on rapid progress in density, i.e. on the products of r&d investment. flash is a very competitive market, and although hard disk is down to . manufacturers, which might suggest improving margins, hard disks are under sustained margin pressure from flash. thus falling research productivity has a particular impact on the future of storage, because neither the ssd nor the hdd markets can sustain the large increases in r&d spending needed to increase, or even sustain, their kryder rates. bloom et al bolsters the case for low and falling kryder rates.

this paragraph [page ] is perhaps the most interesting of the whole paper:

the only reason models with declining research productivity can sustain exponential growth in living standards is because of the key insight from [endogenous growth theory]: ideas are nonrival.
and if research productivity were constant, sustained growth would actually not require that ideas be nonrival; akcigit, celik and greenwood show that fully rivalrous ideas in a model with perfect competition can generate sustained exponential growth in this case. our paper therefore clarifies that the fundamental contribution of endogenous growth theory is not that research productivity is constant or that subsidies to research can necessarily raise growth. rather it is that ideas are different from all other goods in that they do not get depleted when used by more and more people. exponential growth in research leads to exponential growth in [research expenditure]. and because of nonrivalry, this leads to exponential growth in per capita income.

it is a strong argument for open source and open science.

i believe that the problem of declining research productivity is related to "cost disease", as explained by scott alexander in considerations on cost disease, which starts:

tyler cowen writes about cost disease. ... cowen seems to use it indiscriminately to refer to increasing costs in general – which i guess is fine, goodness knows we need a word for that.

alexander shows that inflation-adjusted costs have increased rapidly with no corresponding increase in output in several us areas, including:

(source) k through education: there was some argument about the style of this graph, but as per politifact the basic claim is true. per student spending has increased about . x in the past forty years even after adjusting for inflation. at the same time, test scores have stayed relatively stagnant. you can see the full numbers here, but in short, high school students' reading scores went from in to today – a difference of . % nb: not inflation-adjusted

university education: inflation-adjusted cost of a university education was something like $ /year in . now it's closer to $ , /year.
no, it's not because of decreased government funding, and there are similar trajectories for public and private schools. i don't know if there's an equivalent of "test scores" measuring how well colleges perform, so just use your best judgment. do you think that modern colleges provide $ , /year greater value than colleges did in your parents' day? would you rather graduate from a modern college, or graduate from a college more like the one your parents went to, plus get a check for $ , ?

(source) per-capita health expenditure: the cost of health care has about quintupled since . it's actually been rising since earlier than that, but i can't find a good graph; it looks like it would have been about $ in today's dollars in , for an increase of about % in those fifty years. ... this study attempts to directly estimate a %gdp health spending to life expectancy conversion, and says that an increase of % gdp corresponds to an increase of . years life expectancy. that would suggest a slightly different number of . years life expectancy gained by healthcare spending since .

alexander writes:

i worry that people don't appreciate how weird this is. i didn't appreciate it for a long time. i guess i just figured that grandpa used to talk about how back in his day movie tickets only cost a nickel; that was just the way of the world. but all of the numbers above are inflation-adjusted. these things have dectupled in cost even after you adjust for movies costing a nickel in grandpa's day. they have really, genuinely dectupled in cost, no economic trickery involved.

and this is especially strange because we expect that improving technology and globalization ought to cut costs. the fields alexander uses as examples have a lot of human input, but they should have reaped significant cost benefits from technology and globalization.
as he writes about health care:

patients can now schedule their appointments online; doctors can send prescriptions through the fax, pharmacies can keep track of medication histories on centralized computer systems that interface with the cloud, nurses get automatic reminders when they're giving two drugs with a potential interaction, insurance companies accept payment through credit cards – and all of this costs ten times as much as it did in the days of punch cards and secretaries who did calculations by hand.

note that r&d is also a human-intensive business that should have reaped significant cost savings from technology and globalization. but like these other fields, it has increased massively in price. in fact, -fold instead of -fold. cowen and alexander are on to a really significant problem for the economy as a whole.

posted by david. at : am. labels: intellectual property, storage costs

comments:

chris rusbridge said...

nice post again, david. have you come across "do the math"? it is (was?) a blog (https://dothemath.ucsd.edu) by a ucsd physicist (tom murphy), about sustainable energy, or at least how to reduce the exponential growth in energy consumption. he hasn't posted for a long time, but something in what you wrote reminded me of his starting premise, which is that in a finite world exponential growth in resource consumption is impossible. ok, it's a very different case, but there seem to be some parallels. the moore's law issue should remind us that many of these "exponentials" are really the bottom part of an s curve, limited as we bump up against natural barriers. so as you get towards the top part of the s curve it shouldn't be surprising if research productivity drops off, even -fold. but surely the effect will be field-specific, related to the actual hard limits, so it's not appropriate to transfer the loss of research productivity from one (measurable) field to another?

as to health care costs, it does bug me how much costs have risen.
you will no doubt be aware of the many crises afflicting our health care in the uk, and it appears us health care is not immune (and an even higher proportion of gdp). annoyingly we see a lot of articles about costs through over-use of a&e, bed blocking because of lack of social care to move older people out of hospital, and other articles hammering the government about lack of funding (to which the government irrelevantly replies that they have increased funding). but i don't see analyses of where the cost pressures are coming from. there have been many rounds of hospital closures and mergers, and other "efficiencies", so you'd think things would get better, but they seem to get worse.

i would guess there are at least other factors. one is that the endless rounds of "reforms" have greatly increased the "managerial" cost of the nhs, so that a decreasing proportion of staffing effort is directly involved in healthcare. another (i guess) is price gouging by pharmaceutical companies (perhaps i should say profit maximisation rather than price gouging!). and the third would be the increasing use of very high cost health technologies, like proton beam scanners etc. these simply weren't available in the past, but are now (presumably with concomitant improvements in health outcomes), so you'd expect above-inflation cost increases to pay for them.

there are efficiencies as you describe (electronic booking etc), but the nhs is slow to turn, and much stuff still happens the other way. my daughter works in healthcare on two sites; they don't have an electronic health record system, and frequently the paper records end up on the wrong site. sometimes she drives to the other site to get them, sometimes they are sent in a taxi. seems daft, but if you are as risk-averse as the nhs, perhaps not so surprising!

lost the text of your piece, an annoying blogger feature, but i think those are the points i wanted to make!

april , at : am

david. said...
john horgan posts is science hitting a wall? based in part on are ideas getting harder to find? but is focused on the r rather than the d:

"the economists are concerned primarily with what i would call applied science, the kind that fuels economic growth and increases wealth, health and living standards. advances in medicine, transportation, agriculture, communication, manufacturing and so on. but their findings resonate with my claim in the end of science that 'pure' science—the effort simply to understand rather than manipulate nature—is bumping into limits. and in fact i was invited to the session because an organizer had read my gloomy tract, which was recently republished."

april , at : am

david. said...

john horgan's is science hitting a wall?, part is as good as part . he writes:

"my last post, 'is science hitting a wall?,' provoked lots of reactions. some readers sent me other writings about diminishing returns from research. one is 'diagnosing the decline in pharmaceutical r&d efficiency,' published in nature reviews drug discovery in . the paper is so clever, loaded with ideas and relevant to science as a whole that i'm summarizing its main points here."

the paper points out that:

"the number of new drugs approved per billion u.s. dollars spent on r&d has halved roughly every years since ."

april , at : am

david. said...

kelvin stott's -part series pharma's broken business model: part : an industry on the brink of terminal decline and part : scraping the barrel in drug discovery uses a simple economic model to show that the internal rate of return (irr) of pharma companies is already less than their cost of capital, and will become negative in . stott shows that this is a consequence of the law of diminishing returns; because the most promising research avenues (i.e. the ones promising the greatest return) are pursued first, the returns on a research dollar decrease with time.

may , at : pm

david. said...
the economist has an interesting take on cost disease in the rising cost of education and health care is less troubling than believed:

"the real culprit, the authors write, is a steady increase in the cost of labour—of teachers and doctors. that in turn reflects the relentless logic of baumol's cost disease, named after the late william baumol, who first described the phenomenon. productivity grows at different rates in different sectors. it takes far fewer people to make a car than it used to—where thousands of workers once filled plants, highly paid engineers now oversee factories full of robots—but roughly the same number of teachers to instruct a schoolful of children. economists reckon that workers' wages should vary with their productivity. but real pay has grown in high- and low-productivity industries alike. that, baumol pointed out, is because teachers and engineers compete in the same labour market."

the article concludes:

"these possibilities reveal the real threat from baumol's disease: not that work will flow toward less-productive industries, which is inevitable, but that gains from rising productivity are unevenly shared. when firms in highly productive industries crave highly credentialed workers, it is the pay of similar workers elsewhere in the economy—of doctors, say—that rises in response. that worsens inequality, as low-income workers must still pay higher prices for essential services like health care. even so, the productivity growth that drives cost disease could make everyone better off. but governments often do too little to tax the winners and compensate the losers."

july , at : pm
dshr's blog: economics of evil

thursday, june ,

economics of evil

back in march google announced that this weekend is the end of google reader, the service many bloggers and journalists used to use to read online content via rss. this wasn't the first service google killed, but because the people who used the service write for the web, the announcement sparked a lively discussion. because many people believe that commercial content platforms and storage services will preserve digital content for the long term, the discussion below the fold should be of interest here.

paul krugman commented:

google's decision to shut down google reader has ... provoked a lot of discussion about the future of web-based services. the most interesting discussion, i think, comes from ryan avent, who argues that google has been providing crucial public infrastructure — but doesn't seem to have an interest in maintaining that infrastructure.

the ryan avent post at the economist's free exchange blog that krugman linked to pointed out that dependence on google's services has a big impact on the real world:

once we all become comfortable with [google's services] we quickly begin optimising the physical and digital resources around us. encyclopaedias?
antiques. book shelves and file cabinets? who needs them? and once we all become comfortable with that, we begin rearranging our mental architecture. we stop memorising key data points and start learning how to ask the right questions. we begin to think differently. about lots of things. we stop keeping a mental model of the physical geography of the world around us, because why bother? we can call up an incredibly detailed and accurate map of the world, complete with satellite and street-level images, whenever we want. ... the bottom line is that the more we all participate in this world, the more we come to depend on it. the more it becomes the world.

avent described google's motivation for providing the infrastructure:

google has asked us to build our lives around it: to use its e-mail system ..., its search engines, its maps, its calendars, its cloud-based apps and storage services, its video- and photo-hosting services, ... it hasn't done this because we're its customers, it's worth remembering. we aren't; we're the products google sells to its customers, the advertisers. google wants us to use its services in ways that provide it with interesting and valuable information, and eyeballs.

so, google may have good reasons for killing some services:

it's a big company, but even big companies have finite resources, and devoting those precious resources to something that isn't making money and isn't judged to have much in the way of development potential is not an attractive option. ... someone else will come along to provide the service and, if they give it their full attention, to improve it.

and indeed alternate services such as feedly are taking over. the only push-back for google is weak:

but that makes it increasingly difficult for google to have success with new services. why commit to using and coming to rely on something new if it might be yanked away at some future date?
google only partially mitigates this through their data liberation front, whose role is to ensure that users of their services can extract their data in a re-usable form. but there remains a problem for society:

that's a lot of power to put in the hands of a company that now seems interested, mostly, in identifying core mass-market services it can use to maximise its return on investment.

what this implies is that google is becoming an essential utility:

the history of modern urbanisation is littered with examples of privately provided goods and services that became the domain of the government once everyone realised that this new life and new us couldn't work without them.

paul krugman described the economics underlying this:

even in a plain-vanilla market, a monopolist with high fixed costs and limited ability to price-discriminate may not be able to make a profit supplying a good even when the potential consumer gains from that good exceed the costs of production. basically, if the monopolist tries to charge a price corresponding to the value intense users place on the good, it won't attract enough low-intensity users to cover its fixed costs; if it charges a low price to bring in the low-intensity user, it fails to capture enough of the surplus of high-intensity users, and again can't cover its fixed costs. what avent adds is network externalities, in which the value of the good to each individual user depends on how many others are using it. ... they mean that if the monopolist still doesn't find it worthwhile to provide the good, the consumer losses are substantially larger than in a conventional monopoly-pricing analysis. so what's the answer? as avent says, historical examples with these characteristics — like urban transport networks — have been resolved through public provision. it seems hard at this point to envision search and related functions as public utilities, but that's arguably where the logic will eventually lead us.
it is indeed hard to envision these services as public utilities; they are world-wide rather than national or local, and the communication infrastructure on which they rely, despite being utility-like, is provided by for-profit, lightly regulated companies.

what does this mean for digital preservation? "free" services, such as google drive, in which the user is the product rather than the customer, should never be depended upon. at any moment they may suffer the fate of google reader and many other google services; because the user is not the customer there is essentially no recourse.

over time any business model for extracting value from content will become less and less effective. there are several reasons. competitors will arise and decrease the available margins. as content accumulates, the average age of an item will increase; it is easier to extract value from younger content. thus the service is continually in a race between this decreasing effectiveness and the cost reduction from improving technology, such as kryder's law for storage. once the cost reduction fails to outpace the value reduction, the service is probably doomed. so the slowing pace of storage cost reduction is likely to cause more casualties, especially among those services that accumulate content. because archived content is rarely accessed it generates little valuable information and lacks network effects, making it a poor business for the provider. this casts another shadow over the future of free or all-you-can-eat storage services.

although the user of services such as google cloud storage and s is a paying customer, digital preservation will be a very small part of the service's customer base. thus the incentives for the service provider to continue or terminate those aspects of the service that make it suitable for digital preservation will not be greatly different from those of a free service.
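the "race" between declining value extraction and kryder-rate cost decline can be made concrete with a toy model. everything here (the decay rates, starting values, and the function itself) is invented for illustration, not drawn from any real service:

```ruby
# purely illustrative toy model of the race described above: per-item
# value extracted from ageing content decays each year, while per-item
# storage cost falls at the kryder rate. the service is viable while
# value covers cost. all numbers are made up.
def years_until_doomed(value:, cost:, value_decay:, kryder_rate:, horizon: 100)
  horizon.times do |year|
    return year if value < cost
    value *= (1.0 - value_decay)  # extracting value gets harder with age
    cost  *= (1.0 - kryder_rate)  # storage gets cheaper
  end
  nil # still viable within the horizon
end

# a fast kryder rate outruns the value decay indefinitely...
fast = years_until_doomed(value: 1.0, cost: 0.5, value_decay: 0.10, kryder_rate: 0.30)
# ...while a slowed kryder rate dooms the service after some years
slow = years_until_doomed(value: 1.0, cost: 0.5, value_decay: 0.10, kryder_rate: 0.05)
puts "fast kryder: #{fast.inspect}, slow kryder: #{slow} years"
```

the point of the sketch is only the crossover: whenever the annual value decay exceeds the annual cost decline, the value/cost ratio shrinks geometrically and doom is just a matter of time.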
because digital preservation is a relatively small market, services tailored to its needs will lack the economies of scale of the kind of infrastructure services avent and krugman are discussing. thus they will be more expensive. but the temptation will always be to implement them as a thin layer of customization over generic infrastructure services, as we see with duracloud and preservica, to capture as much of the economies of scale as possible. this leaves them vulnerable to the whims of the infrastructure provider. see also my comment on on-demand vs. base-load economics. posted by david. at : am labels: cloud economics, digital preservation, storage costs comments: david. said... marco arment convincingly tells the depressing story of why google reader had to die. it was distracting from the google+ vs. facebook battle. july , at : am david. said... latitude is the latest casualty. again because it was not part of google+. july , at : pm david. said... scott gilbertson at the register writes about recovering from the death of reader, riffing on the "if you're not paying for something, you're not the customer; you're the product being sold." quote. but he points out that "just because you are paying companies like google, apple or microsoft you might feel they are, somehow, beholden to you." dream on. november , at : am david. said... yet another google service bites the dust, this time helpout. february , at : pm david. said... hugh pickens at /. points me to le monde's google memorial, le petit musée des projets google abandonnés. who knew there were so many? march , at : pm david. said... and another one bites the dust. march , at : pm david. said... google plus isn't quite dead, but it is on life support. august , at : pm david. said... google talk is headed for the google memorial. march , at : pm david. said... "the goo.gl link is very common on the web and was first launched by google in .
however, the company announced today that it’s winding down the url shortener beginning next month, with a complete deprecation by next year." writes abner li at 9to5google. march , at : am david. said... google hangouts is headed for the google memorial. november , at : pm david. said... the allo messaging service is the next to enter the google memorial. december , at : pm david. said... g+ is on its way to the google memorial, and lauren weinstein is not happy with the reasons: "google knows that as time goes on their traditional advertising revenue model will become decreasingly effective. this is obviously one reason why they’ve been pivoting toward paid service models aimed at businesses and other organizations. that doesn’t just include g suite, but great products like their ai offerings, google cloud, and more. but no matter how technically advanced those products, there’s a fundamental question that any potential paying user of them must ask themselves. can i depend on these services still being available a year from now? or in five years? how do i know that google won’t treat business users the same ways as they’ve treated their consumer users?" nor is he happy with the process: "we already know about google’s incredible user trust failure in announcing dates for this process. first it was august. then suddenly it was april. the g+ apis (which vast numbers of web sites — including mine — made the mistake of deeply embedding into their sites), we’re told will start “intermittently failing” (whatever that actually means) later this month. it gets much worse though. while google has tools for users to download their own g+ postings for preservation, they have as far as i know provided nothing to help loyal g+ users maintain their social contacts — the array of other g+ followers and users with whom many of us have built up friendships on g+ over the years." january , at : am david. said...
close behind g+ on the road to the google memorial is hangouts and ron amadeo is skeptical: "google previously announced that its most popular messaging app, google hangouts, would be shutting down. in a post today on the gsuite updates blog, google detailed what the hangouts shutdown will look like, and the company shared some of its plan to transition hangouts users to "hangouts chat," a separate enterprise slack clone." note that this is another instance of google moving up a previously announced shutdown date. and, like weinstein, amadeo sees impacts on the google brand: "google's argument seems to be that the transition plan makes everything ok. but clumsy shutdowns like this are damaging to the google brand, and they undermine confidence in all of google's other products and services." january , at : am david. said... ron amadeo takes google to the woodshed in google’s constant product shutdowns are damaging its brand: "it's only april, and has already been an absolutely brutal year for google's product portfolio. the chromecast audio was discontinued january . youtube annotations were removed and deleted january . google fiber packed up and left a fiber city on february . android things dropped iot support on february . google's laptop and tablet division was reportedly slashed on march . google allo shut down on march . the "spotlight stories" vr studio closed its doors on march . the goo.gl url shortener was cut off from new users on march . gmail's ifttt support stopped working march . and today, april , we're having a google funeral double-header: both google+ (for consumers) and google inbox are being laid to rest. later this year, google hangouts "classic" will start to wind down, and somehow also scheduled for is google music's "migration" to youtube music, with the google service being put on death row sometime afterward. we are days into the year, and so far, google is racking up an unprecedented body count. 
if we just take the official shutdown dates that have already occurred in , a google-branded product, feature, or service has died, on average, about every nine days." april , at : am david. said... of course, other companies shutter services too, just somewhat less irresponsibly. cory doctorow's microsoft announces it will shut down ebook program and confiscate its customers' libraries reports: "microsoft has a drm-locked ebook store that isn't making enough money, so they're shutting it down and taking away every book that every one of its customers acquired effective july . customers will receive refunds." april , at : pm david. said... and the next entrant for le petit musée des projets google abandonnés is youtube gaming. ron amadeo reports: "youtube gaming is more or less shutting down this week. google launched the standalone youtube gaming vertical almost four years ago as a response to amazon's purchase of twitch, and on may , google will shut down the standalone youtube gaming app and the standalone gaming.youtube.com website." may , at : am david. said... take a number! ron amadeo reports that this week’s dead google product is google trips, may it rest in peace: "google's wild ride of service shutdowns never stops. next up on the chopping block is google trips, a trip organization app that is popular with frequent travelers. recently google started notifying users of the pending shutdown directly in the trips app; a splash screen now pops up before the app starts, saying "we're saying goodbye to google trips aug ," along with a link to a now all-too-familiar google shutdown support document." june , at : am david. said... another product joins the queue of google products waiting to get into le petit musée des projets google abandonnés. ron amadeo reports: "another day, another dead or dying google product. this time, google has decided to shut down "hangouts on air," a fairly popular service for broadcasting a group video call live over the internet.
notices saying the service is "going away later this year" have started to pop up for users when they start a hangout on air. hangouts on air, by the way, is a totally different and unrelated service from "google hangouts," which is also shutting down sometime in the future." june , at : pm david. said... lina m. khan's the separation of platforms and commerce discusses (page ) one of last year's acquisitions by le petit musée: "the justice department’s remedies in the google–ita merger illustrate one instance of imposing an information firewall in a digital market. ita developed and licensed a software product known as “qpx,” a “mini-search engine” that airlines and online travel agents used to provide users with customized flight search functionality. because the merger would put google in the position of supplying qpx to its rival travel-search websites, the justice department required as a condition of the merger that google establish internal firewalls to avoid misappropriation of rivals’ information. although one commentator highlighted the risks and inherent difficulties associated with designing a comprehensive behavioral remedy, the court approved the order. whether the information firewall was successful in preventing google from accessing rivals’ business information is not publicly known. a year after the remedy expired, google shut down its qpx api." june , at : pm david. said... slashdot reader freelunch reports on le petit musée's latest acquisition. august , at : pm david. said... greg kumparak reports on one of next year's acquisitions by le petit musée in google will shut down google hire in . august , at : am david. said... jason perlow's google is a bald-faced iot liar and its nest pants are on fire is all about the effects of google's sunset of the "works with nest" program on anyone unlucky enough to have non-google devices in their home iot ecosystem. september , at : pm david. said... 
le petit musée just gained another exhibit, the corpse of google daydream, according to emil protalinski's obituary in google discontinues daydream vr. october , at : am david. said... le petit musée's reputation is getting noticed. in stadia launch dev: game makers are worried “google is just going to cancel it”, kyle orland reports that: "google has a long and well-documented history of launching new services only to shut them down a few months or years later. and with the launch of stadia imminent, one launch game developer has acknowledged the prevalence of concerns about that history among her fellow developers while also downplaying their seriousness in light of stadia's potential." november , at : pm david. said... next year le petit musée will accession another major exhibit. according to stephen hall, google cloud print is dead as of december , . november , at : pm david. said... it only took three days for le petit musée's first accession of the new year. shelby brown's google news reportedly ends digital magazines, refunds active subscriptions. it was doomed because google couldn't decide what to call it: "google launched play magazines in , and later renamed it play newsstand to focus on newspapers. the services later merged to google news." january , at : pm david. said... and the next exhibit in le petit musée is [drumroll, please] google chrome apps! january , at : pm david. said... at this rate le petit musée's acquisition budget for the year will be exhausted long before december! only days after the last one comes the acquisition of app maker. january , at : pm david. said... they're falling like ninepins! ben schoon reports that google is killing its ‘one today’ donation app w/ only one week’s notice. january , at : pm david. said... thomas claiburn reports that another body for the google graveyard: chrome web store payments. bad news if you wanted to bank some income from these apps. september , at : pm david. said...
ron amadeo reports on the latest acquisition by le petit musée in rip google play music, – . october , at : pm david. said... and another one bites the dust. matthew hughes provides the details of the latest from le petit musée in stony-faced google drags android things behind the cowshed. two shots ring out. december , at : am david. said... the hits keep coming! cohen coberly's google's 'cloud print' service is shutting down soon celebrates le petit musée's latest acquisition. exhibits in years. december , at : am david. said... le petit musée is going to need more space. in google’s dream of flying internet balloons is dead—loon is shutting down ron amadeo writes: "google has decided that a network of flying internet balloons is indeed not a feasible idea. loon announced it is shutting down, citing the lack of a "long-term, sustainable business."" january , at : am david. said... and another one bites the dust. stephen totilo reports that google stadia shuts down internal studios, changing business focus. february , at : pm david. said... another acquisition for le petit musée! ron amadeo reports that google is killing the google shopping app: "the google shopping app launched only months ago, when it took over for another google shopping shutdown, google express. the google shopping service has been a rough proposition for users—starting in , it has been nothing but an ad vector that exclusively showed "paid listings" and no organic results whatsoever. this made some sense as a service that showed advertisements in little embedded boxes in google.com search results, but it was unclear why a user would download an app that exclusively shows ads." april , at : pm david. said... ron amadeo reports on le petit musée's latest acquisition in google kills its augmented reality “measure” app. june , at : pm david. said...
i missed alastair westgarth's saying goodbye to loon back in january: "while we’ve found a number of willing partners along the way, we haven’t found a way to get the costs low enough to build a long-term, sustainable business. developing radical new technology is inherently risky, but that doesn’t make breaking this news any easier. today, i’m sad to share that loon will be winding down." june , at : pm david. said... katie baker's the day the good internet died makes some excellent points: "and when google reader disappeared in , it wasn’t just a tale of dwindling user numbers or of what one engineer later described as a rotted codebase. it was a sign of the crumbling of the very foundation upon which it had been built: the era of the good internet. ... the offering certainly was, over the course of its to existence, an ideal showcase for some of the web’s prevailing strengths at the time—one of which was the emergence of the blogosphere, that backbone of what the good internet could be. ... google had purchased blogger back in , and in an interview with playboy in , founder sergey brin outlined a vision for his company that felt extremely bloggy: “we want to get you out of google and to the right place as fast as possible,” he said. but nothing gold can stay, and as smartphones and tablets and apps and social networks began to supplement and then supplant the simple-text-on-a-desktop experience near the start of the s, google’s corporate frame of mind shifted ever-inward." july , at : am
franz kafka’s the trial—it’s funny because it’s true

just because you’re paranoid doesn’t mean they’re not out to get you. jeremy irons plays kafka in steven soderbergh's kafka. © miramax . by: benjamin winterhalter, july . in franz kafka’s novel the trial, first published in , a year after its author’s death, josef k. is arrested, but can’t seem to find out what he’s accused of. as k. navigates a labyrinthine network of bureaucratic traps—a dark parody of the legal system—he keeps doing things that make him look guilty. eventually his accusers decide he must be guilty, and he is summarily executed. as kafka puts it in the second-to-last chapter, “the cathedral:” “the proceedings gradually merge into the judgment.” kafka’s restrained prose—the secret ingredient that makes this story about a bank clerk navigating bureaucracy into an electrifying page-turner—trades on a kind of dramatic irony. as the novelist david foster wallace noted in his essay “laughing with kafka,” this is kafka’s whole schtick, and it’s what makes him so funny. by withholding knowledge from the protagonist and the reader, kafka dangles the promise that all will be revealed in the end. but with every sentence the reader takes in, it feels increasingly likely that the reason for k.’s arrest will remain a mystery. as the trial follows its tragic path deeper into k.’s insular, menacing, and sexualized world, it gradually becomes clear that the answer was never forthcoming.
in fact, kafka hints at the narrator’s ignorance at the very beginning: “someone must have slandered josef k., for one morning, without having done anything wrong, he was arrested.” why the conjecture about what “must have” happened unless the narrator, who relates the story in the past tense, doesn’t know? that ignorance sets up the humor: during certain particularly insane moments in k.’s journey, the frustration of not knowing why he’s enduring it all becomes unbearable, at which point there’s no choice but to laugh.

into the unreal

many commentators on the trial have observed a sense of unreality in the novel, a feeling that something is somehow “off” that hangs like a fog over kafka’s plotline. the philosopher hannah arendt, for instance, wrote in her well-known essay on kafka: “in spite of the confirmation of more recent times that kafka’s nightmare of a world was a real possibility whose actuality surpassed even the atrocities he describes, we still experience in reading his novels and stories a very definite feeling of unreality.” for arendt, this impression of unreality derives from k.’s internalization of a vague feeling of guilt. that all-pervasive guilt becomes the means to secure k.’s participation in a corrupt legal system. she writes: this feeling, of course, is based in the last instance on the fact that no man is free from guilt. and since k., a busy bank employee, has never had time to ponder such generalities, he is induced to explore certain unfamiliar regions of his ego.
this in turn leads him into confusion, into mistaking the organized and wicked evil of the world surrounding him for some necessary expression of that general guiltiness… in other words, arendt reads the trial as a kind of controlled descent into madness and corruption, ending in a violent exaggeration of the knowledge that nobody’s perfect. the literary scholar margaret church expanded on these psychological themes in an article in twentieth century literature, pointing to “the dreamlike quality of time values and the assumption of an interior time” employed throughout the trial. according to church, this unsteady temporality implies that most of what happens in the trial isn’t real—or isn’t fully real, at any rate. in fact, she suggests, it makes as much sense to assume that “the characters are projections of k.’s mind.” likewise, the literary scholar keith fort remarked in an article in the sewanee review that it’s “the nightmarish quality of unreality that has made kafka’s name synonymous with any unreal, mysterious force which operates against man.” in other words, this type of unreality has become so closely associated with kafka that the best word to describe it is, circularly, kafkaesque. jeremy irons in steven soderbergh’s kafka ( ). license to kafka writing in critical inquiry, however, the art historian otto karl werckmeister charged that many kafka interpreters, including arendt, were engaging with a politicized fantasy of what kafka represents, rather than with the real kafka. for werckmeister, this fantastical reading of kafka’s life was best captured in steven soderbergh’s movie kafka, which casts jeremy irons as “kafka the brooding office clerk turned underworld agent”—a device that werckmeister calls “kafka ” (after, of course, the james bond franchise). 
some of these interpreters were motivated by a desire to dismiss kafka as a handwringing bourgeois do-nothing—or even a “pre-fascist.” others expressed a willingness to excuse kafka’s alleged “indifference to social policy” by appealing to his loose associations with socialist or anarchist circles. all of them, werckmeister argues, are missing the obvious: they underestimate kafka’s role as a lawyer at the worker’s accident insurance institute, in prague, where he imposed workplace safety regulations on unwilling industrial employers: thus at the highest echelons of a semipublic, government-sanctioned institution enacting social policy, kafka’s job was to regulate the social conduct of employers vis-à-vis the working class… the employers under kafka’s supervision tenaciously resisted the application of recent austrian social policy laws, which were adapted from bismarck’s legislation in germany. they contested their risk classifications, disregarded their safety norms, tried to thwart plant inspections, and evade their premium payments. the department headed by kafka was pitted against them in an adversarial relationship, no matter how conciliatory the agency’s mission was meant to be. kafka’s tales are a reflection of the deep obstacles to progress he perceived in the social reality of his time. he even “anticipated the political self-critique of literature to the point of its nonpublication,” keeping most of his writing private, then asking that the manuscripts for the trial and the castle be destroyed upon his death (a wish that was, thankfully, not honored). but the critical focus has been trained on the received version of kafka, an interpretation of his life derived under exigent circumstances—the eruption of fascism in europe—for ideological purposes. it has been trained, so to speak, on the kafkaesque, instead of on kafka himself. 
for werckmeister, it’s true that kafka’s fiction ultimately offers the most reliable guide to his political orientation, provided we understand that fiction in the context of his professional life. at work, he took the side of the working class—indeed, he represented its interests in a struggle against capital. he was “a man who tried to live his life according to principles of humanism, ethics, even religion.” as a direct result of that experience, he learned the disturbing truth that, in the law, “lies are made into a universal system,” as he wrote in the penultimate chapter of the trial. the best he could manage within the law still would be a far cry from real justice (which, kafka also knew, would have to include sexual justice to be anywhere near complete). maybe werckmeister is right about the political motives of critics like arendt, but what about the plunging sense of unease—like a feeling of falling—that no one can quite seem to shake when they first encounter kafka’s stories? “the trial” by wolfgang letti ( ). “it’s funny because it’s true” i’m here to suggest, following werckmeister, that this feeling results from the fact that kafka’s stories, despite their bizarre premises, are unnervingly real. although there is undoubtedly an element of the absurd in the worlds kafka creates, his style—unpretentious and specific, yet free from slang—renders those worlds with such painful accuracy that they seem totally familiar while we’re in them, like déjà vu or a memory of a bad dream: k. turned to the stairs to find the room for the inquiry, but then paused as he saw three different staircases in the courtyard in addition to the first one; moreover, a small passage at the other end of the courtyard seemed to lead to a second courtyard.
he was annoyed that they hadn’t described the location of the room more precisely; he was certainly being treated with strange carelessness or indifference, a point he intended to make loudly and clearly. then he went up the first set of stairs after all, his mind playing with the memory of the remark the guard willem had made that the court was attracted by guilt, from which it actually followed that the room for the inquiry would have to be located off whatever stairway k. chanced to choose. the time-bending nature of kafka’s prose, then, shouldn’t be seen as a pathological formalism—a linguistically engineered unreality—but as a reflection of kafka’s intuitive understanding of the emerging principles of modern physics, in which time itself is relative. an article by the renowned physicist werner heisenberg suggests that a high-level awareness of modern physics defines and structures the modernist sensibility in art. while “there is little ground for believing that the current world view of science has directly influenced the development of modern art,” still “the changes in the foundations of modern science are an indication of profound transformations in the fundamentals of our existence.” now that the cat is out of schrödinger’s bag (or rather, box): the old compartmentalization of the world into an objective process in space and time, on the one hand, and the soul in which this process is mirrored, on the other… is no longer suitable as the starting point for the understanding of modern science. in the field of view of this science there appears above all the network of relations between man and nature, of the connections through which we as physical beings are dependent parts of nature and at the same time, as human beings, make them the object of our thought and actions.
the “unreality” in kafka that has captivated so many commentators is what best aligns, ironically, with the current scientific worldview, which sees its own understanding of reality as necessarily partial, limited, and relative. if time in the trial seems nonlinear, that’s only because the novel is so thoroughly modern; the uneven flow of time in the novel captures the dawning scientific realization that time is neither absolute nor universal. isn’t it, after all, the sense that kafka—the voice on the page—is firmly in touch with reality that makes it feel acceptable to laugh at the deranged goings-on in the trial? his jokes are technical achievements, yes, but they also speak to a feeling of loneliness that typifies the modern condition. kafka himself couldn’t resist laughing when asked to read aloud from his work. to orchestrate this kind of laughter—to borrow a word from wallace—might have offered relief from the relentless (and political) self-criticism that drove kafka to conceal his writings. kafka’s suppression of information gets us to let our emotional guard down. he contrives narrative tension so that he can shock us, confronting us anew with injustices to which we’ve become numb.

resources

jstor is a digital library for scholars, researchers, and students. jstor daily readers can access the original research behind our articles for free on jstor.

- "from the trial," by franz kafka and breon mitchell. conjunctions, no. , paper airplane: the thirtieth issue ( ), pp. - .
- "time and reality in kafka's the trial and the castle," by margaret church. twentieth century literature, vol. , no. (jul., ), pp. - . duke university press.
- "the function of style in franz kafka's 'the trial'," by keith fort. the sewanee review, vol. , no. (autumn, ), pp. - . the johns hopkins university press.
- "kafka," by o. k. werckmeister. critical inquiry, vol. , no. (winter, ), pp. - . the university of chicago press.
- "the representation of nature in contemporary physics," by werner heisenberg. daedalus, vol. , no. , symbolism in religion and literature (summer, ), pp. - . the mit press, on behalf of american academy of arts & sciences.
dshr's blog: talk at pda
dshr's blog. i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation.
saturday, february , talk at pda i spoke at this year's personal digital archiving conference at the internet archive, following on from my panel appearance there a year ago. below the fold is an edited text of the talk with links to the sources. at last year's pda i sparked a lively discussion with my panel appearance called paying for long-term storage. i'm hoping to leave enough time for a similar discussion this year. last year's talk covered the three possible business models for long-term storage, and focused on endowment as being the only really viable one. endowment involves depositing the data together with a capital sum sufficient to pay for its storage indefinitely. the reason endowment is thought to be feasible is kryder's law, the -year history of exponential increase in disk capacity at roughly constant cost. provided that it continues for another decade or so after you deposit your data, the endowment model works. unfortunately, exponential growth curves never continue indefinitely. at some point, they stop. this leaves us with two intertwined questions: how long can we expect kryder's law to continue? how much should we charge per tb? the questions are intertwined because, obviously, the sooner kryder's law stops the more we have to charge. i was hoping that finding out how to answer these questions would be somebody else's problem. but it turned out to be my problem after all. i've been working for the library of congress on using cloud storage for a lockss box (pdf). it turns out that there are several meanings of "using cloud storage for a lockss box", and i have some of them actually working. but as i was starting to write up this work, i realised that the question i was going to get asked was "does it make economic sense to use cloud storage for a lockss box?" a real lockss box has both capital and running costs, whereas a virtual lockss box in the cloud has only running costs. 
for an apples-to-apples comparison, i need to compare cash flows through time. economists have a standard technique for comparing costs through time, called discounted cash flow (dcf). the idea is that needing to pay a dollar in a year is the same as investing less than a dollar now so that the investment plus the accrued interest in a year will be the dollar i need to pay. simple. in all the textbooks. but when i looked into it, two problems emerged. first, it doesn't work in practice. you need to know what interest rate to use. here is research from the bank of england (pdf) showing that the interest rates investors use are systematically wrong, in a way that makes endowing data, or making any other long-term investment, very difficult. second, it doesn't even work in theory. here is research from doyne farmer of the santa fe institute and john geanakoplos of yale (pdf), pointing out that (assuming you could choose the correct interest rate) using a fixed real interest rate would be ok if the outcome were linearly related to the interest rate. but it isn't. using a constant interest rate averages out periods (like the s) of high interest rates and periods (like now) of very low (or negative) real interest rates. in order to model long-term investments, you need to use monte carlo techniques with an interest rate model. similarly, if we assume that in the future storage costs will drop at varying rates, we need to use monte carlo techniques with a storage cost model. why would we believe that in the future storage costs will drop at varying rates? five reasons come to mind: first, they just did. the floods in thailand increased disk prices by - % almost overnight. these increased prices have flattened in recent months, but are expected to remain above trend for at least a year. second, you might want to use the well-known and increasingly popular "affordable cloud storage".
here's a table of the price history of four major cloud storage providers showing that the best case is a % per year price drop. that's % not %. third, disk manufacturers are already finding further increases in density difficult. to stay on the curve we should have had tb disks by the middle of last year at the latest, but all we have are tb drives. the transition to future disk technologies such as hamr and bpm is being delayed, and desperate measures, called "shingled writes", are under way to build a th generation of the current technology, pmr. shingled writes mean, among other problems, that disks are no longer randomly writable. they become an append-only medium. fourth, even if we assume that kryder's law continues, we are in for a pause in the cost drop. the market for . " disks is desktop pcs, which is collapsing. the volume consumer market is now . " drives, which are on the same curve, just at a higher price per byte. and the life of the . " form factor is also limited. if kryder's law continues until we should in theory have a $ . " drive holding tb. but no-one is going to build this drive because no-one wants tb on their laptop. how would you back it up? they would much rather have a tb " drive for $ and much less power draw. fifth, there is a hard theoretical limit to the minimal size of a magnetic domain at the temperatures in a disk drive. this means kryder's law for magnetic disks pretty much has to stop by at the latest, and probably much earlier. mark kryder and chang soo kim of cmu compared the various competing solid state technologies with the tb . " drive (pdf), and none of them looked like good candidates for continued rapid drop in storage costs beyond there. so, we need a monte carlo model. i started building one, and it rapidly became clear that this was a problem much bigger than i could solve on my own. so we have started up a research program at uc santa cruz and stony brook university, with help from netapp.
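As a baseline for the simulations that follow (my own illustration, not part of the talk), suppose both the real interest rate r and the Kryder rate k were constant. Discounting each year's storage bill back to the present, the endowment needed to pay an initial annual cost C_0 forever would have a closed form:

```latex
E \;=\; C_0 \sum_{t=0}^{\infty} \left(\frac{1-k}{1+r}\right)^{t}
  \;=\; C_0 \,\frac{1+r}{r+k},
  \qquad \text{provided } \frac{1-k}{1+r} < 1 .
```

The sum blows up as k approaches -r, which is one way to see why the required endowment is so sensitive when cost declines are slow and real rates are low, and why averaging over a single fixed rate misleads: the endowment is a convex, not linear, function of the rates. Hence the need for Monte Carlo.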
i'm about to show you some early results from this collaboration. i need to stress that this is very much work in progress. we are just at the stage of trying to understand what a comprehensive model would look like, by building simple models and seeing if they produce plausible results. the first model is work by daniel rosenthal (no relation) of ucsc. it follows a unit of storage capacity, as it might be a shelf of a filer, as the demand for storage grows, disks fail or age out and are replaced by drives storing more data, and power and other running costs are incurred. daniel's model doesn't account for the time value of money, so it can only be used for short periods. here is a graph reproducing the well-known fact that drives (or tapes in a robot) are replaced when the value of the data they hold is no longer enough to justify the space they take up, not when their service life expires. with daniel's parameters, the optimum drive replacement age is under years. the second model is my initial simulation. it follows a unit of data, say a tb, as it migrates between media as they are replaced, occupying less and less of them. unlike daniel's model, this one uses an interest rate model to properly account for the time value of money. in this case interest rates are based on the last years. here are about a thousand runs of the model. we gradually increase the endowment and each time see what probability we have of surviving years without running out of money. as you see, if storage media prices are, as we assumed, dropping % a year the variation in interest rates doesn't have a big effect. here are a few million runs of the model, varying the kryder's law rate and the endowment to get a d graph. if we take the % contour of this graph, we get the next graph. this shows the relationship between the endowment needed for a % probability of not running out of money in years, and the rate of kryder's law decrease in cost per byte, which we assume to be constant. 
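The second model just described can be sketched in miniature. This is an illustrative toy, not the actual simulation: the mean-reverting interest-rate process, the fixed Kryder rate, and all parameter values below are my own assumptions.

```python
import random

def survival_probability(endowment, years=100, initial_cost=100.0,
                         kryder_rate=0.25, mean_rate=0.02,
                         rate_sigma=0.02, n_runs=2000, seed=42):
    """Estimate the probability that `endowment` (dollars) survives
    `years` of storage payments for one unit of data.

    Each Monte Carlo run draws a random interest-rate path that
    mean-reverts around `mean_rate`, while the annual storage cost
    falls by `kryder_rate` per year. All parameters are illustrative."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_runs):
        balance, cost, rate = endowment, initial_cost, mean_rate
        ok = True
        for _ in range(years):
            balance -= cost              # pay this year's storage bill
            if balance < 0:
                ok = False               # ran out of money
                break
            # toy AR(1) interest-rate model: pull halfway back to the
            # mean, then add Gaussian noise (rates can go negative)
            rate += 0.5 * (mean_rate - rate) + rng.gauss(0, rate_sigma)
            balance *= 1 + rate          # accrue interest on the remainder
            cost *= 1 - kryder_rate      # Kryder's-law price decline
        survived += ok
    return survived / n_runs
```

Sweeping the endowment and the Kryder rate over grids and contouring the resulting survival probabilities reproduces the qualitative shape of the graphs: a flat region when the Kryder rate is high and a steep one when it is low.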
the plausible result is that the endowment is relatively insensitive to the kryder's law rate if it is large, say above %/yr. but if it is small, say below %/yr, the endowment is rather sensitive to the rate. this is one of the key insights from our work so far. storage industry experts disagree about the details but agree that the big picture is that kryder's law is slowing down. thus we're moving from the right, flat side of the graph to the left, steep side. despite being in a region of the graph where the cost is relatively low and easy to predict, the economic sustainability of digital preservation has been a major concern. going forward, digital preservation faces two big problems: the cost of preserving a unit of data will increase. the uncertainty about the cost of preserving a unit of data will increase. the next graph applies the model to cloud storage, assuming an initial cost of cents/gb/yr and interest rates from the last years. we compute the endowment needed per tb for various rates of cost decrease. for example, if costs decrease at the % rate of the last years of s , we need $ k/tb. this is a lot of money. it is clearly possible that prices in the first years of cloud storage were an anomaly, and in the near future they will start dropping as quickly as media prices. but the media price drop is slowing, and s does not appear to be under a lot of pricing pressure. unless things change, cloud storage is simply too expensive for long-term use. the last graph shows the effect on the endowment of a spike that doubles disk prices a number of years into the life of the data. the y= line has no spike for comparison. as expected, the effect is big if the kryder's law drop is slow and the spike is soon. note the ridge, which shows that if the spike happens at the -year life i assumed for the drives, you are in trouble. as i said, we are at the very early stages of this work. 
it has turned out to be a lot more interesting and difficult than i could have imagined when i spoke here last year. some of the improvements we're looking at are pluggable alternate models for interest rates (the last years may not be representative) and technology evolution (we want to model the introduction of new technologies with very different properties). we want to use these initial models to study questions such as: how does the increasing short-termism discovered by the bank of england affect the endowment required? how can we choose between storage technologies with different cost structures, such as tape, disk and solid state, as their costs evolve at different rates? can cloud storage services compete for long-term storage? by next year, we hope to have a simulation that is realistic enough for you to use for scenario planning. we are anxious to learn if you think a simulation of this kind would be useful, and what questions you would like to ask it. posted by david. labels: pda , storage costs.
comment: david. said... one interesting point that came out in questions after my talk. the internet archive is offering an endowment storage service - the cost is times the cost of the raw disk. given the replication, ingest and other costs, this number roughly matches the output from the model with reasonable kryder's law assumptions. march , at : pm
introduction - the cargo book
the cargo book
cargo is the rust package manager. cargo downloads your rust package's dependencies, compiles your packages, makes distributable packages, and uploads them to crates.io, the rust community's package registry. you can contribute to this book on github.
sections:
getting started: to get started with cargo, install cargo (and rust) and set up your first crate.
cargo guide: the guide will give you all you need to know about how to use cargo to develop rust packages.
cargo reference: the reference covers the details of various areas of cargo.
cargo commands: the commands will let you interact with cargo using its command-line interface.
frequently asked questions.
appendices: glossary; git authentication.
other documentation: changelog (detailed notes about changes in cargo in each release); rust documentation website (links to official rust documentation and tools).
library hat (http://www.bohyunkim.net/blog/)
blockchain: merits, issues, and suggestions for compelling use cases
jul th, by bohyun (library hat). * this post was also published in acrl techconnect.*
blockchain holds a great potential for both innovation and disruption. the adoption of blockchain also poses certain risks, and those risks will need to be addressed and mitigated before blockchain becomes mainstream. a lot of people have heard of blockchain at this point, but many are unfamiliar with how exactly this new technology works and are unsure under which circumstances or on what conditions it may be useful to libraries. in this post, i will provide a brief overview of the merits and the issues of blockchain. i will also make some suggestions for compelling use cases of blockchain at the end of this post.
what blockchain accomplishes blockchain is the technology that underpins a well-known decentralized cryptocurrency, bitcoin. to put it simply, blockchain is a kind of distributed digital ledger on a peer-to-peer (p2p) network, in which records are confirmed and encrypted. blockchain records and keeps data in the original state in a secure and tamper-proof manner[ ] by its technical implementation alone, thereby obviating the need for a third-party authority to guarantee the authenticity of the data. records in blockchain are stored in multiple ledgers in a distributed network instead of one central location. this prevents a single point of failure and secures records by protecting them from potential damage or loss. blocks in each blockchain ledger are chained to one another by a mechanism called ‘proof of work.’ (for those familiar with a version control system such as git, a blockchain ledger can be thought of as something similar to a p2p-hosted git repository that allows sequential commits only.[ ]) this makes records in a block immutable and irreversible, that is, tamper-proof. in areas where the authenticity and security of records are of paramount importance, such as electronic health records, digital identity authentication/authorization, digital rights management, historic materials that may be contested or challenged due to the vested interests of certain groups, and digital provenance to name a few, blockchain can lead to efficiency, convenience, and cost savings. for example, with blockchain implemented in banking, one will be able to transfer funds across different countries without going through banks.[ ] this can drastically lower the fees involved, and the transaction will take effect much more quickly, if not immediately.
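To make the chaining and proof-of-work mechanism concrete, here is a toy sketch. The trivially low difficulty and the dictionary-per-block representation are illustrative simplifications, not how production blockchains work.

```python
import hashlib
import json

def make_block(data, prev_hash, difficulty=2):
    """Mine a block: find a nonce so that the block's SHA-256 hash
    starts with `difficulty` zero hex digits (a toy proof of work),
    and chain the block to prev_hash."""
    nonce = 0
    while True:
        payload = json.dumps({"data": data, "prev": prev_hash, "nonce": nonce},
                             sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return {"data": data, "prev": prev_hash,
                    "nonce": nonce, "hash": digest}
        nonce += 1

def verify_chain(chain, difficulty=2):
    """A chain is valid only if every block's stored hash matches its
    contents, carries the proof of work, and points at the previous
    block's hash. Any edit to an earlier block breaks verification."""
    for i, block in enumerate(chain):
        payload = json.dumps({"data": block["data"], "prev": block["prev"],
                              "nonce": block["nonce"]}, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        if digest != block["hash"] or not digest.startswith("0" * difficulty):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("genesis record", "0" * 64)
chain = [genesis, make_block("record 1", genesis["hash"])]
```

Because each block's hash covers the previous block's hash, tampering with any record invalidates every later block, which is the "sequential commits only" property the git analogy points at.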
similarly, adopted in real estate transactions, blockchain can make the process of buying and selling a property more straightforward and efficient, saving time and money.[ ] disruptive potential of blockchain the disruptive potential of blockchain lies in its aforementioned ability to render obsolete the role of a third-party authority that records and validates transactions and guarantees their authenticity, should a dispute arise. in this respect, blockchain can serve as an alternative trust protocol that decentralizes traditional authorities. since blockchain achieves this by public key cryptography, however, if one loses one's private key to the blockchain ledger holding one's financial or real estate asset, for example, then that will result in the permanent loss of such asset. with the third-party authority gone, there will be no institution to step in and remedy the situation. issues the loss of a private key is only one of the issues with blockchain. other issues include (a) interoperability between different blockchain systems, (b) scalability of blockchain at a global scale with large amounts of data, (c) potential security issues such as the % attack [ ], and (d) huge energy consumption [ ] that a blockchain requires to add a block to a ledger. note that the last issue of energy consumption has both environmental and economic ramifications because it can cancel out the cost savings gained from eliminating a third-party authority and related processes and fees. challenges for wider adoption there is growing interest in blockchain among information professionals, but there are also some obstacles to those interests gaining momentum and moving further towards wider trial and adoption. one obstacle is the lack of general understanding about blockchain in a larger audience of information professionals. due to its original association with bitcoin, many mistake blockchain for cryptocurrency. another obstacle is technical.
the use of blockchain requires setting up and running a node in a blockchain network, such as ethereum[ ], which may be daunting to those who are not tech-savvy. this makes the barrier to entry high for those who are not familiar with command line scripting and yet still want to try out and test how a blockchain functions. the last and most important obstacle is the lack of compelling use cases for libraries, archives, and museums. to many, blockchain is an interesting new technology. but even many blockchain enthusiasts are skeptical of its practical benefits at this point when all associated costs are considered. of course, this is not an insurmountable obstacle. the more people get familiar with blockchain, the more ways people will discover to use blockchain in the information profession that are uniquely beneficial for specific purposes. suggestions for compelling use cases of blockchain in order to determine what may make a compelling use case of blockchain, the information profession would benefit from considering the following. what kind of data/records (or the series thereof) must be stored and preserved exactly the way they were created? what kind of information is at great risk of being altered and compromised by changing circumstances? what type of interactions may need to take place between such data/records and their users?[ ] how much would be a reasonable cost for implementation? these will help connect the potential benefits of blockchain with real-world use cases and take the information profession one step closer to its wider testing and adoption. to those further interested in blockchain and libraries, i recommend the recordings from the library . online mini-conference, "blockchain applied: impact on the information profession," held back in june. the blockchain national forum, which is funded by imls and is to take place in san jose, ca on august th, will also be livestreamed.
notes [ ] for an excellent introduction to blockchain, see “the great chain of being sure about things,” the economist, october , , https://www.economist.com/news/briefing/ -technology-behind-bitcoin-lets-people-who-do-not-know-or-trust-each-other-build-dependable. [ ] justin ramos, “blockchain: under the hood,” thoughtworks (blog), august , , https://www.thoughtworks.com/insights/blog/blockchain-under-hood. [ ] the world food programme, the food-assistance branch of the united nations, is using blockchain to increase their humanitarian aid to refugees. blockchain may possibly be used for not only financial transactions but also the identity verification for refugees. russ juskalian, “inside the jordan refugee camp that runs on blockchain,” mit technology review, april , , https://www.technologyreview.com/s/ /inside-the-jordan-refugee-camp-that-runs-on-blockchain/. [ ] joanne cleaver, “could blockchain technology transform homebuying in cook county — and beyond?,” chicago tribune, july , , http://www.chicagotribune.com/classified/realestate/ct-re- -blockchain-homebuying- -story.html. [ ] “ % attack,” investopedia, september , , https://www.investopedia.com/terms/ / -attack.asp. [ ] sherman lee, “bitcoin’s energy consumption can power an entire country — but eos is trying to fix that,” forbes, april , , https://www.forbes.com/sites/shermanlee/ / / /bitcoins-energy-consumption-can-power-an-entire-country-but-eos-is-trying-to-fix-that/# ff aa bc . [ ] osita chibuike, “how to setup an ethereum node,” the practical dev, may , , https://dev.to/legobox/how-to-setup-an-ethereum-node- a . [ ] the interaction can also be a self-executing program when certain conditions are met in a blockchain ledger. 
this is called a "smart contract." see mike orcutt, "states that are passing laws to govern 'smart contracts' have no idea what they're doing," mit technology review, march , , https://www.technologyreview.com/s/ /states-that-are-passing-laws-to-govern-smart-contracts-have-no-idea-what-theyre-doing/.
taking diversity to the next level
dec th, by bohyun (library hat). ** this post was also published in acrl techconnect on dec. , .***
"building bridges in a divisive climate: diversity in libraries, archives, and museums," panel discussion program held at the university of rhode island libraries on thursday november , .
getting minorities on board i recently moderated a panel discussion program titled "building bridges in a divisive climate: diversity in libraries, archives, and museums." participating in organizing this program was an interesting experience. during the whole time, i experienced my perspective constantly shifting back and forth as (i) someone who is a woman of color in the us who experiences and deals with small and large daily acts of discrimination, (ii) an organizer/moderator trying to get as many people as possible to attend and participate, and (iii) a mid-career librarian trying to contribute to the group efforts to find a way to move the diversity agenda forward in a positive and inclusive way in my own institution. in the past, i have participated in multiple diversity-themed programs either as a member of the organizing committee or as an attendee and have been excited to see colleagues organize and run such programs. but when asked to write or speak about diversity myself, i always hesitated and declined. this puzzled me for a long time because i couldn't quite pinpoint where my own resistance was coming from.
i am writing about this now because i think it may shed some light on why it is often difficult to get minorities on board with diversity-related efforts. a common issue that many organizers experience is that these diversity programs often draw many allies who are already interested in working on the issue of diversity, equity, and inclusion but not necessarily a lot of those whom the organizers consider to be the target audience, namely, minorities. what may be the reason? perhaps i can find a clue for the answer to this question from my own resistance regarding speaking or writing about diversity, preferring rather to be in the audience with a certain distance or as an organizer helping with logistics behind the scenes. to be honest, i always harbored a level of suspicion about how much of the sudden interest in diversity is real and how much of it is simply about being on the next hot trend. trends come and go, but issues lived through many lives of those who belong to various systematically disadvantaged and marginalized groups are not trends. although i have always been enthusiastic about participating in diversity-focused programs as an attendee and was happy to see diversity, equity, and inclusion discussed in articles and talks, i wasn't ready to sell out my lived experience as part of a hot trend, a potential fad. to be clear, i am not saying that any of the diversity-related programs or events were asking speakers or authors to be a sell-out. i am only describing how things felt to me and where my own resistance was originating. i have been and am happy to see diversity discussed even as a one-time fad. better a fad than no discussion at all. one may argue that diversity has been actively discussed for quite some time now. a few years, maybe several, or even more.
some of the prominent efforts to increase diversity in librarianship i know, for example, go as far back as when oregon state university libraries sponsored two scholarships to the code lib conference, one for women and the other for minorities, which have continued from then on as the code lib diversity scholarship. but if one has lived one's entire life as a member of a systematically disadvantaged group either as a woman, a person of color, a person of certain sexual orientation, a person of a certain faith, a person with a certain disability, etc., one knows better than to expect some sudden interest in diversity to change the world we live in and most of the people overnight. i admit i have been watching the diversity discussion gaining more and more traction in librarianship with growing excitement and concern at the same time. for i felt that all of what is being achieved through so many people's efforts may get wiped out at any moment. the more momentum it accrues, i worried, the more serious backlash it may come to face. for example, it was openly stated that seeking racial/ethnic diversity is superficial and for appearance's sake and that those who appear to belong to "team diversity" do not work as hard as those in "team mainstream." people make this type of statement in order to create and strengthen a negative association between multiple dimensions of diversity that are all non-normative (such as race/ethnicity, religion, sexual orientation, immigration status, disability) and unfavorable value judgements (such as inferior intellectual capacity or poor work ethic).
according to this kind of flawed reasoning, a tech company whose entire staff consists of twenty-something white male programmers with a college degree may well have achieved a high level of diversity, because the staff might have potentially (no matter how unlikely) substantial intellectual and personal differences in their thinking, background, and experience, and therefore their clear homogeneity is no real problem. that's just a matter of trivial "appearance." the motivation behind this kind of intentional misdirection is to derail current efforts towards expanding diversity, equity, and inclusion by taking people's attention away from the real issue of systematic marginalization in our society. of course, the ultimate goal of all diversity efforts should be not the mere inclusion of minorities but enabling them to have agency equal to that which the privileged already possess. but note that objections are being raised against mere inclusion. anti-diversity sentiment is real, and people will try to rationalize it in any way they can. then of course, the other source of my inner resistance to speaking or writing about diversity has been the simple fact that thinking about diversity, equity, and inclusion does not take me to a happy place. it reminds me of many bad experiences accumulated over time that i would rather not revisit. this is why i admire those who have spoken and written about their lived experience as members of systematically discriminated against and marginalized groups. their contribution is a remarkably selfless one. i don't have a clear answer to how this reflection on my own resistance against actively speaking or writing about diversity will help future organizers. but clearly, being asked to join many times had an effect, since i finally did accept the invitation to moderate a panel and wrote this article. so, if you are serious about getting more minorities – whether in different religions, genders, disabilities, races, etc.
– to speak or write on the issue, then invite them, and be ready to do it over and over again even if they decline. don’t expect that they will trust you at the first invitation. understand that by accepting such an invitation, minorities risk far more than non-minorities ever will. in the survey i ran for the registrants of the “building bridges in a divisive climate: diversity in libraries, archives, and museums” panel discussion program, several respondents cited the backlash at their workplaces that did or may result from participating in diversity efforts as a serious deterrent. if we would like to see more minorities participate in diversity efforts, we must create a safe space for everyone and take steps to deal with the potential backlash that may ensue afterwards.
a gentle intro or a deep dive?
another issue that many organizers of diversity-focused events, programs, and initiatives struggle with is two conflicting expectations from their audience. on one hand, there are those who are familiar with diversity, equity, and inclusion issues and want to see how institutions and individuals are going to take their initial efforts to the next level. these people often come from organizations that have already implemented certain pro-diversity measures, such as search advocates for the hiring process and educational programs that familiarize the staff with the topic of diversity, equity, and inclusion. on the other hand, there are still many who are not quite sure what diversity, equity, and inclusion exactly mean in a workplace or in their lives. those people would continue to benefit from a gentle introduction to things such as privilege, microaggression, and unconscious bias. the feedback surveys collected after the “building bridges in a divisive climate: diversity in libraries, archives, and museums” panel discussion program showed these two different expectations.
some people responded that they deeply appreciated the personal stories shared by the panelists, noting that they had not realized how often minorities are marginalized even in a single day. others, however, said they would like to hear more about actionable items and strategies, beyond personal stories, that can be implemented to further advance the values of diversity, equity, and inclusion. balancing these two different demands is a hard act for organizers. however, this is a testament to our collective achievement: more and more people are aware of the importance of continuing efforts to improve diversity, equity, and inclusion in libraries, archives, and museums. i do think that we need to continue to provide a general introduction to diversity-related issues, exposing people to the everyday experience of marginalized groups – such as micro-invalidation and impostor syndrome – and to basic concepts like white privilege, systematic oppression, colonialism, and intersectionality. one of the comments we received via the feedback survey after our diversity panel discussion program was that the program was most relevant in that it made “having colleagues attend with me to hear what i myself have never told them” possible. general programs and events can be an excellent gateway to more open and less guarded discussion. at the same time, it seems to be high time for us in libraries, museums, and archives to take a deep dive into different realms of diversity, equity, and inclusion as well. diversity comes in many dimensions, such as age, disability, religion, sexual orientation, race/ethnicity, and socioeconomic status. many of us feel more strongly about one issue than others. we should create opportunities for ourselves to advocate for the specific diversity issues that we care most about. the only thing i would emphasize is that one specific dimension of diversity should not be used as an excuse to neglect others.
exploring socioeconomic inequality issues without also addressing how they combine with the systematic oppression of marginalized groups such as native americans, women, or immigrants would be an example of such a case. all dimensions of diversity are closely knitted with one another; they do not exist independently. for this reason, a deep dive into different realms of diversity, equity, and inclusion must be accompanied by a strong awareness of their intersectionality.
recommendations and resources for future organizers
organizing a diversity-focused program takes a lot of effort. while planning the “building bridges in a divisive climate: diversity in libraries, archives, and museums” panel discussion program at the university of rhode island libraries, i worked for approximately two months with my library dean, karim boughida, who originally came up with the idea of having a panel discussion program at the university of rhode island libraries, and with renee neely on the libraries’ diversity initiatives. for panelists, we decided to recruit as many minorities from diverse institutions and backgrounds as possible. we were fortunate to find panelists from a museum, an archive, and both a public and an academic library, with experience in the field ranging from only a few years to over twenty-five, from a relatively new archivist to an experienced museum director and a library director. our panel consisted entirely of people of color. the thoughts and perspectives that those panelists shared were, as a result, remarkably diverse and insightful. for this reason, i recommend spending some time getting the right speakers if your program will have speakers.
discussion at the “building bridges in a divisive climate: diversity in libraries, archives, and museums” panel at the university of rhode island libraries
another thing i would like to share is the set of questions that i created for the panel discussion.
even though we had a whole hour, i was able to cover only a few of them. but since i discussed all of these questions in advance with the panelists, and they helped me put the final touches on some of them, i think they can be useful to future organizers who may want to run a similar program. they can be used for a panel discussion, an unconference, or other types of programs. i hope this is helpful and saves time for other organizers.
sample questions for the diversity panel discussion
why should libraries, archives, and museums pay attention to the issues related to diversity, equity, and inclusion?
in what ways do you think the lack of diversity in our profession affects the perception of libraries, museums, and archives in the communities we serve?
do you have any personal or work-related stories that you would like to share that relate to diversity, equity, and inclusion issues? how did you get interested in diversity, equity, and inclusion issues?
suppose you discovered that your library’s, archive’s, or museum’s collection includes prejudiced information, controversial objects/documents, or hate-inducing material. what would you do?
suppose a group of your library / archive / museum patrons wants to use your space to hold a local gathering that involves hate speech. what would you do? what would you be most concerned about, and what would you consider in deciding how to respond?
do you think libraries, archives, and museums are neutral places? what do you think neutrality means in practice for a library, an archive, or a museum in a divisive climate such as the current one?
what are some of the areas in libraries, museums, and archives where you see privilege and marginalization function as barriers to achieving our professional values – equal access and critical thinking? what can we do to remove those barriers?
could you tell us how colonialist thinking and practice are affecting libraries, museums, and archives, either consciously or unconsciously? since not everyone is familiar with what colonialism is, please begin with your own brief interpretation of what colonialist thinking or practice looks like in libraries, museums, and archives.
what more do you think libraries, archives, and museums can do to improve critical thinking in the communities that we serve?
although libraries, archives, and museums have been making efforts to recruit, hire, and retain diverse personnel in recent years, the success rate has been relatively low. for example, in librarianship, it has been reported that those hired through these efforts often experienced backlash at their own institutions, were subject to unrealistic expectations, and met with an unsupportive environment, which led to burnout and a low retention rate of talented people. from your perspective – either as a manager hiring people or as a relatively new librarian who has looked for jobs – what do you think can be done to improve this type of unfortunate situation?
many in our profession express hesitation to actively participate in diversity, equity, and inclusion-related discussions and initiatives at their institutions because of the backlash from their own coworkers. what do you think we can do to minimize such backlash?
some people in our profession express strong negative feelings regarding diversity, equity, and inclusion-related initiatives. how much of this type of anti-diversity sentiment do you think exists in your field? some worry that it is growing even faster in the current divisive and intolerant climate. what do you think we can do to counter such anti-diversity sentiment?
there are many who are resistant to the values of diversity, equity, and inclusion. have you taken any action to promote and advance these values in the face of such resistance?
if so, what was your experience like, and what are some of the strategies you would recommend to others working with those people?
many people in our profession want to take our diversity, equity, and inclusion initiatives to the next level, beyond offering mere lip service or simply playing a numbers game for statistics. what do you think that next level may be?
lastly, i felt strongly about ensuring that the terms and concepts often thrown around in diversity/equity/inclusion-related programs and events – such as intersectionality, white privilege, microaggression, patriarchy, colonialism, and so on – are not used to unintentionally alienate those who are unfamiliar with them. these concepts are useful and convenient shortcuts that allow us to communicate a large set of ideas previously discussed and digested, so that we can move our discussion forward more efficiently. they should not make people feel uncomfortable nor generate any hint of superiority or inferiority. to this end, i created a pre-program survey, which all program registrants were encouraged to take. my survey simply asked people how familiar and how comfortable they were with a variety of terms. at the panel discussion program, we also distributed a glossary of these terms, so that everyone could become familiar with them. also, videos can quickly bring all attendees up to speed on some basic concepts and phenomena in the diversity discussion. for example, at the beginning of our panel discussion program, i played two short videos, “life of privilege explained in a $100 race” and “what if we treated white coworkers the way we treat minority coworkers?”, which were well received by the attendees. i am sharing the survey questions, the video links, and the glossary in the hope that they may be a useful tool for future organizers.
for example, one may decide to provide a glossary like this before the program or run an unconference that aims at unpacking the meanings of these terms and discussing how they relate to people’s daily lives.
in closing: diversity, libraries, technology, and our own biases
disagreements on social issues are natural, but the divisiveness that we are currently experiencing seems to be particularly intense. this deeply concerns us, educators and professionals working in libraries, archives, and museums. libraries, archives, and museums are public institutions dedicated to promoting and advancing civic values. diversity, equity, and inclusion are part of the core civic values that move our society forward. this task, however, has become increasingly challenging as our society moves in a more and more divisive direction. to make matters even more complicated, libraries, archives, and museums in general lack diversity in their staff composition, and this homogeneity can impede achieving our own mission. according to the recent report from ithaka s+r released this august, we do not appear to have gotten very far. their report, “inclusion, diversity, and equity: members of the association of research libraries (arl) – employee demographics and director perspectives,” shows that libraries and library leadership/administration are both markedly white-dominant ( % and % white non-hispanic, respectively). also, while librarianship in general is female-dominant ( %), the technology field in libraries is starkly male ( %), along with makerspace ( %), facilities ( %), and security ( %) positions. the survey results in the report show that while the majority of library directors say there are barriers to achieving more diversity in their library, they attribute those barriers to external rather than internal factors – such as the library’s geographic location and the insufficiently diverse applicant pool resulting from that location.
what is fascinating, however, is that this directly conflicts with the report’s finding that libraries show little variation in the ratio of white staff by degree of urbanization. equally interesting is that the staff in more homogeneous, less diverse (over % white non-hispanic) libraries think that their libraries are much more equitable than the library community at large ( % vs. %), and that library directors (and staff) consider their own library to be more equitable, diverse, and inclusive than the library community with respect to almost every category, such as race/ethnicity, gender, lgbtq, disabilities, veterans, and religion. while these findings in the ithaka s+r report are based upon survey results from arl libraries, similar staff composition and attitudes can be assumed to apply to libraries in general. there is a great need for both library administration and staff to understand their own unconscious and implicit biases, workplace norms, and organizational culture, which may well be thwarting their own diversity efforts. diversity, equity, and inclusion have certainly been a topic of active discussion in recent years. many libraries have established a committee or a task force dedicated to improving diversity. but how are those efforts paying off? are they going beyond simply paying lip service? are they making a real difference to the everyday experience of minority library workers? can we improve, and if so, where and how? where do we go from here? those are the questions that we will need to examine in order to take our diversity efforts in libraries, archives, and museums to the next level.
notes
the program description is available at https://web.uri.edu/library/ / / /building-bridges-in-a-divisive-climate-diversity-in-libraries-archives-and-museums/ ↩ carol bean, ranti junus, and deborah mouw, “conference report: code4libcon ,” the code4lib journal, no. (march , ), http://journal.code4lib.org/articles/ .
↩ note that this kind of biased assertion often masquerades as an objective intellectual pursuit in academia, when in reality it is a direct manifestation of an existing prejudice reflecting the limited and shallow experience of the person posing the question. a good example of this is the remark made in 2005 by larry summers, then the president of harvard. he suggested that one reason for the relatively small number of women in top positions in science may be “issues of intrinsic aptitude” rather than the widespread, indisputable everyday discrimination against women. he resigned after the harvard faculty of arts and sciences cast a vote of no confidence. see scott jaschik, “what larry summers said,” inside higher ed, february , , https://www.insidehighered.com/news/ / / /summers _ . ↩ our pre-program survey questions can be viewed at https://docs.google.com/forms/d/e/ faipqlscp-nqnkhaqli_ pvdidw-dqzraflycdikutu dzjqm f ra/viewform. ↩ for this purpose, asking all participants in advance to respect one another’s privacy can be a good policy. in addition, we specifically decided not to stream or record our panel discussion program, so that both panelists and attendees could freely share their experiences and thoughts. ↩ a good example is the search advocate program at oregon state university. see http://searchadvocate.oregonstate.edu/. ↩ for an example, see the workshops offered by the office of community, equity, and inclusion of the university of rhode island at https://web.uri.edu/diversity/ced-inclusion-courses-overview/. ↩ for the limitations of the mainstream diversity discussion in lis (library and information science), with its focus on inclusion and cultural competency, see david james hudson, “on ‘diversity’ as anti-racism in library and information studies: a critique,” journal of critical library and information studies , no. (january , ), https://doi.org/ . /jclis.v i .
↩ you can see our glossary at https://drive.google.com/file/d/ uci huuytrelgny-dbnsoxf_ilpm n/view?usp=sharing; this glossary was put together by renee neely. ↩ for the nitty-gritty logistical details of organizing a large event with a group of local and remote volunteers, check the organizer’s toolkit created by the #critlib unconference organizers at https://critlib .wordpress.com/organizers-toolkit/. ↩ roger schonfeld and liam sweeney, “inclusion, diversity, and equity: members of the association of research libraries,” ithaka s+r, august , , http://www.sr.ithaka.org/publications/inclusion-diversity-and-equity-arl/. ↩ for an early discussion of diversity-focused recruitment in library technology, see jim hahn, “diversity recruitment in library information technology,” acrl techconnect blog, august , , https://acrl.ala.org/techconnect/post/diversity-recruitment-in-library-information-technology. ↩ see april hathcock, “white librarianship in blackface: diversity initiatives in lis,” in the library with the lead pipe, october , , http://www.inthelibrarywiththeleadpipe.org/ /lis-diversity/ and angela galvan, “soliciting performance, hiding bias: whiteness and librarianship,” in the library with the lead pipe (blog), june , , http://www.inthelibrarywiththeleadpipe.org/ /soliciting-performance-hiding-bias-whiteness-and-librarianship. ↩ posted in: diversity. tagged: equity · inclusion · resources
from need to want: how to maximize social impact for libraries, archives, and museums
oct th, by bohyun (library hat). comments are off for this post
at the ndp at three event organized by imls yesterday, sayeed choudhury, on the “open scholarly communications” panel, suggested that libraries think about return on impact in addition to return on investment (roi). he further elaborated on this point by proposing a possible description of such impact.
his description was this: when an object or resource created through scholarly communication efforts is used by someone we don’t know and is interpreted correctly without contacting us (libraries, archives, museums, etc.), that is an impact; to push it further, if someone uses the object or resource in a way we didn’t anticipate, that’s an impact; if it is integrated into someone’s workflow, that’s also an impact. this emphasis on impact as a goal for libraries, archives, and museums (or, to apply it broadly, non-profit organizations in general) resonated with me, particularly because just a few days ago i gave a talk to a group of librarians at the iolug conference about how libraries can and should maximize their social impact through innovation, the way many social entrepreneurs have already been doing for quite some time. in this post, i would like to revisit one point that i made in that talk: a specific interpretation of the idea of maximizing social impact as a conscious goal for libraries, archives, and museums (lam). hopefully, this will provide a useful heuristic for lam institutions in mapping out their future efforts. considering that roi is a measure of cost-effectiveness, i believe impact is a much better goal than roi for lam institutions. we often think that the goal of a library, an archive, or a museum is to collect, organize, provide equitable access to, and preserve information, knowledge, and cultural heritage. but doing that well doesn’t mean simply doing it cost-effectively. our efforts no doubt aim at better-collected, better-organized, better-accessed, and better-preserved information, knowledge, and cultural heritage. however, our ultimate end-goal is attained only when such information, knowledge, and cultural heritage is better used by our users.
not simply better accessed, but better used in the sense that a person gets to leverage such information, knowledge, and cultural heritage to succeed in whatever endeavor s/he is pursuing, whether it be career success, advanced education, personal fulfillment, or private business growth. in my opinion, that’s the true impact that lam institutions should aim at. if that kind of impact were a destination, cost-effectiveness would simply be one mode of transportation – a preferred one, maybe, but not quite comparable to the destination in importance. but what does “better used” exactly mean? “integrated into people’s workflow” is a hint; “unanticipated use” is another clue. if you are like me and need to create and design that kind of integrated or unanticipated use at your library, archive, or museum, how will you go about it? this is the same question we ask over and over again: how do you plan and implement innovation? yes, we will go talk to our users and ask what they would like to see, meet with our stakeholders and find out what their interests and concerns are, discuss among ourselves what we can do to deliver the things our users want, and go from there to another wonderful project we work hard on. then, after all that, we reach a stage where we stop and wonder where that “greater social impact” went in almost all of our projects. and we frantically look for numbers. how many people accessed what we created? how many downloads? what does the satisfaction survey say? in those moments, how does the “impact” verbiage help us? how does it help us chart an actual path to creating and maximizing our social impact any more than the old-fashioned “roi” verbiage does? at least roi is quantifiable and measurable. this, i believe, is why we need a more concrete heuristic to translate the lofty “impact” into everyday “actions” we can take.
maybe not so specific as to dictate what exactly those actions are at each project level, but specific enough to let us frame the value we are attempting to create and deliver at our lam institutions beyond cost-effectiveness. i think the heuristic we need is the conversion of need to demand. what is an untapped need that people are not even aware of in the realm of information, knowledge, and cultural heritage? when we can identify such a need in a specific form and successfully convert that need into a demand, we make an impact. by “demand,” i mean the kind of user experience that people will desire and subsequently fulfill by using the object, resource, tool, or service we create at our library, archive, or museum. (one good example of such desirable ux that comes to my mind is the nypl photo booth: https://www.nypl.org/blog/ / / /snapshots-nypl.) when we create a demand out of such an untapped need, and when the fulfillment of that demand effectively creates, strengthens, and enriches our society in the direction of information, knowledge, evidence-based decisions, and truth being more valued, promoted, and equitably shared, we get to maximize our social impact. in the last “going forward” panel, where information discovery was discussed, loretta parham pointed out that in the corporate sector, information finds consumers, not the other way around. by contrast, we (by which i mean all of us working at lam institutions) still frame our value in terms of helping and supporting users in accessing and using our material, resources, and physical and digital objects and tools. this is a mistake in my opinion, because it is a self-limiting value proposition for libraries, archives, and museums. what is the point of us lam institutions working so hard to get the public to use our resources and services? the end goal is to maximize our social impact through such use.
the rhetoric of “helping and supporting people to access and use our resources” does not adequately convey that. businesses want their clients to use their goods and services, of course. but their real target is the profit made from those uses, aka purchases. similarly, but far more importantly, the real goal of libraries, archives, and museums is to move society forward, closer to a state in which knowledge, evidence-based decisions, and truth are more valued, promoted, and equitably shared. one person at a time, yes, but with the ultimate goal reaching far beyond individuals. the end goal is maximizing our impact on the side of the public good.
posted in: librarianship, library, management, usability, user experience. tagged: archives · change · d d · design thinking · digital collection · goal · impact · innovation · libraries · museums · ndpthree · social entrepreneurship · ux
how to price 3d printing service fees
may nd, by bohyun (library hat). comments are off for this post
*** this post was originally published in acrl techconnect on may , . ***
many libraries today provide 3d printing service. but not all of them can afford to do so for free. while free 3d printing may be ideal, it can jeopardize the sustainability of the service over time. nevertheless, many libraries tend to worry about charging service fees. in this post, i will outline how i determined the pricing scheme for our library’s new 3d printing service, in the hope that more libraries will consider offering 3d printing service if having to charge a fee is a factor stopping them. but let me begin with libraries’ general aversion to fees.
a 3d printer in action at the health sciences and human services library (hs/hsl), univ. of maryland, baltimore
service fees are not your enemy
charging fees for a library service is not something librarians should regard as taboo.
we live in times in which libraries are being asked to create and provide more and more new and innovative services to help users successfully navigate the fast-changing information landscape. a makerspace and 3d printing are certainly among those new and innovative services. but at many libraries, the operating budget is shrinking rather than increasing. so the most obvious choice in this situation is to aim for cost recovery. remember that even when a library aims for cost recovery, it will only be partial cost recovery, because a lot of staff time and expertise is spent on planning and operating such new services. libraries should not be afraid to introduce new services requiring service fees, because users will often still benefit from those services much more than from a commercial equivalent (if any). think of service fees as your friend. without them, you won’t be able to introduce and continue to provide a service that your users need. it is a business cost to be expected, and libraries will not make a profit from it (even if they try). still bothered? almost every library charges for regular (paper) printing. should a library rather not provide printing service because it cannot be offered for free? library users certainly wouldn’t want that.
determining your service fees
what do you need in order to create a pricing scheme for your library’s 3d printing service? (a) first, you need to list all cost-incurring factors. those include (i) the equipment cost and wear and tear, (ii) electricity, (iii) staff time and expertise for support and maintenance, and (iv) any consumables, such as 3d print filament and painter’s tape. remember that your new 3d printer will not last forever and will need to be replaced by a new one in a few years. also, some of these cost-incurring factors, such as staff time and expertise for support, are fixed per 3d print job.
on the other hand, another cost-incurring factor, 3d print filament, is a cost that increases in proportion to the size/density of the 3d model printed. that is, the larger and denser a 3d print model is, the more filament will be used, incurring more cost. (b) second, make sure that your pricing scheme is readily understood by users. does it quickly give users a rough idea of the cost before their 3d print job begins? an obscure pricing scheme can confuse users and may deter them from trying out a new service. that would be a bad user experience. also, in 3d printing, consider whether you will charge for a failed print. perhaps you will. perhaps you won’t. maybe you want to charge a fee that is lower than that for a successful print. whichever you decide on, have it covered, since failed prints will certainly happen. (c) lastly, the pricing scheme should be easily handled by the library staff. the more library staff are involved in the entire process of a library patron using the 3d printing service from beginning to end, the more important this becomes. if the pricing scheme is difficult for the staff to work with when they need to charge for and process each 3d print job, the new 3d printing service will increase their workload significantly. which staff will be responsible for which step of the new service? what are the exact tasks that the staff will need to do? for example, it may be that several staff at the circulation desk need to learn and handle new tasks involving the 3d printing service, such as labeling and putting away completed 3d models, processing the payment transaction, delivering the model, and marking the job status for the paid 3d print job as ‘completed’ in the 3d printing staff admin portal, if there is such a system in place. below is a screenshot of the hs/hsl 3d printing staff admin portal, developed in-house by the library it team.
the hs/hsl 3d printing staff admin portal, university of maryland, baltimore
examples – 3d printing service fees
it’s always helpful to see how other libraries are doing it when you need to determine your own pricing scheme. here are some examples showing how ten libraries’ 3d printing pricing schemes changed over three recent years.
unr delamare library https://guides.library.unr.edu/3dprinting
– $ . per cubic inch of modeling material (raised to $ . starting july, ).
– uprint – model material: $ . per cubic inch (= . gm = . lb)
– uprint – support materials: $ . per cubic inch
ncsu hunt library https://www.lib.ncsu.edu/do/3d-printing
– uprint 3d printer: $ per cubic inch of material (abs), with a $ minimum
– makerbot 3d printer: $ . per gram of material (pla), with a $ minimum
– uprint – $ per cubic inch of material, $ minimum
– f – $ . per gram of material, $ minimum
southern illinois university library http://libguides.siue.edu/3d/request
– originally $ per hour of printing time; reduced to $ as the demand grew.
– lulzbot taz , lulzbot mini – $ . per hour of printing time.
byu library http://guides.lib.byu.edu/c.php?g= &p=
– makerbot replicator / ultimaker extended: $ . per gram for standard ( . mm) resolution; $ . per gram for high ( . mm) resolution.
university of michigan library
– the cube 3d printer checkout is no longer offered.
– cost for professional 3d printing service; open-access 3d printing is free.
gvsu library https://www.gvsu.edu/techshowcase/makerspace- .htm
– $ . per gram with a $ . minimum
– free (ultimaker +, makerbot replicator , , x)
university of tennessee, chattanooga library http://www.utc.edu/library/services/studio/3d-printing/index.php
– makerbot th, th – $ . per gram
port washington public library http://www.pwpl.org/3d-printing/3d-printing-guidelines/
– makerbot – $ per hour of printing time
miami university
– $ . per gram of the finished print; – ?
ucla library, dalhousie university library ( )
– free

types of 3d printing service fees

from the examples above, you will notice that many 3d printing service fee schemes are based upon the weight of a 3d-printed model. this is because these libraries are trying to recover the cost of the 3d filament, and the amount of filament used is most accurately reflected in the weight of the resulting 3d-printed model. however, there are a few problems with the weight-based 3d printing pricing scheme. first, it is not readily calculable by a user before the print job, because to do so the user would have to weigh a model that s/he won't have until it is 3d-printed. also, once the model is 3d-printed, the staff have to weigh it and calculate the cost. this is time-consuming and not very efficient. for this reason, my library considered an alternative pricing scheme based on the size of a 3d model. the idea was that we would have roughly three different sizes of an empty box – small, medium, and large – with three different prices assigned. whichever box a user's 3d-printed object fits into would determine how much the user pays for her/his 3d-printed model. this seemed like a great idea because, compared to the weight-based pricing scheme, it makes it easy for both users and the library staff to determine how much a model will cost to 3d-print. unfortunately, this size-based pricing scheme has a few significant flaws. first, a smaller model may use more filament than a larger model if it is denser (meaning a higher infill ratio). second, depending on the shape of a model, a model that fits in a large box may use much less filament than one that fits in a small box. think about a large tree model with thin branches, and compare that with a % filled compact baseball model that fits into a smaller box than the tree model does. third, the resolution, which determines the layer height, may change the amount of filament used even when the same model is 3d-printed.
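the density point is easy to see with a quick back-of-the-envelope calculation. the sketch below uses made-up model dimensions, solid fractions, and infill ratios, plus a typical pla filament density; all numbers are hypothetical, not measurements from our printers.

```python
# a minimal sketch (hypothetical numbers) showing why box size alone is a
# poor proxy for filament use: a small, dense model can out-weigh a large,
# sparse one.

PLA_DENSITY = 1.24  # g/cm^3, a typical value for pla filament

def filament_grams(bounding_volume_cm3, solid_fraction, infill_ratio):
    """estimate filament mass: the bounding-box volume, times how much of
    it is actually part geometry, times how densely that geometry is
    filled, times the material density."""
    return bounding_volume_cm3 * solid_fraction * infill_ratio * PLA_DENSITY

# a compact "baseball" that nearly fills a small box, printed fully solid
small_dense = filament_grams(bounding_volume_cm3=65.0,
                             solid_fraction=0.9, infill_ratio=1.0)

# a "tree" with thin branches that needs a large box but is mostly air
large_sparse = filament_grams(bounding_volume_cm3=400.0,
                              solid_fraction=0.05, infill_ratio=0.2)

print(f"small dense model: {small_dense:.1f} g")   # ~72.5 g
print(f"large sparse model: {large_sparse:.1f} g") # ~5.0 g
assert small_dense > large_sparse  # bigger box, far less filament
```

the same function also shows the infill effect directly: the identical model printed at a higher infill ratio simply scales up in filament mass, which is why a size-based box price cannot track the actual material cost.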
different infill ratios – image from https://www.packtpub.com/sites/default/files/article-images/ os_ _ .png

charging based upon the 3d printing time

so we couldn't go with the size-based pricing scheme. but we did not like the problems of the weight-based pricing scheme, either. as an alternative, we decided to go with a time-based pricing scheme, because printing time is proportional to how much filament is used, but it does not require that the staff weigh the model each time. 3d printing software gives an estimate of the printing time, and most 3d printers also display the actual printing time for each model printed. first, we wanted to confirm the hypothesis that 3d printing time and the weight of the resulting model are proportional to each other. i tested this by translating the weight-based cost to the time-based cost based upon the estimated printing time and the estimated weight of several cube models. here is the result i got using the makerbot replicator x.

. gm / min = . gm per min.
. gm / min = . gm per min.
. gm / min = . gm per min.
. gm / min = . gm per min.
. gm / min = . gm per min.
. gm / min = . gm per min.

there is some variance, but the hypothesis holds up. based upon this, now let's calculate the 3d printing cost by time. 3d plastic filament is $ for abs/pla and $ for the dissolvable per . kg (= . lb) from makerbot. that means that the filament cost is $ . per gram for abs/pla and $ . per gram for the dissolvable. so, 3d filament cost is cents per gram on average.

finalizing the service fee for 3d printing

for an hour of 3d printing time, the amount of filament used would be . gm (= . x min). this gives us a filament cost of cents per hour of 3d printing (= . gm x cents). so, for the cost-recovery of filament only, i get roughly $ per hour of 3d printing time. earlier, i mentioned that filament is only one of the cost-incurring factors for the 3d printing service.
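the verification step above can be sketched in a few lines. the (grams, minutes) pairs and the per-gram cost below are placeholders, not the actual figures from our cube tests, but the procedure is the same: compute grams-per-minute for each test print, check that the rate is roughly constant across prints, then convert the average rate into an hourly filament cost.

```python
# hypothetical (estimated grams, estimated minutes) pairs standing in for
# the cube-model test prints
samples = [(12.0, 55), (20.5, 95), (33.0, 150), (8.2, 40), (26.0, 118)]

rates = [g / m for g, m in samples]        # grams per minute, per print
avg_rate = sum(rates) / len(rates)
spread = max(rates) - min(rates)

print(f"avg rate: {avg_rate:.3f} g/min, spread: {spread:.3f} g/min")

# if the spread is small relative to the average, weight is roughly
# proportional to time, so a time-based fee can stand in for a
# weight-based one (15% is an arbitrary tolerance for this sketch)
assert spread / avg_rate < 0.15

# translate the rate into an hourly filament cost (hypothetical $/gram)
cost_per_gram = 0.05
filament_cost_per_hour = avg_rate * 60 * cost_per_gram
print(f"filament cost per hour: ${filament_cost_per_hour:.2f}")
```

with these placeholder numbers the rate stays within a few percent of its average, which is the "some variance, but the hypothesis holds up" conclusion in miniature.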
it's time to bring in those other factors, such as hardware wear/tear, staff time, electricity, maintenance, etc., plus the "no-charge-for-failed-print" policy, which was adopted at our library. those other factors will add an additional amount per 3d print job, and at my library this came out to be about $ . (i will not go into details about how these were determined, because they will differ at each library.) so, the final service fee for our new 3d printing service was set to be $ for up to an hour of 3d printing + $ per additional hour of 3d printing. the $ breaks down into $ per hour of 3d printing, which accounts for the filament cost, and a $ fixed cost for every 3d print job. to help our users quickly get an idea of how much their 3d print job will cost, we added a feature to the hs/hsl 3d print job submission form online. this feature automatically calculates and displays the final cost based upon the printing time estimate that a user enters.

the hs/hsl 3d print job submission form, university of maryland, baltimore

don't be afraid of service fees

i would like to emphasize that libraries should not be afraid to set service fees for new services. as long as the fees are easy to understand and the staff can explain the reasons behind them, they should not be a deterrent to a library trying to introduce and provide a new, innovative service. there is a clear benefit in running through all cost-incurring factors and communicating how the final pricing scheme was determined (including the verification of the hypothesis that 3d printing time and the weight of the resulting model are proportional to each other) to all library staff who will be involved in the new 3d printing service. if any library user inquires about or challenges the service fee, the staff will be able to provide a reasonable explanation on the spot.
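the fee structure described above (a flat fee covering the fixed per-job cost plus the first hour of filament, then a per-hour charge for each additional hour) is what the online submission form computes from the user's printing time estimate. here is a minimal sketch; the dollar amounts and the round-up-to-whole-hours rule are assumptions for illustration, not our library's actual figures.

```python
import math

# placeholder fee constants, not the library's actual fees
BASE_FEE = 3.00        # fixed per-job cost + first hour of filament
PER_EXTRA_HOUR = 1.00  # filament-only rate for each additional hour

def print_job_cost(estimated_minutes):
    """cost shown to the user, computed from the slicer's time estimate."""
    hours = math.ceil(estimated_minutes / 60)  # round up to whole hours
    extra_hours = max(0, hours - 1)            # first hour is in the base fee
    return BASE_FEE + extra_hours * PER_EXTRA_HOUR

print(print_job_cost(45))   # under an hour -> base fee only -> 3.0
print(print_job_cost(150))  # 2.5 h rounds up to 3 h -> 3.0 + 2 * 1.0 -> 5.0
```

because the only input is the time estimate, a user can price a job before submitting it and the staff never need to weigh a finished model, which was the whole point of moving to time-based pricing.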
i implemented this pricing scheme at the same time as the launch of my library's makerspace (the hs/hsl innovation space at the university of maryland, baltimore – http://www.hshsl.umaryland.edu/services/ispace/) back in april . we have been providing the 3d printing service and charging for it for more than two years. i am happy to report that during that entire duration, we have not received any complaints about the service fee. no library user expected our new 3d printing service to be free, and all the comments that we received regarding the service fee were positive. many expressed surprise at how inexpensive our 3d printing service is and thanked us for it. to summarize, libraries should be willing to explore and offer new, innovative services even when they require charging service fees. and if you do so, make sure that the resulting pricing scheme for the new service is (a) sustainable and accountable, (b) readily graspable by users, and (c) easily handled by the library staff who will handle the payment transaction. good luck and happy 3d printing at your library!

an example model with the 3d printing cost and the filament info displayed at the hs/hsl, university of maryland, baltimore

posted in: library, management, technology, user experience. tagged: 3d printer · 3d printing · budget · charge · cost · funding · makerspace · service fees · sustainability · user experience · ux

post-election statements and messages that reaffirm diversity

nov th, by bohyun (library hat).

these are statements and messages sent out publicly or internally to re-affirm diversity, equity, and inclusion by libraries or higher ed institutions. i have collected these – some myself and many others through my fellow librarians. some of them were listed on my blog post, "finding the right words in post-election libraries and higher ed," so there are some duplicates.
if you think that your organization is already so pro-diversity that there is no need to confirm or re-affirm diversity, you couldn't be further from the everyday reality that minorities experience. sometimes, saying something isn't much. but right now, saying it out loud can mean everything. if you support those who belong to minority groups but don't say it out loud, how would they know? right now, nothing is obvious other than that there is a lot of hate and violence towards minorities. feel free to use these as a resource to craft a similar message, and feel free to add any similar messages you have received or created in the comments section. if you haven't heard from the organization you belong to, please ask for a message reaffirming and committing to diversity, equity, and inclusion. [update / / : statements from ala and lita have been released. i have added them below.] i will continue to add additional statements as i find them. if you see anything missing, please add it below in the comments or send it via twitter @bohyunkim. thanks!

from librarians

but i know that there will be libraries – librarian zoe fisher

to other librarians: care for one another – director chris bourg to the mit libraries staff

finding the right words in post-election libraries and higher ed (my e-mail sent to the it team at university of maryland, baltimore health sciences and human services library)

with a pin and a prayer – dean k. g.
schneider to the sonoma state university library staff

from library associations

lita
ala
pla
arl
dlf
code lib [draft in github]

from libraries

james madison university libraries
northwestern university libraries
university of oregon libraries

from higher ed institutions

clarke university
cuny
duke university
mit
loyola university, maryland
northwestern university
penn state university
the catholic university of america
university of california
university of michigan
university of nebraska, lincoln
university of nevada, reno
university of oregon
university of rochester and rochester institute of technology
university of florida – addressing racially charged flyers on the campus
marshall university – president jerome a. gilbert's statement regarding post-election tweet

drexel university – moving on as a community after the election

dear members of the drexel community, it is heartening to me to see the drexel community come together over the last day to digest the news of the presidential election — and to do so in the spirit of support and caring that is so much a part of this university. we gathered family-style, meeting in small, informal groups in several places across campus, including the student center for inclusion and culture, our residence halls, and as colleagues over a cup of coffee. many student leaders, particularly from our multicultural organizations, joined the conversation. this is not a process that can be completed in just one day, of course. so i hope these conversations will continue as long as students, faculty and professional staff feel they are needed, and i want to assure you that our professional staff in student life, human resources, faculty affairs, as well as our colleagues in the lindy center for civic engagement, will be there for your support. without question, many members of our community were deeply concerned by the inflammatory rhetoric and hostility on the campaign trail that too often typified this bitter election season.
as i wrote over the summer, the best response to an uncertain and at times deeply troubling world is to remain true to our values as an academic community. in the context of a presidential election, it is vital that we understand and respect that members of our broadly diverse campus can hold similarly diverse political views. the expression of these views is a fundamental element of the free exchange of ideas and intellectual inquiry that makes drexel such a vibrant institution. at the same time, drexel remains committed to ensuring a welcoming, inclusive, and respectful environment. those tenets are more important than ever. while we continue to follow changes on the national scene, it is the responsibility of each of us at drexel to join together to move ahead, unified in our commitment to open dialogue, civic engagement and inclusion. i am grateful for all you do to support drexel as a community that welcomes and encourages all of its members. lane community college good morning, colleagues, i am in our nation’s capital today. i’d rather be at home! like me, i am guessing that many of you were glued to the media last night to find out the results of the election. though we know who our next president will be, this transition still presents a lot of uncertainty. it is not clear what our future president’s higher education policies will be but we will be working with our national associations to understand and influence where we can. during times like this there is an opening for us to decide how we want to be with each other. moods will range from joy to sadness and disbelief. it seems trite but we do need to work together, now more than ever. as educators we have a unique responsibility to create safe learning environments where every student can learn and become empowered workers and informed citizens. this imperative seems even more important today. 
our college values of equity and inclusion have not changed and will not change, and it is up to each of us to ensure that we live out our values in every classroom and in each interaction. preparing ourselves and our students for contentious discussions sparked by the election is work we must do. it is quite likely that some of our faculty, staff and students may be feeling particularly vulnerable right now. can we reach out to each other and let each other know that we all belong at lane? during my in-service remarks i said that "we must robustly reject the calculated narrative of cynicism, division and despair. instead of letting this leak into our narratives, together we can bet on hope not fear, respect not hate, unity not division." at lane we have the intellect (and we are proud of it) and the wherewithal to do this. i am attaching a favorite reading from meg wheatley which is resonating with me today and will end with gary snyder's words from "to the children": …stay together, learn the flowers, go light.

maryland institute college of art – post-election community forums and support

dear campus community, no matter how each of us voted yesterday, most of us likely agree that the presidential campaign has been polarizing on multiple fronts. as a result, today is a difficult day for our nation and our campus community. in our nation, regardless of how one has aligned with a candidate, half of our country feels empowered and the other half sad and perhaps angry. because such dynamics and feelings need to be addressed and supported on campus, this memo outlines immediate resources for our community of students, faculty and staff, and describes opportunities for fashioning dialogues and creative actions going forward. before sharing the specifics, let me say unambiguously that mica will always stand firm in our commitment to diversity and inclusion.
this morning’s presidential task force on diversity, inclusion, equity, and globalization meeting discussed measures to ensure that, as a creative community, we will continue to build a culture where everyone is honored and supported for success. the impact of exhibitions such as the current baltimore rising show remains as critical as ever, and mica fosters an educational environment that is welcoming of all. in the short term our focus is to support one another. whether you are happy or distressed with the results, there has been sufficient feedback to indicate that our campus community is struggling with how to make sense of such a divisive election process. you may find the following services helpful and are encouraged to take advantage of them: for students: student counseling maintains walk-in hours from : – : pm every day. students are welcome to stop by the student counseling center ( mt. royal avenue) during that time or call - - and enter x once the recording begins to schedule an appointment. for faculty and staff: the employee assistance program (eap) is available to provide free, confidential support hours a day. the eap can be reached by calling - - - or visiting healthadvocate.com/members and providing the username “maryland institute college of art”. for all mica community members: mica’s chaplain, the rev, maintains standing hours every monday and can be reached in the reflection room (meyerhoff house) or by calling the office of diversity and intercultural development at - - . there are three events this week that can provide a shared space for dialogue; all are welcome: the “after the baltimore uprising: still waiting for change” community forum attached to the baltimore rising exhibition takes place tonight from : pm to : pm in the lazarus center. an open space for all mica community members will be hosted by the black student union tonight at : pm in the meyerhoff house underground. 
in partnership with our student nami group, mica will host a “messages of hope” event for the entire mica community that will allow for shared space and reflection. this event will be on friday, november th, and will begin at : pm in cohen plaza. in various upcoming meetings we look forward to exploring with campus members other appropriate activities that can be created to facilitate expressions and dialogues. a separate communication is coming from provost david bogen to the faculty regarding classroom conversations with students regarding the election. northwestern university women’s center dear northwestern students, faculty, staff and community members: the women’s center is open today. our staff members are all here and available to talk, to provide resources and tools, or to help however you might need it. most importantly, the space itself is available for whatever you need, whether that is to gather as a group, to sit alone somewhere comfortable and quiet, or to talk to someone who will listen. we are still here, and we are here for all people as an intentionally intersectional space. you are welcome to drop by physically, make a call to our office, or send an email. know that this space is open and available to you. portland community college to the pcc staff as someone who spent the last several years in washington d.c. working to advance community colleges, i feel a special poignancy today hearing so many students, colleagues, and friends wonder and worry about the future—and about their futures. we must acknowledge that this political season has highlighted deep divisions in our society. today i spent time with cabinet speaking about how we can assert our shared values and take positive action as a pcc community to deepen our commitment to equity, inclusion and civic engagement. pcc will always welcome students and colleagues who bring a rich array of perspectives and experiences. that diversity is among our greatest strengths. 
today it is imperative that we stand by faculty, staff and students who may be experiencing fear or uncertainty—affirming with our words and deeds that pcc is about equitable student success and educational opportunity for all. never has this mission been more powerful or more essential. i have only been here a few months, but have already learned that pcc is a remarkable and caring community. much is happening right now in real time, and i appreciate the efforts of all. for my part, i promise to communicate often as we continue to plan for our shared future. p.s. today and in the days ahead, we will be holding space for people to be together in community. here are a few of the opportunities identified so far. portland community college to students dear students: as someone who spent the last several years working in washington d.c., i feel a special poignancy this week hearing many of you express worry and uncertainty about the future. there is little doubt that this political season has highlighted some deep divisions in our society. both political candidates have acknowledged as much. at the same time, people representing the full and diverse spectrum of our country come to our nation’s community colleges in hopes of a better life. pcc is such a place – where every year thousands of students find their path and pursue their dreams. all should find opportunity here, and all should feel safe and welcome. the rich diversity of pcc offers an amazing opportunity for dialogue across difference, and for developing skills that are the foundation of our democratic society. let this moment renew your passion for making a better life for yourself, your community and your country and for becoming the kind of leader you want to follow. 
rutgers university aaup-aft (american association of university professors – american federation of teachers) resisting donald trump we are shocked and horrified that donald trump, who ran on a racist, xenophobic, misogynist platform, is now the president of the us. in response to this new political landscape, the administrative heads of several universities have issued statements embracing their diverse student, faculty, and staff bodies and offering support and protection. (see statements from the university of california and the california state university). president barchi has yet to address the danger to the rutgers community and its core mission. this afternoon, our faculty union and the rutgers one coalition held an emergency meeting of students, faculty, and community activists in new brunswick. we discussed means of responding to the attacks that people may experience in the near future. most immediately, we approved the following statement by acclamation at the -strong meeting: “rutgers one, a coalition of faculty, staff, students and community members, calls upon the rutgers administration to join us in condemning all acts of bigotry on this campus and refuse to tolerate any attacks on immigrants, women, arabs, muslims, people of color, lgbtq people and all others in our diverse community. we demand that president barchi and his administration provide sanctuary, support, and protection to those who are already facing attacks on our campuses. we need concrete action that can ensure a safe environment for all. further, we commit ourselves to take action against all attempts by the trump administration to target any of our students, staff or faculty. we are united in resistance to bigotry of every kind and welcome all to join us in solidarity.” we also resolved to take the following steps: we will be holding weekly friday meetings at pm in our union office in new brunswick to bring together students, faculty and staff to organize against the trump agenda. 
we hope to expand these to camden and newark as well. (if you are willing to help organize this, please email back.) we will be creating a listserv to coordinate our work. if you want to join this list, please reply to this email. we are making posters and stickers which declare sanctuaries from racism, xenophobia, sexism, bigotry, religious intolerance, and attacks on unions. once these materials are ready we will write to you so that you may post them on windows, office doors, cars, etc. in the meantime, we urge you to talk to your students and colleagues of color as well as women and offer them your support and solidarity. as you may recall, the executive committee issued a denunciation of donald trump on october , . now our slogan, one from the labor movement, is "don't mourn. organize!" that is where we are now – all the more poignantly because of donald trump's appeal to workers. let us organize, and let us also expand our calling of education. in your classrooms, your communities, and your families, find the words and sentiments that will redeem all of us from tuesday's disgrace.

university of chicago – message from president and provost

early in the fall quarter, we sent a message welcoming each of you to the new academic year and affirming our strong commitment to two foundational values of the university – fostering an environment of free expression and open discourse; and ensuring that diversity and inclusion are essential features of the fabric of our campus community and our interactions beyond campus. recent national events have generated waves of disturbing, exclusionary and sometimes threatening behavior around the country, particularly concerning gender and minority status. as a result, many individuals are asking whether the nation and its institutions are entering a period in which supporting the values of diversity and inclusion, as well as free expression and open discourse, will be increasingly challenging.
as the president and provost of the university of chicago, we are writing to reaffirm in the strongest possible terms our unwavering commitment to these values, and to the importance of the university as a community acting on these values every day. fulfilling our highest aspirations with respect to these values and their mutual reinforcement will always demand ongoing attention and work on the part of all of us. the current national environment underscores the importance of this work. it means that we need to manifest these values more rather than less, demand more of ourselves as a community, and together be forthright and bold in demonstrating what our community aspires to be. we ask all of you for your help and commitment to the values of diversity and inclusion, free expression, and open discourse and what they mean for each of us working, learning, and living in this university community every day. university of illinois, chicago dear students, faculty, and staff, the events of the past week have come with mixed emotions for many of you. we want you to know that uic remains steadfast in its commitment to creating and sustaining a community that recognizes and values the inherent worth and dignity of every person, while fostering an environment of mutual respect among all members. today, we reaffirm the university’s commitment to access, equity, inclusion and nondiscrimination. critical to this commitment is the work of several offices on campus that provide resources to help you be safe and successful. if you have questions, need someone to talk to, or a place to express yourself, you should consider contacting these offices: office for access and equity (oae). oae is responsible for assuring campus compliance in matters of equal opportunity, affirmative action, and nondiscrimination in the academic and work environment. oae also offers dispute resolution services (drs) to assist with conflict in the workplace not involving unlawful discrimination matters. 
uic counseling center. the uic counseling center is a primary resource providing comprehensive mental health services that foster personal, interpersonal, academic, and professional thriving for uic students. student legal services. uic's student legal services (sls) is a full-service law office dedicated to providing legal solutions for currently enrolled students. office of diversity. the office of diversity leads strategic efforts to advance access, equity, and inclusion as fundamental principles underpinning all aspects of university life. it initiates programs that promote an inclusive university climate, partners with campus units to formulate systems of accountability, and develops links with the local community and alumni groups. centers for cultural understanding and social change. the centers for cultural understanding and social change (ccusc) are a collaborative group of seven centers with distinct histories, missions, and locations that promote the well-being of and cultural awareness about underrepresented and underserved groups at uic. uic dialogue initiative. the uic dialogue initiative seeks to build an inclusive campus community where students, faculty, and staff feel welcomed in their identities, valued for their contributions, and able to express their identities openly. through whatever changes await us, as a learning community we have a special obligation to ensure that our conversations and dialogues over the next weeks and months respect our varied backgrounds and beliefs.

university of maryland, baltimore

to the umb community: last week, we elected a new president for our country. i think most will agree that the campaign season was long and divisive, and has left many feeling separated from their fellow citizens. in the days since the election, i've heard from the leaders of umb and of the university of maryland medical center and of the many programs we operate that serve our neighbors across the city and state.
these leaders have relayed stories of students, faculty, staff, families, and children who feel anxious and unsettled, who feel threatened and fearful. it should be unnecessary to reaffirm umb's commitment to diversity, inclusion, and respect — these values are irrevocable — but when i hear that members of our family are afraid, i must reiterate that the university will not tolerate incivility of any kind, and that the differences we celebrate as a diverse community include not just differences of race, religion, nationality, gender, and sexual identity, but also of experience, opinion, and political affiliation and ideology. if you suffer any harassment, please contact your supervisor or your student affairs dean. in the months ahead, we will come together as a university community to talk about how the incoming administration might influence the issues we care about most: health care access and delivery; education; innovation; social justice and fair treatment for all. we will talk about the opportunities that lie ahead to shape compassionate policy and to join a national dialogue on providing humane care and services that uplift everyone in america. for anyone who despairs, we will talk about building hope. should you want to share how you're feeling post-election, counselors are available. please contact the student counseling center or the employee assistance program to schedule an appointment. i look forward to continuing this conversation about how we affirm our fundamental mission to improve the human condition and serve the public good. like the values we uphold, this mission endures — irrespective of the person or party in political power. it is our binding promise to the leaders of this state and, even more importantly, to the citizens we serve together.
university of west georgia dear colleagues, as we head into the weekend concluding a week, really several weeks, of national and local events, i am reminded of the incredible opportunity of reflection and discourse we have as a nation and as an institution of higher learning. this morning, we held on campus a moving ceremony honoring our veterans–those who have served and who have given the ultimate sacrifice to uphold and protect our freedoms.  it is those freedoms that provide the opportunity to elect a president and those freedoms that provide an environment of civil discourse and opinion.  clearly, the discourse of this election cycle has tested the boundaries. this is an emotional time for many of our faculty, staff, and students.  i ask that as a campus community we hold true to the intended values of our nation and those who sacrificed to protect those values and the core values of our institution–caring, collaboration, inclusiveness, and wisdom.  we must acknowledge and allow the civil discourse and opinion of all within a safe environment.  that is what should set us apart.  it is part of our dna in higher education to respect and encourage variance and diversity of belief, thought, and culture. i call on your professionalism during these times and so appreciate your passion and care for each other and our students. virginia commonwealth university to staff election message dear vcu and vcu health communities, yesterday, we elected new leaders for our city, commonwealth and nation. i am grateful to those of you who made your voice heard during the electoral process, including many of our students who voted for the first time. whether or not your preferred candidate won, you were a part of history and a part of the process that moves our democracy forward. thank you. i hope you will always continue to make your voice heard, both as voters and as well-educated leaders in our society. 
as with any election, some members of our community are enthusiastic about the winners, others are not. for many, this election cycle was notably emotional and difficult. now is the time, then, to demonstrate the values that make virginia commonwealth university such a remarkable place. we reaffirm our commitment to working together across boundaries of discipline or scholarship, as members of one intellectual community, to achieve what’s difficult. we reaffirm our commitment to inclusion, to ensuring that every person who comes to vcu is respected and emboldened to succeed. we reaffirm that we will always be a place of the highest integrity and accountability, and that we will offer an unyielding commitment to serving those who need us. history changes with every election. what does not change are the commitments we share as one community that is relentlessly focused on advancing the human experience for all people. you continue to inspire me. and i know you will continue to be a bright light for richmond, virginia, our nation and our world. virginia commonwealth university school of education to students election message dear students, on tuesday we elected new leaders for our city, our commonwealth and our nation. although leadership will be changing, i echo dr. rao’s message below in that our mission outlined by the quest for distinction to support student success, advance knowledge and strengthen our communities remains steadfast. at the vcu school of education, we work to create safe spaces where innovation, inclusion and collaboration can thrive. we actively work across boundaries and disciplines to address the complex challenges facing our communities, schools and families. the election of new leaders provides new opportunities for our students, faculty and staff to build bridges that help us reach our goal of making an impact in urban and high need environments. 
i encourage you to engage in positive dialogues with one another as the city, commonwealth and nation adjust to the change in leadership, vision and strategy. virginia commonwealth university division of student affairs dear students, we are writing to you, collectively, as leaders in the division of student affairs. we acknowledge that this election season was stressful for many individuals in our vcu community, culminating with the election of the next president. some members of our campus community have felt disrespected, attacked and further marginalized by political rhetoric during the political process. we want to affirm support of all of our students while also recognizing the unique experiences and concerns of individuals. we want all students to know that we are here to support you, encourage you and contribute to your success. we now live in a space of uncertainty as we transition leadership in our nation. often, with this uncertainty comes a host of thoughts and feelings. we hope that you will take advantage of some of the following services and programs we offer through our division to support your well-being, including: office of multicultural student affairs, self-care space, university counseling services, the wellness resource center, trans lives matter panel and survivor solidarity support, recreational sports, restorative yoga and mind & body classes. we encourage students to express their concerns and engage in conversations that further the core values articulated in quest, the vcu strategic plan. we continue to have an opportunity to make individual and collective choices about how we work to bridge differences in a manner that builds up our community. our staff will have a table each day next week on the vcu compass from noon to : p.m. to receive your concerns, suggestions and just listen. please stop by to meet us. we want you to know you have our full support. 
other organizations aclu joint statement from california legislative leaders on result of presidential election posted in: diversity, librarianship, library, management. tagged: college · communication · diversity · election · equity · higher ed · inclusion · library · university finding the right words in post-election libraries and higher ed nov th, by bohyun (library hat). comments are off for this post **this post was originally published in acrl techconnect on nov. , .** this year’s election result has presented a huge challenge to all of us who work in higher education and libraries. usually, libraries, universities, and colleges do not comment on presidential election results, and we refrain from talking about politics at work. but these are not usual times that we are living in. a black female student was shoved off the sidewalk and called the ‘n’ word at baylor university. the ku klux klan is openly holding a rally. west virginia officials publicly made a racist comment about the first lady. steve bannon’s prospective appointment as the chief strategist and senior counsel to the new president is being praised by white nationalist leaders and fiercely opposed by civil rights groups at the same time. bannon is someone who calls for an ethno-state, openly calls martin luther king a fraud, and laments white dispossession and the deconstruction of occidental civilization. there are people drawing a swastika at a park. the ‘whites only’ and ‘colored’ signs were put up over water fountains in a florida school. a muslim student was threatened with a lighter. asian-american women are being assaulted. hostile acts targeting minority students are taking place on college campuses. libraries and educational institutions exist because we value knowledge and science. knowledge and science do not discriminate. they grow across all different races, ethnicities, religions, nationalities, sexual identities, and disabilities. 
libraries and educational institutions exist to enable and empower people to freely explore, investigate, and harness different ideas and thoughts. they support, serve, and belong to ‘all’ who seek knowledge. no matter how naive it may sound, they are essential to the betterment of human lives, and they do so by creating strength from all our differences, not likeness. this is why diversity, equity, and inclusion are non-negotiable and irrevocable values in libraries and educational institutions. how do we reconcile these values with the president-elect who openly dismissed and expressed hostility towards them? his campaign made remarks and promises that can be interpreted as nothing but the most blatant expressions of racism, sexism, intolerance, bigotry, harassment, and violence. what will we do to address the concerns of our students, staff, and faculty about their physical safety on campus due to their differences in race, ethnicity, religion, nationality, gender, and sexual identity? how do we assure them that we will continue to uphold these values and support everyone regardless of what they look like, how they identify their gender, what their faiths are, what disabilities they may have, who they love, where they come from, what languages they speak, or where they live? how? we say it. explicitly. clearly. and repeatedly. if you think that your organization is already so pro-diversity that there is no need to confirm or reaffirm it, you couldn’t be further from the everyday life minorities experience. sometimes, saying isn’t much. but right now, saying it out loud can mean everything. if you support those who belong to minority groups but don’t say it out loud, how would they know it? right now, nothing is obvious other than that there is a lot of hate and violence towards minorities. the entire week after the election, i agonized about what to say to my small team of it people whom i supervise at work. 
as a manager, i felt that it was my responsibility to address the anxiety and uncertainty that some of my staff – particularly those in minority groups – would be experiencing due to the election result. i also needed to ensure that whatever dialogue took place between those who were pleased and those who were distressed with the election result remained civil and respectful. crafting an appropriate message was much more challenging than i anticipated. i felt very strongly about the need to re-affirm the unwavering support and commitment to diversity, equity, and inclusion, particularly in relation to libraries and higher education, no matter how obvious it may seem. i also felt the need to establish (within the bounds of my limited authority) that we will continue to respect, value, and celebrate diversity in interacting with library users as well as other library and university staff members. employees are held to the standard expectations of their institutions, such as diversity, equity, inclusion, tolerance, civil dialogue, and no harassment or violence towards minorities, even if their private opinions conflict with them. at the same time, i wanted to strike a measured tone and neither scare nor upset anyone, whichever side they were on in the election. as a manager, i have to acknowledge that everyone is entitled to their private opinions as long as they do not harm others. i suspect that many of us – whether managers or not – want to say something similar about the election result: not so much about who won or who should have won as about what we are going to do now in the face of these public incidents of anger, hatred, harassment, violence, and bigotry directed at minority groups, which are coming out at an alarming pace and which affect all of us, not just minorities. finding the right words, however, is difficult. you have to carefully consider your role, audience, and the message you want to convey. 
the official public statement from a university president is going to take a tone vastly different from an informal private message a supervisor sends out to a few members of his or her team. a library director’s message to library patrons assuring continued service for all groups of users with no discrimination will likely be quite different from the one she sends to her library staff to assuage their anxiety and fear. so that this difficulty does not delay or stop us from saying what we have to and want to say to everyone we work with and care for, i am sharing the short message that i sent out to my team last friday, days after the election. (n.b. ‘cats’ stands for ‘computing and technology services’ and umb refers to ‘university of maryland, baltimore.’) this is a customized message to address my own team. i am sharing this as a potential template for you to craft your own message. i would like to see more messages that reaffirm diversity, equity, and inclusion as non-negotiable values, explicitly state that we will not step backwards, and make a commitment to continued unwavering support for them. dear cats, this year’s close and divisive election left a certain level of anxiety and uncertainty in many of us. i am sure that we will hear from president perman and the university leadership soon. in the meantime, i want to remind you of something i believe to be very important. we are all here – just as we have been all along – to provide the most excellent service to our users regardless of what they look like, what their faiths are, where they come from, what languages they speak, where they live, and who they love. a library is a powerful place where people transform themselves through learning, critical thinking, and reflection. a library’s doors have been kept open to anyone who wants to freely explore the world of ideas and pursue knowledge. libraries are here to empower people to create a better future. 
a library is a place for mutual education through respectful and open-minded dialogues. and we, the library staff and faculty, make that happen. we get to make sure that people’s ethnicity, race, gender, disability, socio-economic backgrounds, political views, or religious beliefs do not become an obstacle to that pursuit. we have a truly awesome responsibility. and i don’t have to tell you how vital our role as cats members is in our library’s fulfilling that responsibility. whichever side we stood on in this election, let’s not forget to treat each other with respect and dignity. let’s use this as an opportunity to renew our commitment to diversity, one of umb’s core values. inclusive excellence is one of the themes of the umb strategic plan. each and every one of us has a contribution to make because we are stronger for our differences. we have much work ahead of us! i am out today, but expect lots of donuts monday. have a great weekend, bohyun. on monday, i brought in donuts of many different kinds and told everyone they were ‘diversity donuts.’ try it. i believe it was successful in easing some of the stress and tension that was palpable in my team after the election. photo from flickr: https://www.flickr.com/photos/vnysia/ before crafting your own message, i recommend re-reading your institution’s core values, mission and vision statements, and the most recent strategic plan. most universities, colleges, and libraries include diversity, equity, inclusion, or something equivalent to these somewhere. also review all public statements or internal messages from your institution that reaffirm diversity, equity, and inclusion. you can easily incorporate those into your own message. make sure to clearly state your (and your institution’s) continued commitment to and unwavering support for diversity and inclusion, and explicitly oppose bigotry, intolerance, harassment, and acts of violence. encourage civil discourse and mutual respect. 
it is very important to reaffirm the values of diversity, equity, and inclusion ‘before’ listing any resources and help that employees or students may seek in case of harassment or assault. without the assurance from the institution that it indeed upholds those values and will firmly stand by them, those resources and help mean little. below i have also listed messages, notes, and statements sent out by library directors, managers, librarians, and university presidents that reaffirm the full support for and commitment to diversity, equity, and inclusion. i hope to see more of these come out. if you have already received or sent out such a message, i invite you to share in the comments. if you have not, i suggest doing so as soon as possible. send out a message if you are in a position where doing so is appropriate. don’t forget to ask for a message addressing those values if you have not received any from your organization. director chris bourg to the mit libraries staff https://chrisbourg.wordpress.com/ / / /care-for-one-another/ dean k. g. 
schneider to the sonoma state university library staff http://freerangelibrarian.com/ / / /pin-and-a-prayer/ librarian zoe fisher to other librarians https://quickaskzoe.com/ / / /but-i-know-that-there-will-be-libraries/ university of california statement on presidential election results https://www.universityofcalifornia.edu/press-room/university-california-statement-election university of nevada, reno http://www.unr.edu/president/communications/ - - -election university of michigan http://president.umich.edu/news-communications/letters-to-the-community/ -election-message/ university of rochester and rochester institute of technology http://wxxinews.org/post/ur-presidents-post-election-letter-strikes-sour-note-some duke university https://today.duke.edu/ / /statement-president-brodhead-following- -election clarke university http://www.clarke.edu/page.aspx?id= mit https://news.mit.edu/ /letter-mit-community-new-administration-washington- northwestern university https://news.northwestern.edu/stories/ / /president-schapiro-on-the-election-and-the-university/ “post-election statements and messages that reaffirm diversity” (a list of more post-election statements and messages that reaffirm diversity)   posted in: diversity, librarianship, library, management. tagged: diversity · election · equity · inclusion · message · post-election · statement · template · tolerance say it out loud – diversity, equity, and inclusion nov th, by bohyun (library hat). comments are off for this post i usually and mostly talk about technology. but technology is so far away from my thought right now. i don’t feel that i can afford to worry about internet surveillance or how to protect privacy at this moment. not that they are unimportant. such a worry is real and deserves our attention and investigation. 
but at a time like this, when so many public incidents of hatred, bigotry, harassment, and violence are being reported on university and college campuses, on streets, and in many neighborhoods at an alarming pace, i don’t find myself reflecting on how we can use technology to deal with this problem. for the problem is so much bigger. there are people drawing a swastika at a park. the ‘whites only’ and ‘colored’ signs were put up over water fountains in a florida school. a muslim student was threatened with a lighter. asian-american women are being assaulted. hostile acts targeting minority students are taking place on college campuses. a black female student was shoved off the sidewalk and called the ‘n’ word at baylor university. newt gingrich called for a house committee for un-american activities. the ku klux klan is openly holding a rally. the list goes on and on. photo from http://www.wftv.com/news/local/investigation-underway-after- -racist-signs-posted-above-water-fountains-at-first-coast-high-school/ we are justified in freaking out. i suspect this is a deal breaker not just for democrats, not just for clinton supporters, but for a whole lot more people. not everyone who voted for donald trump endorses the position that women, people of color, muslims, lgbt people, and all other minority groups should be deprived of the basic human right not to be publicly threatened, harassed, and assaulted, i hope. i am sure that many who voted for donald trump do support diversity, equity, and inclusion as important and non-negotiable values. i believe that many who voted for donald trump do not want a society where some of their family, friends, colleagues, and neighbors have to live in constant fear for their physical safety at minimum. there are very many white people who absolutely condemn bigotry, threat, hatred, discrimination, harassment, and violence directed at minorities and give their unwavering support to diversity, equity, and inclusion. 
the problem is that i don’t hear it said loudly enough, clearly enough, publicly enough. i realized that we – myself included – do not say this enough. one of my fellow librarians, steve, wrote this on his facebook wall after the election. i am a year old white guy. … i go out into the world today and i’m trying to hold a look on my face that says i don’t hate you black people, hispanic people, gay people, muslim people. i mean you no harm. i don’t want to deport you or imprison you. you are my brothers and sisters. i want for you all of the benefits, the rights, the joys (such as they are) that are afforded to everybody else in our society. i don’t think this look on my face is effective. why should they trust me? you can never appear to be doing the right thing. it requires doing the right thing. of course, steve doesn’t want to harm me because i am not white, i know. i am positive that he wouldn’t assault me because i am female. but by stating this publicly (i mean as far as his fb friends can see the post), he made a difference to me. steve is not a republican. but i would feel so much better if people i know told me the same thing, whether they are democrats or republicans. and i think it will make a huge difference to others when we all say this together. sometimes, saying isn’t much. but right now, saying it aloud can mean everything. if you support those who belong to minority groups but don’t say it out loud, how would they know it? because right now, nothing is obvious other than that there is a lot of hate and violence towards minorities. at this point, which candidate you voted for doesn’t matter. what matters is whether you will condone open hatred and violence towards minorities and women, thereby making it acceptable in our society. there is a lot at stake here, and this goes way beyond party politics. publicly confirming our continued support for and unwavering commitment to diversity is a big deal. 
people who are being insulted, threatened, harassed, and assaulted need to hear it. and when we say this together loudly enough, clearly enough, explicitly enough, it will deafen the voice of hatred, bigotry, and intolerance and chase it away to the margins of our society again. so i think i am going to say this whenever i have a chance, whether formally or informally, whether in written form or in conversation. if you are a librarian, you should say this to your library users. if you are a teacher, you should say this to your students. if you run a business, you need to say this to your employees and customers. if you manage a team at work, tell your team. say this out loud to your coworkers, friends, family, neighbors, and everyone you interact with. “i support all minorities and stand for diversity, equity, and inclusion.” “i object to and will not condone the acts of harassment, violence, hatred, and threats directed at minorities.” “i will not discriminate against anyone based upon their ethnicity, race, sexual orientation, disability, political views, socio-economic backgrounds, or religious beliefs.” we cannot allow diversity, equity, and inclusion to become minority opinions. and it is up to us to keep them mainstream and to make them prevail. say it aloud and act on it. in times like this, many of us look to the institutions that we belong to, the organizations we work for, professionally participate in, or personally support. we expect them to reconfirm the very basic values of diversity, equity, and inclusion. since i work for a university, i have been looking up and reading statements from higher education institutions. so far, not a great number of universities have made public statements confirming their continued support for diversity. i am sure more are on the way. but i expected more of them would come out more promptly. 
this is unfortunate because many of them have openly expressed their support for diversity and even include diversity in their values, mission, and goals. if your organization hasn’t already confirmed its support for these values and expressed its commitment to providing safety for all minorities, ask for it. you may even be in a position to actually craft and issue one. for those in need of the right words to express your intention clearly, here are some good examples below. “the university of california is proud of being a diverse and welcoming place for students, faculty, and staff with a wide range of backgrounds, experiences and perspectives. diversity is central to our mission. we remain absolutely committed to supporting all members of our community and adhering to uc’s principles against intolerance. as the principles make clear, the university ‘strives to foster an environment in which all are included’ and ‘all are given an equal opportunity to learn and explore.’ the university of california will continue to pursue and protect these principles now and in the future, and urges our students, faculty, staff, and all others associated with the university to do so as well.” – university of california “our responsibility is to remain committed to education, discovery and intellectual honesty – and to diversity, equity and inclusion. 
we are at our best when we come together to engage respectfully across our ideological differences; to support all who feel marginalized, threatened or unwelcome; and to pursue knowledge and understanding, as we always have, as the students, faculty and staff of the university of michigan.” – university of michigan “northwestern is committed to being a welcoming and inclusive community for all, regardless of their beliefs, and i assure you that will not change.” – northwestern university “as a catholic university, clarke will not step away from its many efforts to heighten our awareness of the individuals and groups who are excluded and marginalized in so many ways and to take action for their protection and inclusion. today, i call on us as a community to step up our efforts to promote understanding and inclusion and to reach out to those among us who are feeling further disenfranchised, fearful and confused as a result of the election.” – clarke university “as president, i need to represent all of rit, and i therefore do not express preferences for political candidates. i do feel it important, however, to represent and reinforce rit’s shared commitment to the value of inclusive diversity. i have heard from many in our community that the result of the recent election has raised concerns from those in our minority populations, those who come from immigrant families, those from countries outside of the u.s., those in our lgbtqia+ community, those who practice islam, and even those in our female population about whether they should be concerned for their safety and well-being as a result of the horrific discourse that accompanied the presidential election process and some of the specific views and proposals presented. 
at rit, we have treasured the diverse contributions of members of these groups to our campus community, and i want to reassure all that one of rit’s highest priorities is to demonstrate the extraordinary value of inclusive diversity and that we will continue to respect, appreciate, and benefit from the contributions of all. anyone who feels unsafe here should make their feelings known to me and to others in a position to address their concerns. concerned members of our community can also take advantage of opportunities to engage in open discourse about the election in the mosaic center and at tomorrow’s grey matter discussion.” – rochester institute of technology please go ahead and say these out loud to people around you if you mean them. no matter how obvious and cheesy they sound, i assure you, they are not obvious and cheesy to those who are facing open threats, harassment, and violence. let’s boost the signal; let’s make it loud; let’s make it overwhelming. “i support all minorities and stand for diversity, equity, and inclusion.” “i object to and will not condone the acts of harassment, violence, hatred, and threats directed at minorities.” “i will not discriminate against anyone based upon their ethnicity, race, sexual orientation, disability, political views, socio-economic backgrounds, or religious beliefs.” posted in: diversity. tagged: election · hate crime · racism cybersecurity, usability, online privacy, and digital surveillance may th, by bohyun (library hat). comments are off for this post **this post was originally published in acrl techconnect in may.** cybersecurity is an interesting and important topic, one closely connected to those of online privacy and digital surveillance. many of us know that it is difficult to keep things private on the internet. the internet was invented to share things with others quickly, and it excels at that job. 
businesses that process transactions with customers and store the information online are responsible for keeping that information private. no one wants social security numbers, credit card information, medical history, or personal e-mails shared with the world. we expect and trust banks, online stores, and our doctor’s offices to keep our information safe and secure. however, keeping private information safe and secure is a challenging task. we have all heard of security breaches at j.p. morgan, target, sony, anthem blue cross and blue shield, the office of personnel management of the u.s. federal government, the university of maryland at college park, and indiana university. sometimes, a data breach takes place when an institution fails to patch a hole in its network systems. sometimes, people fall for a phishing scam, or a virus in a user’s computer infects the target system. other times, online companies compile customer data into personal profiles. the profiles are then sold to data brokers and on into the hands of malicious hackers and criminals. image from flickr – https://www.flickr.com/photos/topgold/ cybersecurity vs. usability to prevent such a data breach, institutional it staff are trained to protect their systems against vulnerabilities and intrusion attempts. employees and end users are educated to be careful about dealing with institutional or customers’ data. there are systematic measures that organizations can implement, such as two-factor authentication, stringent password requirements, and locking accounts after a certain number of failed login attempts. while these measures strengthen an institution’s defense against cyberattacks, they may negatively affect the usability of the system, lowering users’ productivity. as a simple example, security measures like a captcha can cause an accessibility issue for people with disabilities. 
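to make the account-lockout measure mentioned above concrete, here is a minimal sketch of such a policy in python. the class name, thresholds, and timings are all hypothetical choices for illustration, not any particular institution’s implementation; real systems would also persist this state and rate-limit by ip, among other things.

```python
import time

class LoginGuard:
    """lock an account after too many failed login attempts (illustrative sketch)."""

    def __init__(self, max_attempts=5, lockout_seconds=900):
        self.max_attempts = max_attempts        # failures allowed before lockout
        self.lockout_seconds = lockout_seconds  # how long the lockout lasts
        self.failures = {}                      # username -> (count, time of last failure)

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        count, last = self.failures.get(user, (0, 0.0))
        if count < self.max_attempts:
            return False
        if now - last >= self.lockout_seconds:
            self.failures.pop(user, None)       # lockout expired; reset the counter
            return False
        return True

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        count, _ = self.failures.get(user, (0, 0.0))
        self.failures[user] = (count + 1, now)

    def record_success(self, user):
        self.failures.pop(user, None)           # a successful login clears the slate
```

even in this toy version, the usability trade-off discussed above is visible: the lower `max_attempts` and the longer `lockout_seconds`, the more secure the account, and the more often a merely forgetful user is locked out.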
as another example, imagine that a university it office concerned about the data security of cloud services starts requiring all faculty, students, and staff to use only cloud services that are soc 2 type ii certified. soc stands for “service organization controls.” it consists of a series of standards that measure how well a given service organization keeps its information secure. for a business to be soc 2 certified, it must demonstrate that it has sufficient policies and strategies that will satisfactorily protect its clients’ data in five areas known as “trust services principles.” those include the security of the service provider’s system, the processing integrity of this system, the availability of the system, the privacy of personal information that the service provider collects, retains, uses, discloses, and disposes of for its clients, and the confidentiality of the information that the service provider’s system processes or maintains for the clients. the soc 2 type ii certification means that the business has maintained the relevant security policies and procedures over a period of at least six months, and therefore it is a good indicator that the business will keep its clients’ sensitive data secure. dropbox for business is soc 2 certified, but it costs money. the free version is not as secure, but many faculty, students, and staff in academia use it frequently for collaboration. if a university it office simply bans people from using the free version of dropbox without offering an alternative that is as easy to use as dropbox, people will undoubtedly suffer. some of you may know that the usps website does not provide a way to reset the password for users who forgot their usernames. they are instead asked to create a new account. if they remember the account username but enter the wrong answers to the two security questions more than twice, the system also automatically locks their accounts for a certain period of time. 
again, users have to create a new account. clearly, a system that does not allow password resets for forgetful users is more secure than one that does. however, in reality, this security measure creates a huge usability issue, because average users do forget their passwords and the answers to the security questions that they set up themselves. it’s not hard to guess how frustrated people will be when they realize that they entered a wrong mailing address for mail forwarding and are now unable to get back into the system to correct it because they cannot remember their passwords or the answers to their security questions. to give an example related to libraries, a library may decide to block all international traffic to its licensed e-resources to prevent foreign hackers who have gotten hold of the username and password of a legitimate user from accessing those e-resources. this would certainly help libraries avoid a potential breach of licensing terms in advance and spare them from having to shut down compromised user accounts one by one whenever those are found. however, it would also make it impossible for legitimate users traveling outside of the country to access those e-resources, which many users would find unacceptable. furthermore, malicious hackers would probably just use a proxy to make their ip address appear to be located in the u.s. anyway. what would users do if their organization required them to reset passwords on a weekly basis for their work computers and the several or more systems that they also use constantly for work? while this may strengthen the security of those systems, it’s easy to see that it would be a nightmare having to reset all those passwords every week and keep track of them so as not to forget or mix them up. most likely, users will start choosing less complicated passwords or even begin to adopt just one password for all different services. 
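many systems counter exactly this reuse pattern by keeping a short history of old password hashes and rejecting any new password that matches one of them. below is a minimal sketch of such a check in python; the class name and parameters are hypothetical, and it uses the standard library’s pbkdf2 as the hash, though a production system would typically reach for a dedicated password-hashing library such as bcrypt or argon2.

```python
import hashlib
import os

def hash_password(password, salt):
    # pbkdf2 with sha-256: a stand-in here for a real password-hashing function
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class PasswordHistory:
    """reject a new password if it matches any of the last `depth` passwords."""

    def __init__(self, depth=5):
        self.depth = depth
        self.history = []  # list of (salt, hash) pairs, most recent last

    def set_password(self, new_password):
        # re-hash the candidate with each stored salt and compare digests
        for salt, digest in self.history:
            if hash_password(new_password, salt) == digest:
                raise ValueError("password was used recently; choose another")
        salt = os.urandom(16)
        self.history.append((salt, hash_password(new_password, salt)))
        self.history = self.history[-self.depth:]  # keep only the last N entries
```

storing only salted hashes is the key design choice: the system can tell that a candidate matches an old password without ever keeping the old passwords themselves in plain text.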
some may even stick to the same password every time the system requires a reset, unless the system automatically detects the previous password and prevents them from reusing it. ill-thought-out cybersecurity measures can easily backfire. security is important, but users also want to be able to do their jobs without being bogged down by unwieldy cybersecurity measures. the more user-friendly and the simpler cybersecurity guidelines are to follow, the more users will observe them, thereby making a network more secure. users who face cumbersome and complicated security measures may ignore them or try to bypass them, increasing security risks. image from flickr – https://www.flickr.com/photos/topgold/ cybersecurity vs. privacy usability and productivity may be a small issue, however, compared to the risk of mass surveillance resulting from aggressive security measures. in , the guardian reported that the communication records of millions of people were being collected by the national security agency (nsa) in bulk, regardless of any suspicion of wrongdoing. a secret court order prohibited verizon from disclosing the nsa’s information request. after a cyberattack against the university of california at los angeles, the university of california system installed a device capable of capturing, analyzing, and storing all network traffic to and from the campus for over days. this security monitoring was implemented secretly, without consulting or notifying the faculty and others who would be subject to it. the san francisco chronicle reported that the it staff who installed the system were given strict instructions not to reveal it was taking place. selected committee members on the campus were told to keep this information to themselves. the invasion of privacy and the lack of transparency in these network monitoring programs have caused great controversy.
such wide and indiscriminate monitoring programs must have a very good justification and offer clear answers to vital questions: what exactly will be collected, who will have access to the collected information, when and how the information will be used, what controls will be put in place to prevent the information from being used for unrelated purposes, and how the information will be disposed of. we have recently seen another case in which security concerns conflicted with people’s right to privacy. in february , the fbi requested that apple create a backdoor application that would bypass the current security measure in place in its ios. the fbi wanted to unlock an iphone c recovered from one of the shooters in the san bernardino shooting incident. apple’s ios secures users’ devices by permanently erasing all data when a wrong password is entered more than ten times, if users choose to activate this option in the ios settings. the fbi’s request was met with strong opposition from apple and others. such a backdoor application could easily be exploited for illegal purposes by black-hat hackers, for unjustified privacy infringement by other capable parties, and even for dictatorship by governments. apple refused to comply with the request, and a court hearing was scheduled for march . the fbi, however, withdrew the request, saying that it had found a way to hack into the phone in question without apple’s help. now apple has to figure out what the vulnerability in its ios is if it wants its encryption mechanism to be foolproof. in the meantime, ios users know that their data is no longer as secure as they once thought. around the same time, a senate draft bill titled the “compliance with court orders act of ” proposed that people be required to comply with any authorized court order for data, and that if that data is “unintelligible” – meaning encrypted – it must be decrypted for the court.
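the erase-after-repeated-failures option described above can be sketched as follows. apple’s real implementation lives in os and hardware code; this toy class only illustrates the logic, and the class and field names are invented.

```python
# a minimal sketch of the "erase all data after too many wrong
# passcodes" option described above. this is an illustration of the
# logic only, not apple's implementation; the threshold of ten
# matches the behavior described in the text.
class Device:
    def __init__(self, passcode, wipe_enabled=True):
        self._passcode = passcode
        self._data = {"photos": ["..."], "messages": ["..."]}
        self.wipe_enabled = wipe_enabled
        self.failed = 0

    def unlock(self, attempt):
        if attempt == self._passcode:
            self.failed = 0
            return True
        self.failed += 1
        # wipe when a wrong passcode is entered more than ten times
        if self.wipe_enabled and self.failed > 10:
            self._data.clear()   # permanently erase all user data
        return False
```

this is also why a brute-force unlock tool is useless against the device unless the wipe check itself can be bypassed, which is what made the requested backdoor so dangerous.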
this bill is problematic because it would practically nullify the efficacy of any end-to-end encryption, which we use every day, from our iphones to messaging services like whatsapp and signal. because security is essential to privacy, it is ironic that certain cybersecurity measures are used to greatly invade privacy rather than protect it. because we do not always fully understand how the technology actually works or how it can be exploited for both good and bad purposes, we need to be careful about giving any party blanket permission to access, collect, and use our private data without clear understanding, oversight, and consent. as we share more and more information online, cyberattacks will only increase, and organizations and the government will struggle even more to balance privacy concerns with security issues. why libraries should advocate for online privacy the fact that people may no longer have privacy on the web should concern libraries. historically, libraries have been strong advocates of intellectual freedom, striving to keep patrons’ data safe and protected from the unwanted eyes of the authorities. as librarians, we believe in people’s right to read, think, and speak freely and privately, as long as doing so does not harm others. the library freedom project is an example that reflects this belief, held strongly within the library community. it educates librarians and their local communities about surveillance threats, privacy rights and law, and privacy-protecting technology tools that help safeguard digital freedom, and it helped the kilton public library in lebanon, new hampshire, become the first library to operate a tor exit relay, providing anonymity for patrons while they browse the internet at the library. new technologies have brought us the unprecedented convenience of collecting, storing, and sharing massive amounts of sensitive data online.
but the fact that such sensitive data can easily be exploited if it falls into the wrong hands has also created an unparalleled potential for invasion of privacy. while the majority of librarians take a very strong stance in favor of intellectual freedom and against censorship, it is often hard to discern the correct stance on online privacy, particularly when it is pitted against cybersecurity. some even argue that those who have nothing to hide do not need privacy at all. however, privacy is not equivalent to hiding wrongdoing, nor do people keep certain things secret because those things are necessarily illegal or unethical. being watched will drive any person crazy, whether or not s/he is guilty of any wrongdoing. privacy gives us a safe space to form our thoughts and consider our actions on our own, without being subject to others’ eyes and judgments. even in the absence of actual mass surveillance, the mere belief that one can be placed under surveillance at any moment is sufficient to trigger self-censorship; it negatively affects one’s thoughts, ideas, creativity, imagination, choices, and actions, making people more conformist and compliant. this is corroborated by a recent study from oxford university, which provides empirical evidence that the mere existence of a surveillance state breeds fear and conformity and stifles free expression. privacy is an essential part of being human, not some trivial condition that we can do without in the face of a greater concern. that is why many people under political dictatorships continue to choose death over life under mass surveillance and censorship in their fight for freedom and privacy. the electronic frontier foundation states that privacy means respect for individuals’ autonomy, anonymous speech, and the right to free association. we want to live as autonomous human beings, free to speak our minds and think on our own.
if part of a library’s mission is to help people become such autonomous human beings through learning and sharing knowledge with one another, without having to worry about being observed and/or censored, then libraries should advocate for people’s privacy both online and offline, and in all forms of communication technologies and devices. posted in: library, technology, usability, user experience, web. tagged: data security · digital freedom · encryption · internet · password · soc · tor three recent talks of mine on ux, data visualization, and it management apr th, by bohyun (library hat). comments are off for this post i have been swamped at work and pretty quiet here on my blog, but i gave a few talks recently, so i wanted to share them at least. at the recent american library association midwinter meeting back in january, i presented on how to turn the traditional library it department and its usually behind-the-scenes operation into a more patron-facing unit. this program was organized by the lita heads of it interest group. in march, i gave a short lightning talk at the code lib conference about my library’s data visualization project for library data. i was also invited to speak at the usmai (university system of maryland and affiliated institutions) ux unconference, where i gave a talk about user experience, personas, and the idea of applying library personas to library strategic planning. here are those three presentation slides for those interested! strategically ux oriented with personas from bohyun kim visualizing library data from bohyun kim turning the it dept. outward from bohyun kim posted in: ala, library, presentation, technology, usability, user experience. tagged: code lib · data visualization · it · management · ux near us and libraries, robots have arrived oct th, by bohyun (library hat). comments are off for this post ** this post was originally published in acrl techconnect on oct.
, .*** the movie robot and frank depicts a future in which the elderly have a robot as a companion and helper. the robot monitors various activities related to both mental and physical health and helps frank with various household chores. but frank also enjoys the robot’s company, and he goes on to enlist the robot in his adventure of breaking into a local library to steal a book, and in a greater heist later on. people’s lives in the movie are not particularly futuristic other than the robot in them. and even a robot may not be so futuristic for us much longer. as a matter of fact, as of june , there is now a commercially available humanoid robot that comes close to performing some of the functions that the robot in robot and frank does. pepper robot, image from aldebaran, https://www.aldebaran.com/en/a-robots/who-is-pepper a japanese company, softbank robotics corp., released a humanoid robot named ‘pepper’ to the market back in june. the pepper robot is feet tall, pounds, speaks languages, and is equipped with an array of cameras, touch sensors, an accelerometer, and other sensors in its “endocrine-type multi-layer neural network,” according to the cnn report. the pepper robot was priced at ¥ , ($ , ), and pepper owners are also responsible for an additional ¥ , ($ ) monthly data and insurance fee. while the pepper robot is not exactly cheap, it is surprisingly affordable for a robot. this means that the robot industry has now matured to the point where it can introduce a robot that the masses can afford. robots come in varying capabilities and forms. some robots are as simple as a programmable cube block that can be combined with others to build a working unit. for example, cubelets from modular robotics are modular robots used for educational purposes. each cube performs one specific function, such as flash, battery, temperature, brightness, rotation, etc.
and one can combine these blocks to build a robot that performs a certain function. for example, you can build a lighthouse robot by combining a battery block, a light-sensor block, a rotator block, and a flash block. a variety of cubelets are available from the modular robotics website. by contrast, there are advanced robots, such as those in animal form developed by the robotics company boston dynamics. some robots look like a human, although much smaller than the pepper robot. nao is a -cm-tall humanoid robot, launched in , that moves, recognizes, hears, and talks to people. nao robots are interactive educational toys that help students learn programming in a fun and practical way. noticing their relevance to stem education, some libraries are making robots available to library patrons. westport public library provides robot training classes for its two nao robots. chicago public library lends a number of finch robots that patrons can program to see how they work. in celebration of national robotics week back in april, san diego public library hosted its first robot day, educating the public about how robots have impacted society. san diego public library also started a weekly robotics club, inviting anyone to join in to help build, or learn how to build, a robot for the library. haslet public library offers a robotics camp program for th- to th-graders who want to learn how to build with lego mindstorms ev kits. school librarians are also starting robotics clubs. the robotics club at new rochelle high school in new york is run by the school’s librarian, ryan paulsen. paulsen’s robotics club started with help from faculty, parents, and other schools, along with a grant from nasa, and participated in a first robotics competition. organizations such as the robotics academy at carnegie mellon university provide educational outreach and resources.
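the block-composition idea behind cubelets can be modeled in a few lines: each block contributes one capability, and snapping blocks together yields a robot whose behavior is the sum of its parts. the block names follow the lighthouse example above; the tiny api is entirely invented for illustration.

```python
# a toy model of modular robot blocks: one capability per block,
# composed into a robot. names follow the lighthouse example in the
# text (battery, light sensor, rotator, flash); the api is a sketch,
# not modular robotics' actual interface.
class Block:
    def __init__(self, name, capability):
        self.name = name
        self.capability = capability

def build_robot(*blocks):
    """combine blocks into a robot described by its capabilities."""
    return {b.name: b.capability for b in blocks}

lighthouse = build_robot(
    Block("battery", "supplies power"),
    Block("light-sensor", "activates at dusk"),
    Block("rotator", "turns the beam"),
    Block("flash", "emits light"),
)
print(sorted(lighthouse))
```

swapping one block for another (say, a temperature block for the light sensor) changes the robot’s behavior without redesigning anything else, which is what makes this kind of kit effective for teaching.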
image from aldebaran website at https://www.aldebaran.com/en/humanoid-robot/nao-robot there are also libraries that offer coding workshops, often with arduino or raspberry pi, which are inexpensive computer hardware. ames free library offers raspberry pi workshops. san diego public library runs a monthly arduino enthusiast meetup. arduinos and raspberry pis can be used to build digital devices and objects that can sense and interact with the physical world, which comes close to a simple robot. we may see more robotics programs at those libraries in the near future. robots can fulfill many functions other than being educational interactive toys, however. for example, robots can be very useful in healthcare. a robot can be a patient’s emotional companion, just like pepper. or it can provide an easy way for a patient and her/his caregiver to communicate with physicians and others. a robot can be used at a hospital to move and deliver medication and other items, and it can function as a telemedicine assistant. it can also provide physical assistance to a patient or a nurse, and it can even be used for children’s therapy. humanoid robots like pepper may also serve at reception desks at companies, and it is not difficult to imagine them as sales clerks at stores. robots can be useful at schools and in other educational settings as well. at a workplace, teleworkers can use robots to achieve a more active presence. for example, universities and colleges can offer telepresence robots to online students who want to virtually experience and utilize campus facilities, or to faculty who wish to hold office hours or collaborate with colleagues while away from the office. as a matter of fact, the university of texas at arlington libraries recently acquired several telepresence robots to lend to their faculty and students. not all robots have, or will have, the humanoid form that the pepper robot does.
but as robots become more and more capable, we will surely get to see more robots in our daily lives. references alpeyev, pavel, and takashi amano. “robots at work: softbank aims to bring pepper to stores.” bloomberg business, june , . http://www.bloomberg.com/news/articles/ - - /robots-at-work-softbank-aims-to-bring-pepper-to-stores. “boston dynamics.” accessed september , . http://www.bostondynamics.com/. boyer, katie. “robotics clubs at the library.” public libraries online, june , . http://publiclibrariesonline.org/ / /robotics-clubs-at-the-library/. “finch robots land at cpl altgeld.” chicago public library, may , . https://www.chipublib.org/news/finch-robots-land-at-cpl/. mcnickle, michelle. “ medical robots that could change healthcare – informationweek.” informationweek, december , . http://www.informationweek.com/mobile/ -medical-robots-that-could-change-healthcare/d/d-id/ . singh, angad. “‘pepper’ the emotional robot, sells out within a minute.” cnn.com, june , . http://www.cnn.com/ / / /tech/pepper-robot-sold-out/. tran, uyen. “sdpl labs: arduino aplenty.” the library incubator project, april , . http://www.libraryasincubatorproject.org/?p= . “ut arlington library to begin offering programming robots for checkout.” university of texas arlington, march , . https://www.uta.edu/news/releases/ / /library-robots- .php. waldman, loretta. “coming soon to the library: humanoid robots.” wall street journal, september , , sec. new york. http://www.wsj.com/articles/coming-soon-to-the-library-humanoid-robots- . posted in: library, technology. tagged: education · libraries · robotics · robots · stem ← earlier posts subscribe to our feed via rss search about libraryhat is a blog written by bohyun kim, cto & associate professor at the university of rhode island libraries (bohyun.kim.ois [at] gmail [dot] com; @bohyunkim). most popular - libraries meet the second machine age - future? libraries? what now? 
© library hat | powered by wordpress. a wordpress theme by ravi varma. erotic sex friend diary if you want to find a sex friend, use a dating site! here is why we recommend it many men probably want to find willing women on sex-friend recruitment boards and enjoy casual sex with them. this article explains in detail how to recruit a sex friend through a dating site and the drawbacks of free sex-friend bulletin boards. if you want a sex friend, please read to the end. the drawbacks of free sex-friend bulletin boards (why we do not recommend them) free sex-friend bulletin boards are riddled with dangerous traps. especially for inexperienced users, the women’s comments on a free board are written as very erotic come-ons, and many men end up being strung along at the other party’s pace. anyone can register on a free sex-friend bulletin board, including minors under the age of . if you have a sexual relationship with a minor, you become a criminal. or the minor may have yakuza behind her, who may blackmail you over that fact. there have been news stories of office workers who got involved with minors, and such cases have happened through exactly this kind of board. make it a rule never to use free sex-friend bulletin boards. for sex-friend recruitment, use a dating site!
for men looking for a site that is safe and where you can really meet a sex friend, this time we recommend dating sites, for the following reasons. recommendation reason : identity verification is done properly as explained at the beginning, a service that anyone can register for is a frightening thing. dating sites avoid this risk: phone verification and presentation of identification are required at registration, so unknown characters cannot sign up. in other words, this helps keep out the shills and shady operators that lurk in large numbers on free sex-friend bulletin boards. recommendation reason : many women are looking for a physical relationship since the aim here is to make a sex friend, the ideal site is one where many of the registered women are also looking for a no-strings arrangement. dating sites fulfill that ideal. if you browse a dating site’s message boards and profiles, you will find many women who want to meet for just a short time and enjoy sex only. for a man who wants a sex friend, a large base (number) of registered women with such desires means a very good chance of success. recommendation reason : it is easy to find women to your taste if you register on a site, exchange messages after searching, and finally meet, only to be shocked that she is not your type, the whole exercise is self-defeating and your efforts come to nothing. dating sites have rich, well-developed features. before searching, you can learn the other party’s characteristics in advance from her profile: measurements, body type (glamorous, slim, busty, and so on), and height as a matter of course, and for women seeking adult relationships, even whether she is an s or an m, or into the abnormal. because dating sites let you look for a sex friend knowing a woman’s looks and sexual inclinations to some degree in advance, consider their features excellent as well. tips for successful sex-friend recruitment on dating sites now let’s make a sex friend using a dating site. below is a summary of advice; please refer to it. method : women who want to become sex friends are extremely stressed the sequence for meeting a woman on a dating site is as follows: search for women ⇒ send a first message ⇒ invite her on a date ⇒ invite her to a hotel and have sex. that is the general flow, but the women who register on dating sites generally have various circumstances, are stressed, and seek encounters with men as an outlet. in other words, assume that most women want someone to listen to their complaints and worries. so throughout the sequence above, take care to stick to the role of listener and adviser. if you push the conversation too self-centeredly, the exchange may break off somewhere along the way. method : pay attention to the time of day of message exchanges suppose you send a first message to a woman you like. the next point is important: do not forget the time of day when she replied. that is probably the time when she can reply to messages at her leisure. continuing to send messages at times convenient only to you wastes time, and the other party may be unable to reply, or may reply in haste, thinning out the exchange, so be careful. method : with a woman you want as a sex friend, there is no need to rush into sex the same day here i will draw somewhat on my own experience. when a man decides he wants a woman as a sex friend, he tends to feel a sense of mission that he must somehow get her to a hotel on the day they meet. but understand that, unexpectedly, this carries the risk of never seeing the woman again. women you meet through dating sites can be surprisingly cautious. a sex-friend relationship in particular involves a long-term association, and if the woman decides you are a man who demands forceful sex, she will pull back a little and start worrying about the future of the relationship. the time to invite her to a hotel is the moment you have truly connected. for example, you cannot go wrong waiting until her physical touches increase, or until holding hands or linking arms no longer feels awkward to her.
max planck vlib news mpg/sfx server maintenance, thursday june, - pm the mpg/sfx server will undergo scheduled maintenance due to a hardware upgrade. the downtime will start at pm. services are expected to be back after approximately one hour. we apologize for any inconvenience. mpg/sfx server maintenance, tuesday december, - pm the database of the mpg/sfx server will undergo scheduled maintenance. the downtime will start at pm. services are expected to be back after minutes. we apologize for any inconvenience. how to get elsevier articles after december , the max planck digital library has been mandated to discontinue its elsevier subscription when the current agreement expires on december , . read more about the background in the full press release. nevertheless, most journal articles published until that date will remain available, due to the rights stipulated in the mpg contracts to date. … continue reading how to get elsevier articles after december , … aleph multipool search: parallel searching in mpg library catalogs update: the multipool search is now also available as a web interface. the multipool expert mode in the aleph cataloging client enables fast searching across several databases at once. the databases can either reside directly on the aleph server or be connected as external resources via the z . protocol.
in addition to the local libraries, the mpi library catalog in the gbv is available on the … continue reading aleph multipool search: parallel searching in mpg library catalogs … goodbye vlib! shutdown after october , in the max planck virtual library (vlib) was launched, with the idea of making all information resources relevant for max planck users simultaneously searchable under a common user interface. since then, the vlib project partners from the max planck libraries, the information retrieval services groups, the gwdg, and the mpdl have invested much time and effort … continue reading goodbye vlib! shutdown after october , … https only for mpg/sfx and mpg.ebooks as of next week, all http requests to the mpg/sfx link resolver will be redirected to a corresponding https request. the max planck society electronic book index is scheduled to be switched to https-only access the week after, starting on november , . regular web browser use of the above services should not be … continue reading https only for mpg/sfx and mpg.ebooks … https enabled for mpg/sfx the mpg/sfx link resolver is now alternatively accessible via the https protocol. the secure base url of the productive mpg/sfx instance is: https://sfx.mpg.de/sfx_local. https support enables secure third-party sites to load or embed content from mpg/sfx without causing mixed-content errors. please feel free to update your applications or your links to the mpg/sfx … continue reading https enabled for mpg/sfx … citation trails in primo central index (pci) the may release brought an interesting functionality to the mpg/sfx server maintenance, wednesday april, - am the mpg/sfx server updates to a new database (mariadb) on wednesday morning. the downtime will begin at am and is scheduled to last until am. we apologize for any inconvenience.
proquest illustrata databases discontinued last year, the information provider proquest decided to discontinue its “illustrata technology” and “illustrata natural science” databases. unfortunately, this represents a preliminary end to proquest’s long-year investment into deep-indexing content. in a corresponding support article, proquest states that there “[…] will be no loss of full text and full text + graphics images because …” continue reading proquest illustrata databases discontinued … slasher ghost, and other developments in proof of stake | ethereum foundation blog slasher ghost, and other developments in proof of stake posted by vitalik buterin on october , research & development special thanks to vlad zamfir and zack hess for ongoing research and discussions on proof-of-stake algorithms and their own input into slasher-like proposals one of the hardest problems in cryptocurrency development is that of devising effective consensus algorithms. certainly, relatively passable default options exist. at the very least it is possible to rely on a bitcoin-like proof of work algorithm based on either a randomly-generated circuit approach targeted for specialized-hardware resistance, or failing that simple sha , and our existing ghost optimizations allow for such an algorithm to provide block times of seconds.
however, proof of work as a general category has many flaws that call into question its sustainability as an exclusive source of consensus; % attacks from altcoin miners, eventual asic dominance and high energy inefficiency are perhaps the most prominent. over the last few months we have become more and more convinced that some inclusion of proof of stake is a necessary component for long-term sustainability; however, actually implementing a proof of stake algorithm that is effective is proving to be surprisingly complex. the fact that ethereum includes a turing-complete contracting system complicates things further, as it makes certain kinds of collusion much easier without requiring trust, and creates a large pool of stake in the hands of decentralized entities that have the incentive to vote with the stake to collect rewards, but which are too stupid to tell good blockchains from bad. what the rest of this article will show is a set of strategies that deal with most of the issues surrounding proof of stake algorithms as they exist today, and a sketch of how to extend our current preferred proof-of-stake algorithm, slasher, into something much more robust. historical overview: proof of stake and slasher if you’re not yet well-versed in the nuances of proof of stake algorithms, first read: https://blog.ethereum.org/ / / /stake/ the fundamental problem that consensus protocols try to solve is that of creating a mechanism for growing a blockchain over time in a decentralized way that cannot easily be subverted by attackers. 
if a blockchain does not use a consensus protocol to regulate block creation, and simply allows anyone to add a block at any time, then an attacker or botnet with very many ip addresses could flood the network with blocks, and particularly they can use their power to perform double-spend attacks - sending a payment for a product, waiting for the payment to be confirmed in the blockchain, and then starting their own “fork” of the blockchain, substituting the payment that they made earlier with a payment to a different account controlled by themselves, and growing it longer than the original so everyone accepts this new blockchain without the payment as truth. the general solution to this problem involves making a block “hard” to create in some fashion. in the case of proof of work, each block requires computational effort to produce, and in the case of proof of stake it requires ownership of coins - in most cases, it’s a probabilistic process where block-making privileges are doled out randomly in proportion to coin holdings, and in more exotic “negative block reward” schemes anyone can create a block by spending a certain quantity of funds, and they are compensated via transaction fees. in any of these approaches, each chain has a “score” that roughly reflects the total difficulty of producing the chain, and the highest-scoring chain is taken to represent the “truth” at that particular time. for a detailed overview of some of the finer points of proof of stake, see the above-linked article; for those readers who are already aware of the issues i will start off by presenting a semi-formal specification for slasher: blocks are produced by miners; in order for a block to be valid it must satisfy a proof-of-work condition. however, this condition is relatively weak (eg. we can target the mining reward to something like . x the genesis supply every year) every block has a set of designated signers, which are chosen beforehand (see below). 
for a block with valid pow to be accepted as part of the chain it must be accompanied by signatures from at least two thirds of its designated signers. when block n is produced, we say that the set of potential signers of block n + is the set of addresses such that sha (address + block[n].hash) < block[n].balance(address) * d where d is a difficulty parameter targeting signers per block (ie. if block n has less than signers it goes down otherwise it goes up). note that the set of potential signers is very computationally intensive to fully enumerate, and we don't try to do so; instead we rely on signers to self-declare. if a potential signer for block n + wants to become a designated signer for that block, they must send a special transaction accepting this responsibility and that transaction must get included between blocks n + and n + . the set of designated signers for block n + is the set of all individuals that do this. this "signer must confirm" mechanism helps ensure that the majority of signers will actually be online when the time comes to sign. for blocks ... , the set of signers is empty, so proof of work alone suffices to create those blocks. when a designated signer adds their signature to block n + , they are scheduled to receive a reward in block n + . if a signer signs two different blocks at height n + , then if someone detects the double-signing before block n + they can submit an "evidence" transaction containing the two signatures, destroying the signer's reward and transferring a third of it to the whistleblower. if there is an insufficient number of signers to sign at a particular block height h, a miner can produce a block with height h+ directly on top of the block with height h- by mining at an x higher difficulty (to incentivize this, but still make it less attractive than trying to create a normal block, there is a x higher reward). skipping over two blocks has higher factors of x diff and x reward, three blocks x and x, etc. 
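two of the rules specified above, the signer-eligibility condition and the double-signing penalty, can be sketched directly from the spec. this is a simplified stand-in, assuming `sha3_256` as the hash (ethereum’s “sha3” is actually keccak-256) and plain dicts for rewards and signatures; it is an illustration of the conditions, not protocol code.

```python
# a minimal sketch of two slasher rules from the spec above:
#   1) eligibility: sha3(address + block[n].hash) < balance(address) * d
#   2) evidence: two signatures by the same signer at the same height
#      destroy the signer's reward; a third goes to the whistleblower.
# hashlib.sha3_256 stands in for the protocol's hash; data structures
# are simplified assumptions.
import hashlib

def is_potential_signer(address: bytes, block_hash: bytes,
                        balance: int, d: int) -> bool:
    h = int.from_bytes(hashlib.sha3_256(address + block_hash).digest(), "big")
    return h < balance * d

def apply_evidence(rewards: dict, sig_a: dict, sig_b: dict,
                   signer: str, whistleblower: str) -> bool:
    """if sig_a and sig_b show the same signer signing two different
    blocks at the same height, slash the reward and pay a third of it
    to the whistleblower."""
    if sig_a["height"] == sig_b["height"] and sig_a["block"] != sig_b["block"]:
        reward = rewards.pop(signer, 0)
        rewards[whistleblower] = rewards.get(whistleblower, 0) + reward // 3
        return True
    return False
```

note how the eligibility check makes selection probability proportional to balance, since a uniformly random hash falls below `balance * d` more often for larger balances; the difficulty parameter `d` is then tuned to hit the target number of signers per block.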
essentially, by explicitly punishing double-signing, slasher in a lot of ways, although not all, makes proof of stake act like a sort of simulated proof of work. an important incidental benefit of slasher is the non-revert property. in proof of work, sometimes after one node mines one block some other node will immediately mine two blocks, and so some nodes will need to revert back one block upon seeing the longer chain. here, every block requires two thirds of the signers to ratify it, and a signer cannot ratify two blocks at the same height without losing their gains in both chains, so assuming no malfeasance the blockchain will never revert. from the point of view of a decentralized application developer, this is a very desirable property as it means that “time” only moves in one direction, just like in a server-based environment. however, slasher is still vulnerable to one particular class of attack: long-range attacks. instead of trying to start a fork from ten blocks behind the current head, suppose that an attacker tries to start a fork starting from ten thousand blocks behind, or even the genesis block - all that matters is that the depth of the fork must be greater than the duration of the reward lockup. at that point, because users’ funds are unlocked and they can move them to a new address to escape punishment, users have no disincentive against signing on both chains. in fact, we may even expect to see a black market of people selling their old private keys, culminating with an attacker single-handedly acquiring access to the keys that controlled over % of the currency supply at some point in history. 
one approach to solving the long-range double-signing problem is transactions-as-proof-of-stake, an alternative pos solution that does not have an incentive to double-sign because it's the transactions that vote, and there is no reward for sending a transaction (in fact there's a cost, and the reward is outside the network); however, this does nothing to stop the black key market problem. to properly deal with that issue, we will need to relax a hidden assumption.

subjective scoring and trust

for all its faults, proof of work does have some elegant economic properties. particularly, because proof of work requires an externally rivalrous resource, something which exists and is consumed outside the blockchain, in order to generate blocks (namely, computational effort), launching a fork against a proof of work chain invariably requires having access to, and spending, a large quantity of economic resources. in the case of proof of stake, on the other hand, the only scarce value involved is value within the chain, and between multiple chains that value is not scarce at all. no matter what algorithm is used, in proof of stake % of the owners of the genesis block could eventually come together, collude, and produce a longer (ie. higher-scoring) chain than everyone else. this may seem like a fatal flaw, but in reality it is only a flaw if we implicitly accept an assumption that is made in the case of proof of work: that nodes have no knowledge of history. in a proof-of-work protocol, a new node, having no direct knowledge of past events and seeing nothing but the protocol source code and the set of messages that have already been published, can join the network at any point and determine the score of all possible chains, and from there the block that is at the top of the highest-scoring main chain. with proof of stake, as we described, such a property cannot be achieved, since it's very cheap to acquire historical keys and simulate alternate histories.
thus, we will relax our assumptions somewhat: we will say that we are only concerned with maintaining consensus between a static set of nodes that are online at least once every n days, allowing these nodes to use their own knowledge of history to reject obvious long-range forks using some formula, and new nodes or long-dormant nodes will need to specify a “checkpoint” (a hash of a block representing what the rest of the network agrees is a recent state) in order to get back onto the consensus. such an approach is essentially a hybrid between the pure and perhaps harsh trust-no-one logic of bitcoin and the total dependency on socially-driven consensus found in networks like ripple. in ripple’s case, users joining the system need to select a set of nodes that they trust (or, more precisely, trust not to collude) and rely on those nodes during every step of the consensus process. in the case of bitcoin, the theory is that no such trust is required and the protocol is completely self-contained; the system works just as well between a thousand isolated cavemen with laptops on a thousand islands as it does in a strongly connected society (in fact, it might work better with island cavemen, since without trust collusion is more difficult). in our hybrid scheme, users need only look to the society outside of the protocol exactly once - when they first download a client and find a checkpoint - and can enjoy bitcoin-like trust properties starting from that point. in order to determine which trust assumption is the better one to take, we ultimately need to ask a somewhat philosophical question: do we want our consensus protocols to exist as absolute cryptoeconomic constructs completely independent of the outside world, or are we okay with relying heavily on the fact that these systems exist in the context of a wider society? 
although it is indeed a central tenet of mainstream cryptocurrency philosophy that too much external dependence is dangerous, arguably the level of independence that bitcoin affords us in reality is no greater than that provided by the hybrid model. the argument is simple: even in the case of bitcoin, a user must also take a leap of trust upon joining the network - first by trusting that they are joining a protocol that contains assets that other people find valuable (eg. how does a user know that bitcoins are worth $ each and dogecoins only $ . ? especially with the different capabilities of asics for different algorithms, hashpower is only a very rough estimate), and second by trusting that they are downloading the correct software package. in both the supposedly "pure" model and the hybrid model there is always a need to look outside the protocol exactly once. thus, on the whole, the gain from accepting the extra trust requirement (namely, environmental friendliness and security against oligopolistic mining pools and asic farms) is arguably worth the cost. additionally, we may note that, unlike ripple consensus, the hybrid model is still compatible with the idea of blockchains "talking" to each other by containing a minimal "light" implementation of each other's protocols. the reason is that, while the scoring mechanism is not "absolute" from the point of view of a node without history suddenly looking at every block, it is perfectly sufficient from the point of view of an entity that remains online over a long period of time, and a blockchain certainly is such an entity. so far, there have been two major approaches that follow some kind of checkpoint-based trust model: developer-issued checkpoints - the client developer issues a new checkpoint with each client upgrade (eg. used in ppcoin) revert limit - nodes refuse to accept forks that revert more than n (eg. ) blocks (eg.
used in tendermint). the first approach has been roundly criticized by the cryptocurrency community for being too centralized. the second, however, also has a flaw: a powerful attacker can not only revert a few thousand blocks, but also potentially split the network permanently. in the n-block revert case, the strategy is as follows. suppose that the network is currently at block , and n = . the attacker starts a secret fork, and grows it by blocks faster than the main network. when the main network gets to , and some node produces block , the attacker reveals his own fork. some nodes will see the main network's block , and refuse to switch to the attacker's fork, but the nodes that did not yet see that block will be happy to revert from to and then accept the attacker's fork. from there, the network is permanently split. fortunately, one can actually construct a third approach that neatly solves this problem, which we will call exponentially subjective scoring. essentially, instead of rejecting forks that go back too far, we simply penalize them on a sliding scale. for every block, a node maintains a score and a "gravity" factor, which acts as a multiplier to the contribution that the block makes to the blockchain's score. the gravity of the genesis block is , and normally the gravity of any other block is set to be equal to the gravity of its parent. however, if a node receives a block whose parent already has a chain of n descendants (ie. it's a fork reverting n blocks), that block's gravity is penalized by a factor of . n, and the penalty propagates forever down the chain and stacks multiplicatively with other penalties. that is, a fork which starts block ago will need to grow % faster than the main chain in order to overtake it, a fork which starts blocks ago will need to grow . times as quickly, and a fork which starts blocks ago will need to grow times as quickly - clearly an impossibility with even trivial proof of work.
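the gravity mechanism can be sketched as follows. the per-block penalty constant is an assumption on my part (the post's exact figure is illegible here); 0.99 per reverted block is a plausible choice, and the structure - an exponential penalty on the whole fork - is what matters.

```python
# exponentially subjective scoring (ess): a fork reverting n blocks has every
# one of its blocks' gravity multiplied by PENALTY**n, so to overtake the main
# chain it must grow faster by the reciprocal of that factor.
PENALTY = 0.99  # assumed per-block penalty factor; illustrative only

def fork_gravity(revert_depth: int, penalty: float = PENALTY) -> float:
    """gravity multiplier applied to every block of a fork reverting `revert_depth` blocks"""
    return penalty ** revert_depth

def required_speedup(revert_depth: int, penalty: float = PENALTY) -> float:
    """how much faster than the main chain the fork must grow to overtake it"""
    return 1.0 / fork_gravity(revert_depth, penalty)
```

with these assumed numbers, a one-block revert needs only about a 1% speedup, a hundred-block revert needs roughly a 2.7x speedup, and deeper reverts quickly become astronomically expensive - which is exactly the "weak checkpoint per block" behaviour the text describes.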
the algorithm serves to smooth out the role of checkpointing, assigning a small "weak checkpoint" role to each individual block. if an attacker produces a fork that some nodes hear about even three blocks earlier than others, those two chains will need to stay within % of each other forever in order for a network split to maintain itself. there are other solutions that could be used aside from, or even alongside, ess; a particular set of strategies involves stakeholders voting on a checkpoint every few thousand blocks, requiring every checkpoint produced to reflect a large consensus of the majority of the current stake (the reason the majority of the stake can't vote on every block is, of course, that having that many signatures would bloat the blockchain).

slasher ghost

the other large complexity in implementing proof of stake for ethereum specifically is the fact that the network includes a turing-complete financial system where accounts can have arbitrary permissions and even permissions that change over time. in a simple currency, proof of stake is relatively easy to accomplish because each unit of currency has an unambiguous owner outside the system, and that owner can be counted on to participate in the stake-voting process by signing a message with the private key that owns the coins. in ethereum, however, things are not quite so simple: if we do our job promoting proper wallet security right, the majority of ether is going to be stored in specialized storage contracts, and with turing-complete code there is no clear way of ascertaining or assigning an "owner". one strategy that we looked at was delegation: requiring every address or contract to assign an address as a delegate to sign for them, and that delegate account would have to be controlled by a private key. however, there is a problem with any such approach.
suppose that a majority of the ether in the system is actually stored in application contracts (as opposed to personal storage contracts); this includes deposits in schellingcoins and other stake-based protocols, security deposits in probabilistic enforcement systems, collateral for financial derivatives, funds owned by daos, etc. those contracts do not have an owner even in spirit; in that case, the fear is that the contract will default to a strategy of renting out stake-voting delegations to the highest bidder. because attackers are the only entities willing to bid more than the expected return from the delegation, this will make it very cheap for an attacker to acquire the signing rights to large quantities of stake. the only solution to this within the delegation paradigm is to make it extremely risky to dole out signing privileges to untrusted parties; the simplest approach is to modify slasher to require a large deposit, and slash the deposit as well as the reward in the event of double-signing. however, if we do this then we are essentially back to entrusting the fate of a large quantity of funds to a single private key, thereby defeating much of the point of ethereum in the first place. fortunately, there is one alternative to delegation that is somewhat more effective: letting contracts themselves sign. to see how this works, consider the following protocol: there is now a sign opcode added. a signature is a series of virtual transactions which, when sequentially applied to the state at the end of the parent block, results in the sign opcode being called. the nonce of the first vtx in the signature must be the prevhash being signed, the nonce of the second must be the prevhash plus one, and so forth (alternatively, we can make the nonces - , - , - etc. and require the prevhash to be passed in through transaction data so as to be eventually supplied as an input to the sign opcode). 
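the nonce schedule for these virtual-transaction signatures can be modelled minimally as below. this is a sketch under heavy assumptions: virtual transactions are plain dicts, the prevhash is treated as an integer, and "the execution reaches the sign opcode" is modelled as a flag on the final transaction rather than actual contract execution.

```python
def valid_vtx_signature(vtxs: list, prevhash: int) -> bool:
    """a signature is a sequence of virtual transactions whose nonces count up
    from the prevhash being signed, and whose execution ends in a SIGN call
    (modelled here as op == "SIGN" on the last vtx)."""
    if not vtxs:
        return False
    for i, vtx in enumerate(vtxs):
        if vtx["nonce"] != prevhash + i:  # first vtx carries the prevhash, then +1 each
            return False
    return vtxs[-1].get("op") == "SIGN"
```

binding the nonces to the prevhash is what makes the signature commit to a specific block: the same vtx sequence cannot be replayed as a signature for a different parent.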
when the block is processed, the state transitions from the vtxs are reverted (this is what is meant by "virtual") but a deposit is subtracted from each signing contract and the contract is registered to receive the deposit and reward in blocks. basically, it is the contract’s job to determine the access policy for signing, and the contract does this by placing the sign opcode behind the appropriate set of conditional clauses. a signature now becomes a set of transactions which together satisfy this access policy. the incentive for contract developers to keep this policy secure, and not dole it out to anyone who asks, is that if it is not secure then someone can double-sign with it and destroy the signing deposit, taking a portion for themselves as per the slasher protocol. some contracts will still delegate, but this is unavoidable; even in proof-of-stake systems for plain currencies such as nxt, many users end up delegating (eg. dpos even goes so far as to institutionalize delegation), and at least here contracts have an incentive to delegate to an access policy that is not likely to come under the influence of a hostile entity - in fact, we may even see an equilibrium where contracts compete to deliver secure blockchain-based stake pools that are least likely to double-vote, thereby increasing security over time. however, the virtual-transactions-as-signatures paradigm does impose one complication: it is no longer trivial to provide an evidence transaction showing two signatures by the same signer at the same block height. because the result of a transaction execution depends on the starting state, in order to ascertain whether a given evidence transaction is valid one must prove everything up to the block in which the second signature was given. thus, one must essentially “include” the fork of a blockchain inside of the main chain. 
to do this efficiently, a relatively simple proposal is a sort of "slasher ghost" protocol, where one can include side-blocks in the main chain as uncles. specifically, we declare two new transaction types:

- [block_number, uncle_hash] - this transaction is valid if (1) the block with the given uncle_hash has already been validated, (2) the block with the given uncle_hash has the given block number, and (3) the parent of that uncle is either in the main chain or was included earlier as an uncle. during the act of processing this transaction, if addresses that double-signed at that height are detected, they are appropriately penalized.
- [block_number, uncle_parent_hash, vtx] - this transaction is valid if (1) the block with the given uncle_parent_hash has already been validated, (2) the given virtual transaction is valid at the given block height with the state at the end of uncle_parent_hash, and (3) the virtual transaction shows a signature by an address which also signed a block at the given block_number in the main chain. this transaction penalizes that one address.

essentially, one can think of the mechanism as working like a "zipper", with one block from the fork chain at a time being zipped into the main chain. note that for a fork to start, there must exist double-signers at every block; there is no situation where there is a double-signer blocks into a fork so a whistleblower must "zip" innocent blocks into a chain before getting to the target block - rather, in such a case, even if blocks need to be added, each one of them notifies the main chain about five separate malfeasors that double-signed at that height. one somewhat complicated property of the scheme is that the validity of these "slasher uncles" depends on whether or not the node has validated a particular block outside of the main chain; to facilitate this, we specify that a response to a "getblock" message in the wire protocol must include the uncle-dependencies for a block before the actual block.
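the "zipper" can be sketched as a fold over the fork, one uncle at a time, collecting the double-signers revealed at each height. this is a toy model assuming we already know each block's signer set as plain addresses; in the real protocol the signatures are virtual transactions whose validity must be checked against the uncle-parent's state.

```python
def zip_fork(main_signers_by_height: dict, fork_blocks: list) -> dict:
    """main_signers_by_height: height -> set of addresses that signed the
    main-chain block at that height.
    fork_blocks: (height, signer_set) pairs from the fork, oldest first,
    zipped into the main chain one uncle at a time.
    returns height -> addresses penalized for signing both chains there."""
    penalized = {}
    for height, signers in fork_blocks:
        double = set(signers) & set(main_signers_by_height.get(height, ()))
        if double:
            penalized[height] = double
    return penalized
```

as the text notes, a fork cannot exist without double-signers at every height, so each zipped uncle exposes at least one malfeasor rather than forcing the whistleblower to include innocent blocks for nothing.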
note that this may sometimes lead to a recursive expansion; however, the denial-of-service potential is limited since each individual block still requires a substantial quantity of proof-of-work to produce.

blockmakers and overrides

finally, there is a third complication. in the hybrid-proof-of-stake version of slasher, if a miner has an overwhelming share of the hashpower, then the miner can produce multiple versions of each block and send different versions to different parts of the network. half the signers will see and sign one block, half will see and sign another, and the network will be stuck with two blocks with insufficient signatures, and no signer willing to slash themselves to complete the process; thus, a proof-of-work override will be required - a dangerous situation, since the miner controls most of the proof-of-work. there are two possible solutions here: (1) signers should wait a few seconds after receiving a block before signing, and only sign stochastically in some fashion that ensures that a random one of the blocks will dominate. (2) there should be a single "blockmaker" among the signers whose signature is required for a block to be valid. effectively, this transfers the "leadership" role from a miner to a stakeholder, eliminating the problem, but at the cost of adding a dependency on a single party that now has the ability to substantially inconvenience everyone by not signing, or unintentionally by being the target of a denial-of-service attack. such behavior can be disincentivized by having the signer lose part of their deposit if they do not sign, but even still this will result in a rather jumpy block time if the only way to get around an absent blockmaker is using a proof-of-work override. one possible solution to the problem in (2) is to remove proof of work entirely (or almost entirely, keeping a minimal amount for anti-ddos value), replacing it with a mechanism that vlad zamfir has coined "delegated timestamping".
essentially, every block must appear on schedule (eg. at second intervals), and when a block appears the signers vote if the block was on time, or if the block was too early or too late. if the majority of the signers votes , then the block is treated as invalid - kept in the chain in order to give the signers their fair reward, but the blockmaker gets no reward and the state transition gets skipped over. voting is incentivized via schellingcoin - the signers whose vote agrees with the majority get an extra reward, so assuming that everyone else is going to be honest everyone has the incentive to be honest, in a self-reinforcing equilibrium. the theory is that a -second block time is too fast for signers to coordinate on a false vote (the astute reader may note that the signers were decided blocks in advance so this is not really true; to fix this we can create two groups of signers, one pre-chosen group for validation and another group chosen at block creation time for timestamp voting).

putting it all together

taken together, we can thus see something like the following working as a functional version of slasher:

- every block has a designated blockmaker, a set of designated signers, and a set of designated timestampers.
- for a block to be accepted as part of the chain it must be accompanied by virtual-transactions-as-signatures from the blockmaker, two thirds of the signers and timestampers, and the block must have some minimal proof of work for anti-ddos reasons (say, targeted to . x per year).
- during block n, we say that the set of potential signers of block n + is the set of addresses such that sha (address + block[n].hash) < block[n].balance(address) * d where d is a difficulty parameter targeting signers per block (ie. if block n has less than signers it goes down otherwise it goes up).
- if a potential signer for block n + wants to become a signer, they must send a special transaction accepting this responsibility and supplying a deposit, and that transaction must get included between blocks n + and n + . the set of designated signers for block n + is the set of all individuals that do this, and the blockmaker is the designated signer with the lowest value for sha (address + block[n].hash). if the signer set is empty, no block at that height can be made.
- for blocks ... , the blockmaker and only signer is the protocol developer.
- the set of timestampers of block n + is the set of addresses such that sha (address + block[n].hash) < block[n].balance(address) * d , where d is targeted such that there is an average of timestampers each block (ie. if block n has less than timestampers it goes down otherwise it goes up).
- let t be the timestamp of the genesis block. when block n + is released, timestampers can supply virtual-transactions-as-signatures for that block, and have the choice of voting or on the block. voting means that they saw the block within . seconds of time t + (n + ) * , and voting means that they received the block when the time was outside that range. note that nodes should detect if their clocks are out of sync with everyone else's clocks on the blockchain, and if so adjust their system clocks.
- timestampers who voted along with the majority receive a reward; other timestampers get nothing.
- the designated signers for block n + have the ability to sign that block by supplying a set of virtual-transactions-as-a-signature. all designated signers who sign are scheduled to receive a reward and their returned deposit in block n + . signers who skipped out are scheduled to receive their returned deposit minus twice the reward (this means that it's only economically profitable to sign up as a signer if you actually think there is a chance greater than / that you will be online).
- if the majority timestamper vote is , the blockmaker is scheduled to receive a reward and their returned deposit in block n + . if the majority timestamper vote is , the blockmaker is scheduled to receive their deposit minus twice the reward, and the block is ignored (ie. the block is in the chain, but it does not contribute to the chain's score, and the state of the next block starts from the end state of the block before the rejected block).
- if a signer signs two different blocks at height n + , then if someone detects the double-signing before block n + they can submit an "evidence" transaction containing the two signatures to either or both chains, destroying the signer's reward and deposit and transferring a third of it to the whistleblower.
- if there is an insufficient number of signers to sign or the blockmaker is missing at a particular block height h, the designated blockmaker for height h + can produce a block directly on top of the block at height h - after waiting for seconds instead of .

after years of research, one thing has become clear: proof of stake is non-trivial - so non-trivial that some even consider it impossible. the issues of nothing-at-stake and long-range attacks, and the lack of mining as a rate-limiting device, require a number of compensatory mechanisms, and even the protocol above does not address the issue of how to randomly select signers. with a substantial proof of work reward, the problem is limited, as block hashes can be a source of randomness and we can mathematically show that the gain from holding back block hashes until a miner finds a hash that favorably selects future signers is usually less than the gain from publishing the block hashes. without such a reward, however, other sources of randomness such as low-influence functions need to be used. for ethereum .
, we consider it highly desirable to both not excessively delay the release and not try too many untested features at once; hence, we will likely stick with asic-resistant proof of work, perhaps with non-slasher proof of activity as an addon, and look at moving to a more comprehensive proof of stake model over time.

dshr's blog: proofs of space

i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation. thursday, march , proofs of space. bram cohen, the creator of bittorrent, gave an ee talk entitled stopping grinding attacks in proofs of space. two aspects were really interesting: a detailed critique of both the proof of work system used by most cryptocurrencies and blockchains, and of schemes such as proof of stake that have been proposed to replace it; and an alternate scheme for securing blockchains based on combining proof of space with verifiable delay functions. but there was another aspect that concerned me. follow me below the fold for details. i'll first outline cohen's critiques of proof of work, proof of stake, and proofs of other things, then summarize his proposed scheme combining proof of space with verifiable delay functions (posp/vdf), and conclude by addressing the aspects of his talk that concerned me. i have tried to refer to quotes, concepts and slides by the time at which they appear in the video thus [mm:ss], but these times are approximate. i apologize if the quotes are mis-transcribed from the audio.

proof of work

the goal of posp/vdf is to reduce greatly the cost of verifying blocks in a blockchain.
but "zero cost brings grinding attacks", in which an attacker tries lots of possibilities to find the best one. "bitcoin fixes this by making grinding the expected behavior rather than an attack" using proof of work, in which peers try to invert a hash function. the computational cost of the vast number of hashes needed is the cause of bitcoin's massive energy consumption.

(figure: cost of rewriting attack)

cohen points out that proof of work is "also susceptible to rewriting attacks". bitcoin core describes rewriting attacks thus:

powerful miners have the ability to rewrite the block chain and replace their own transactions, allowing them to take back previous payments. the cost of this attack depends on the percentage of total network hash rate the attacking miner controls. the more centralized mining becomes, the less expensive the attack for a powerful miner.

they provide a real example:

in september , someone used centralized mining pool ghash.io to steal an estimated , bitcoins (worth $ , usd) from the gambling site betcoin. the attacker would spend bitcoins to make a bet. if he won, he would confirm the transaction. if he lost, he would create a transaction returning the bitcoins to himself and confirm that, invalidating the transaction that lost the bet. by doing so, he gained bitcoins from his winning bets without losing bitcoins on his losing bets. although this attack was performed on unconfirmed transactions, the attacker had enough hash rate (about %) to have profited from attacking transactions with one, two, or even more confirmations.

more details of this attack are here.

proof of stake

proof of stake is an alternative consensus system for cryptocurrencies in which holders put some or all of their holdings "at stake", a process known as "bonding". blocks of transactions are validated by a quorum of the currency "at stake". misbehavior by holders is deterred by "slashing": miscreants lose their stake.
it appears attractive, in that it can vastly reduce the energy demand of proof of work systems such as bitcoin's. it is especially attractive to core team members and early adopters of a cryptocurrency, since they will be large holders and will thus be able to control the currency. cohen's critique of proof of stake starts around [ : ] and covers six main points:

- its threat model is weaker than proof of work.
- just as proof of work is in practice centralized around large mining pools, proof of stake is centralized around large currency holdings (which were probably acquired much more cheaply than large mining installations).
- the choice of a quorum size is problematic. "too small and it's attackable. too large and nothing happens." and "unfortunately, those values are likely to be on the wrong side of each other in practice."
- incentivizing peers to put their holdings at stake creates a class of attacks in which peers "exaggerate one's own bonding and blocking it from others."
- slashing introduces a class of attacks in which peers cause others to be fraudulently slashed.
- the incentives need to be strong enough to overcome the risks of slashing, and of keeping signing keys accessible and thus at risk of compromise. "defending against those attacks can lead to situations where the system gets wedged because a split happened and nobody wants to take one for the team"

proofs of other things

cohen's critique of other types of proof starts around [ : ]. the main points are:

- peers need to be able to audit proofs locally, which isn't true of proofs of participation, importance or more nodes.
- the system needs to adjust the "difficulty" of proofs, which requires an exponential distribution of "difficulty", which these other types typically lack.

in particular, at [ : ] he critiques "proof of storage of user-supplied data" as used by services such as maidsafe.
an attacker doesn't have to store the amount of data they requested: "because you can just take a key and use it to ... generate completely fake data which everyone else has to store the data to claim their rewards later and you yourself don't have to do that, you just store the seed and there's no way to detect that because they are all supposed to be encrypted files that just look like garbage anyway".

proof of space and verifiable delay functions

cohen's explanation of posp/vdf starts at [ : ]. his summary is:

- bring in verifiable delay functions (vdfs) and alternate between proofs of space and time
- split between the 'trunk' of the blockchain which challenges come from and the 'foliage' which contains transactions
- attach public keys to proofs of space so there's no choice after one wins
- use canonical proofs of space, verifiable delay functions, and signatures
- all we have to do is invent a new proof of space algorithm, verifiable delay algorithm, and method of combining them

my understanding of posp/vdf may be deficient. i have read beyond hellman's time-memory trade-offs with applications to proofs of space, which explains the posp technique, but the mathematics exceed my capabilities. the vdf technique doesn't appear to have been published yet. as i understand it, the proof of space technique in essence works by having the prover fill storage space with an array of pseudo-random points in [ , ] via a time-consuming process. the verifier can then pose to the prover a question that can be answered either by a single storage access (fast) or by repeating the process of filling the storage (slow). by observing the time the prover takes, the verifier can distinguish these two cases, and thus be assured that the prover has stored the (otherwise useless) data. as i understand it, verifiable delay functions work by forcing the prover to perform a specified number of iterations to generate a value that the verifier can quickly show is valid.
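as a toy model of that space/lookup asymmetry: filling the plot touches every index (slow, done once), while answering a challenge is a single nearest-point lookup (fast). everything here is an illustrative assumption on my part - the real construction uses hellman-style tables, not a flat sorted list - but it shows why a prover who kept the plot answers quickly and one who discarded it must redo the slow fill.

```python
import bisect
import hashlib

def fill_plot(seed: bytes, n: int) -> list:
    """slow, one-time setup: derive n pseudo-random points in [0, 1) from a seed"""
    points = []
    for i in range(n):
        digest = hashlib.sha3_256(seed + i.to_bytes(8, "big")).digest()
        points.append(int.from_bytes(digest[:8], "big") / 2**64)
    return sorted(points)

def answer_challenge(points: list, challenge: float) -> float:
    """fast response: binary-search the stored plot for the point closest to the
    challenge; a prover without the plot would have to re-run fill_plot"""
    i = bisect.bisect_left(points, challenge)
    candidates = points[max(0, i - 1):i + 1]  # closest point is a neighbour of the insertion index
    return min(candidates, key=lambda p: abs(p - challenge))
```

in the real scheme the verifier never sees the plot; it only observes response time, which is what separates "stored it" from "regenerating it".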
cohen describes how these two techniques work together in a blockchain at about [ : ]: to use them in a blockchain, each block is a proof of space followed by a proof of time which finalizes it. to find a proof of space, take the hash of the last proof of time, map it to a point in [ , ], and find the closest proof of space you can to that point. to find the number of iterations of the proof of time, multiply the difference between those two positions by the current work difficulty factor and round up to the next integer. the result is that the best proof of space will finish first, with the distribution of arrival times of finalizations the same as in a proof of work system if resources are fixed over time. the only discretion left to farmers is whether to withhold their winning proofs of space. in other words, the winning verification of a block will be the one from the peer whose plot contains the closest point, because that distance controls how long it will take for the peer to return its verification via the verifiable delay function. the more storage a peer devotes to its plot, and thus the shorter the average distance between points, the more likely it is to be the winner, because its proof of space will suffer the shortest delay. just as with bitcoin's proof of work, farming pools would arise in posp/vdf to smooth out income. the pool would divide up [ , ] among its members.

concerns

one aspect of the talk that concerned me was that cohen didn't seem well-informed about the landscape of storage. here are a few quotes:

media       shipments (exabytes)   revenue   $/gb
flash                              $ . b     $ .
hard disk                          $ . b     $ .
lto tape                           $ . b     $ .

[ : ] "people spend like $ b/year on storage media". robert fontana and gary decad of ibm report that the total revenue of storage media vendors (excluding optical) was $ b. but cohen is only interested in hard disk, which they report had revenue of $ . b.
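to see why the biggest plot usually wins under this rule, here is a toy monte-carlo sketch of the dynamic. it illustrates the mechanism cohen describes (delay proportional to distance), not his actual algorithm; the plot sizes and difficulty factor are arbitrary example values.

```python
# Toy simulation of the winner-selection dynamic: each farmer's proof of
# time must run ceil(distance * difficulty) iterations, where distance is
# from the challenge to the closest point in its plot, so the farmer with
# the most storage usually finalizes first.
import math
import random

def closest_distance(challenge, plot):
    return min(abs(p - challenge) for p in plot)

def winner(plots, difficulty=1_000_000):
    """plots maps a farmer name to its list of points in [0, 1).
    The farmer whose proof of time needs the fewest iterations wins."""
    challenge = random.random()
    iterations = {name: math.ceil(closest_distance(challenge, plot) * difficulty)
                  for name, plot in plots.items()}
    return min(iterations, key=iterations.get)

random.seed(0)
plots = {
    "big farmer": [random.random() for _ in range(1000)],   # 10x the storage
    "small farmer": [random.random() for _ in range(100)],
}
wins = {"big farmer": 0, "small farmer": 0}
for _ in range(500):
    wins[winner(plots)] += 1
# with 10x the points, "big farmer" should win roughly 10 rounds in 11
```

this is also why pools that divide up [ , ] among members would smooth out the small farmer's income, exactly as mining pools do for proof of work.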
cohen's first slide claims "storage is an over $ billion a year industry with about % utilization". the vast majority of hard disks are now purchased by cloud companies. the idea that, for example, amazon web services has purchased twice as much hard disk as it needs is ludicrous. [ : ] "you have this thing where mass storage medium you can set a bit and leave it there until the end of time and its not costing you any more power. dram is costing you power when its just sitting there doing nothing". a state-of-the-art disk drive, such as seagate's tb barracuda pro, consumes about w spun-down in standby mode, about w spun-up idle, and about w doing random k reads. clearly, posp/vdf takes energy, just a lot less energy than proof of work. cohen might argue that, since posp/vdf expects to use the empty % of drives that store actual user data, the energy cost is zero. but these drives are not just part empty, they are in standby much of the time too. using them for proof of space means they are active somewhat more of the time, because they have to wake up from standby at least once every block time. they are thus consuming energy that the owner has to pay for. also, the drives are typically warranted not "until the end of time" but only for years. there is another economic impact. the consumer drives i believe he is thinking about are intended for consumer workloads, and are thus designed to be idle much of the time. they are specified with a "rated workload". the tb barracuda pro specifies: maximum rate of tb/year. workloads exceeding the annualized rate may degrade the drive mtbf and impact product reliability. the annualized workload rate is in units of tb per year, or tb per power-on hours. thus the drive is specified to average no more than about gb/hr over its lifetime. the drive has a maximum sustained transfer rate of . gb/s, or gb/hr. so the drive is designed for a duty cycle of about %.
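the duty-cycle arithmetic is easy to reproduce. the sketch below uses illustrative placeholder numbers for a hypothetical consumer drive (a rated workload of 300 tb/year and a 0.25 gb/s sustained rate); these are example values chosen for the sketch, not quoted from any datasheet.

```python
# Duty-cycle estimate following the reasoning above: compare the average
# transfer rate the rated annual workload allows against the drive's
# maximum sustained rate. The spec values passed in are placeholders.
HOURS_PER_YEAR = 8760

def duty_cycle(rated_workload_tb_per_year: float,
               sustained_rate_gb_per_s: float) -> float:
    """Fraction of the time the drive could transfer data flat-out
    without exceeding its rated annual workload."""
    allowed_gb_per_hour = rated_workload_tb_per_year * 1000 / HOURS_PER_YEAR
    max_gb_per_hour = sustained_rate_gb_per_s * 3600
    return allowed_gb_per_hour / max_gb_per_hour

# a hypothetical drive rated for 300 TB/year at 0.25 GB/s sustained:
cycle = duty_cycle(300, 0.25)   # about 0.038, i.e. a duty cycle near 4%
```

any workload that raises the duty cycle above this figure, as continuous proof-of-space lookups might, eats into the drive's rated lifetime.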
the proof of space workload would increase the drives' duty cycle somewhat, since it requires a few disk accesses per block time. it would thus wear the drives out somewhat faster, imposing further costs on the storage owner.

write endurance vs. cell size

[ : ] "my understanding is that the overwriting limits on flash has gotten better over time". as flash has added more bits per cell, the write endurance has decreased, as shown in the table. enterprise flash obscures this unfortunate fact by over-provisioning cells, spreading the writes across more cells, but this is costly. another aspect of the talk that concerned me was that, as i understand it, cohen's vision is of a posp/vdf network comprising large numbers of desktop pcs, continuously connected and powered up, each with one, or at most a few, half-empty hard drives purchased at retail a few years ago. the desktop pc is a declining market, and the devices that are displacing it, such as laptops, smartphones, tablets, and small form-factor pcs, use flash storage, are intermittently connected, and asleep whenever possible. thus proof of space addresses a declining market. the edge of the internet has changed since cohen invented bittorrent. these changes have affected the market for hard disk. flash is increasingly destroying the markets for enterprise drives and for . " laptop drives. on the other hand, "the cloud" has increased demand for "nearline" and consumer . " drives. these are purchased in bulk, at prices far lower than retail, by customers such as amazon, who cannot afford to have % of their investment in storage media sitting empty. nevertheless, the small proportion of their total inventory that is empty on average gives them much more empty storage than any retail customer.
cohen's first slide claims "as long as rewards are below depreciation value it will be unprofitable to buy storage just for farming". suppose a network matching cohen's vision existed; how would economies of scale affect it? the cloud companies' empty drives are brand new, purchased in bulk, direct from the manufacturer, at a huge discount. that is three reasons why they would have a much lower cost per byte than the drives in the network's nodes. if the cloud companies chose to burn in their new drives by using them for proof of space, they would easily dominate the network at almost zero cost. once again, economies of scale in peer-to-peer networks was prophetic.

ps - i have not had time to analyze the filecoin proposal in detail. it has some similarities to posp/vdf, but many significant differences: it stores user data rather than verifying transactions, it uses a combined proof of spacetime rather than separate proofs, and it employs bonding and slashing. it also appears to depend upon lightning-style off-chain payment channels.

posted by david. at : am labels: bitcoin, storage media

comments:

david. said... david gerard's ethereum casper proof-of-stake only has to work well enough: worse is better in action reports: "proof-of-stake is a bit too obviously “thems what has, gets” — so you have to convince the users to go along with it. it's also as naturally centralising as proof-of-work, if not more so. ... vitalik buterin has been talking about proof-of-stake since . ethereum was intended to be proof-of-stake in the white paper, but buterin noted in that proof-of-stake would be nontrivial — so ethereum went for proof-of-work instead, while they worked on the problem. ... ethereum casper, the project to move ethereum to proof-of-stake, started in . it's been six months away for four years now. the proof of stake faq is a list of approaches that haven't quite worked well enough. casper has had numerous technical and security issues.
the current version only adds a bit of proof-of-stake to the existing proof-of-work system. but casper . has just been released — and buterin is talking about taking it live." may , at : pm

david. said... more from chia network, the company building on bram's ideas, in the asic resistance of proof of space. august , at : pm

david. said... back in , ron rivest posed a cryptographic puzzle based on a verifiable delay function (vdf). estimates at the time were that it would take until the s to solve it. but, as katyanna quach reports in self-taught belgian bloke cracks crypto conundrum that was supposed to be uncrackable until : "fabrot wrote the equation in a few lines of c code and called upon the gnu multiple precision arithmetic library, a free mathematical software library, to run the computation over trillion times. he used a bog standard pc with an intel core i - processor, and took about three and a half years to finally complete over trillion calculations." april , at : am

david. said... bram cohen isn't the only skeptic about proof of stake; messari ceo: ethereum . proof-of-stake transition not to happen until at least , by helen partz, reports on discussions at the ethereal summit. may , at : am

david. said... as i predicted three years ago, once the chia network got started the cloud providers' economies of scale would kick in. jamie crawley now reports that amazon offers mining in the cloud for new chia cryptocurrency. may , at : am

david. said... david gerard's chia is a new way to waste resources for cryptocurrency is the summary of the rollout of the chia network that i wish i had written. go read it. may , at : pm

david. said... chia cryptocurrency, started by bittorrent creator bram cohen, engaging in obnoxiously bogus trademark bullying by mike masnick reports on chia's efforts to make friends and influence people.
june , at : am
visit the collection | barnes foundation

the barnes collection, ongoing: a story behind every object. william james glackens. the raft (detail). bf . public domain. adults $ ; students $ ; members free.

about the collection

the barnes is home to one of the world's greatest collections of impressionist, post-impressionist, and early modern paintings, with especially deep holdings in renoir, cézanne, matisse, and picasso. assembled by dr. albert c. barnes between and , the collection also includes important examples of african art, native american pottery and jewelry, pennsylvania german furniture, american avant-garde painting, and wrought-iron metalwork. the minute you step into the galleries of the barnes collection, you know you're in for an experience like no other. masterpieces by vincent van gogh, henri matisse, and pablo picasso hang next to ordinary household objects—a door hinge, a spatula, a yarn spinner. on another wall, you might see a french medieval sculpture displayed with a navajo textile. these dense groupings, in which objects from different cultures, time periods, and media are all mixed together, are what dr. barnes called his "ensembles." the ensembles, each one meticulously crafted by dr.
barnes himself, are meant to draw out visual similarities between objects we don't normally think of together. created as teaching tools, they were essential to the educational program dr. barnes developed in the s.

the main gallery upon entering the barnes foundation collection. © michael moran/otto

one of the first paintings purchased by dr. barnes: vincent van gogh. the postman (joseph-Étienne roulin), . bf . public domain.

dr. barnes began collecting in . after making a fortune in the pharmaceutical business, he turned his attention to building "the greatest modern art collection" of his time. in february of that year, he sent his friend, the artist william glackens, to paris with instructions to bring back paintings by the french avant-garde. glackens returned with works, including van gogh's the postman ( ) and picasso's young woman holding a cigarette ( ). dr. barnes quickly established himself as a bold and ambitious collector, traveling frequently to new york and paris, buying from dealers and sometimes directly from artists. over the course of four decades, he assembled what is now considered one of the world's greatest collections of impressionist, post-impressionist, and early modern european paintings, with works by paul cézanne, henri matisse, pablo picasso, amedeo modigliani, pierre-auguste renoir, and chaïm soutine.

great paintings at the barnes

the collection has the world's largest holdings of paintings by renoir ( ) and cézanne ( ), as well as significant works by matisse, picasso, modigliani, van gogh, and other renowned artists.

henri matisse. le bonheur de vivre, – . bf . © succession h. matisse / artists rights society (ars), new york. paul cézanne. the card players (les joueurs de cartes), – . bf . public domain. georges seurat. models (poseuses), – . bf . public domain. amedeo modigliani. jeanne hébuterne, . bf . public domain. pablo picasso. young woman holding a cigarette (jeune femme tenant une cigarette), . bf .
© estate of pablo picasso / artists rights society (ars), new york. pierre-auguste renoir. mussel-fishers at berneval (pêcheuses de moules à berneval, côte normande). . bf . public domain. claude monet. the studio boat (le bateau-atelier), . bf . © estate of claude monet

in , dr. barnes chartered the foundation as an educational institution for teaching people how to look at art. he was inspired by the writings of philosopher john dewey, who emphasized the importance of education in a truly democratic society, and he decided to devote his whole collection to the project. he commissioned architect paul cret to design a gallery (the original building is still in merion); he hired a teaching staff, including the legendary violette de mazia; and the barnes foundation opened for classes in . meanwhile, dr. barnes continued to build his collection. though still focused on european modernism, his interests extended into other areas as well. in the early s, he added a stunning group of african masks and sculptures to his collection—there are over —which he acquired from the french art dealer paul guillaume. he also began to purchase native american pottery, jewelry, and textiles; old master paintings; ancient egyptian, greek, and roman art; and american and european decorative and industrial arts, including wrought-iron objects. dr. barnes continued collecting until his death in . the wall ensembles are still arranged exactly as he left them.

dr. barnes, in his merion gallery. the ensembles created by dr. barnes combine art and craft, cosmopolitan and provincial styles, and objects from across periods and cultures. © michael moran/otto

learn more about the collection

barnes focus, our mobile gallery guide: enhance your experience on-site with our smartphone guide, which offers information and stories about the art and objects in the collection.

the collection online: the design of our collection online was inspired by dr. barnes and his approach to looking at art.
you can browse , -plus objects by color, light, line, and space, making unexpected and exciting connections between pieces from different eras, places, and cultures.

research notes: the barnes has a team of curators, scholars, conservators, and archivists actively engaged in research about the works in our galleries. read about some of our most recent discoveries and theories.

library, archives, and special collections: want to learn more about dr. barnes, his collection, and the barnes foundation? our archives, art library, and manuscript and rare book collections are rich research resources.

conservation: our conservation team has the difficult but rewarding task of caring for the art collection. if you have questions about the barnes collection, please email us.

mark (kjv)

the parable of the sower (matthew : - ; luke : - )

and he began again to teach by the sea side: and there was gathered unto him a great multitude, so that he entered into a ship, and sat in the sea; and the whole multitude was by the sea on the land.
and he taught them many things by parables, and said unto them in his doctrine, hearken; behold, there went out a sower to sow: and it came to pass, as he sowed, some fell by the way side, and the fowls of the air came and devoured it up. and some fell on stony ground, where it had not much earth; and immediately it sprang up, because it had no depth of earth: but when the sun was up, it was scorched; and because it had no root, it withered away. and some fell among thorns, and the thorns grew up, and choked it, and it yielded no fruit. and other fell on good ground, and did yield fruit that sprang up and increased; and brought forth, some thirty, and some sixty, and some an hundred. and he said unto them, he that hath ears to hear, let him hear. the purpose of jesus' parables (matthew : - ) and when he was alone, they that were about him with the twelve asked of him the parable. and he said unto them, unto you it is given to know the mystery of the kingdom of god: but unto them that are without, all these things are done in parables: that seeing they may see, and not perceive; and hearing they may hear, and not understand; lest at any time they should be converted, and their sins should be forgiven them. the parable of the sower explained (matthew : - ) and he said unto them, know ye not this parable? and how then will ye know all parables? the sower soweth the word. and these are they by the way side, where the word is sown; but when they have heard, satan cometh immediately, and taketh away the word that was sown in their hearts. and these are they likewise which are sown on stony ground; who, when they have heard the word, immediately receive it with gladness; and have no root in themselves, and so endure but for a time: afterward, when affliction or persecution ariseth for the word's sake, immediately they are offended. 
and these are they which are sown among thorns; such as hear the word, and the cares of this world, and the deceitfulness of riches, and the lusts of other things entering in, choke the word, and it becometh unfruitful. and these are they which are sown on good ground; such as hear the word, and receive it, and bring forth fruit, some thirtyfold, some sixty, and some an hundred. the lesson of the lamp (luke : - ) and he said unto them, is a candle brought to be put under a bushel, or under a bed? and not to be set on a candlestick? for there is nothing hid, which shall not be manifested; neither was any thing kept secret, but that it should come abroad. if any man have ears to hear, let him hear. and he said unto them, take heed what ye hear: with what measure ye mete, it shall be measured to you: and unto you that hear shall more be given. for he that hath, to him shall be given: and he that hath not, from him shall be taken even that which he hath. the seed growing secretly and he said, so is the kingdom of god, as if a man should cast seed into the ground; and should sleep, and rise night and day, and the seed should spring and grow up, he knoweth not how. for the earth bringeth forth fruit of herself; first the blade, then the ear, after that the full corn in the ear. but when the fruit is brought forth, immediately he putteth in the sickle, because the harvest is come. the parable of the mustard seed (matthew : - ; luke : - ) and he said, whereunto shall we liken the kingdom of god? or with what comparison shall we compare it? it is like a grain of mustard seed, which, when it is sown in the earth, is less than all the seeds that be in the earth: but when it is sown, it groweth up, and becometh greater than all herbs, and shooteth out great branches; so that the fowls of the air may lodge under the shadow of it. and with many such parables spake he the word unto them, as they were able to hear it. 
but without a parable spake he not unto them: and when they were alone, he expounded all things to his disciples. jesus stills the storm (matthew : - ; luke : - ) and the same day, when the even was come, he saith unto them, let us pass over unto the other side. and when they had sent away the multitude, they took him even as he was in the ship. and there were also with him other little ships. and there arose a great storm of wind, and the waves beat into the ship, so that it was now full. and he was in the hinder part of the ship, asleep on a pillow: and they awake him, and say unto him, master, carest thou not that we perish? and he arose, and rebuked the wind, and said unto the sea, peace, be still. and the wind ceased, and there was a great calm. and he said unto them, why are ye so fearful? how is it that ye have no faith? and they feared exceedingly, and said one to another, what manner of man is this, that even the wind and the sea obey him?

king james bible text courtesy of bibleprotector.com. section headings courtesy int bible © , used by permission. bible hub

library tech talk - u-m library

technology innovations and project updates from the u-m library i.t. division

digital collections completed july - june

digital content & collections (dcc) relies on content and subject experts to bring us new digital collections. from july to june , our digital collections received . million views. during the pandemic, when there was an increased need for digital resources, usage of the digital collections jumped to . million views (july - june ) and million views (july - june ). thank you to the many people, too numerous to reasonably list here, who are involved not just in the creation of these digital collections but in the continued maintenance of these and hundreds of other digital collections that reach users around the world to advance research and provide access to materials.
library it services portfolio

academic library service portfolios are mostly a mix of big-to-small strategic initiatives and tactical projects. systems developed in the past can become a durable bedrock of workflows and services around the library, remaining relevant and needed for five, ten, and sometimes as long as twenty years. there is, of course, never enough time and resources to do everything. the challenge faced by library it divisions is to balance the tension of sustaining these legacy systems while continuing to innovate and develop new services. the university of michigan's library it portfolio has legacy systems in need of ongoing maintenance and support, in addition to new projects and services that add to and expand the portfolio. we, at michigan, worked on a process to balance the portfolio of services and projects for our library it division. we started working on the idea of developing a custom tool for our needs, since the other available tools are oriented towards corporate organizations and we needed a lightweight tool to support our process. we went through a complete planning process, first on whiteboards and paper, then developed an open source tool called tracc to help us with portfolio management.

keys to a dazzling library website redesign

the u-m library launched a completely new primary website in july after years of work. the redesign project team focused on building a strong team, internal communication, content strategy, and practicing needs-informed design and development to make the project a success.

sweet sixteen: digital collections completed july - june

digital content & collections (dcc) relies on content and subject experts to bring us new digital collections. this year, digital collections were created or significantly enhanced.
here you will find links to videos and articles by the subject experts speaking in their own words about the digital collections they were involved in and why they found it so important to engage in this work with us. thank you to all of the people involved in each of these digital collections!

adding ordered metadata fields to samvera hyrax

how to add ordered metadata fields in samvera hyrax. includes example code and links to actual code.

sinking our teeth into metadata improvement

like many attempts at revisiting older materials, working with a couple dozen volumes of dental pamphlets started very simply but ended up being an interesting opportunity to explore the challenges of making the diverse range of materials held in libraries accessible to patrons in a digital environment. and while improving metadata may not sound glamorous, having sufficient metadata for users to be able to find what they are looking for is essential to the utility of digital libraries.

collaboration and generosity provide the missing issue of the american jewess

what started with a bit of wondering and conversation within our unit of the library led to my reaching out to princeton university with a request but no expectations of having that request fulfilled. individuals at princeton, however, considered the request and agreed to provide us with the single issue of the american jewess that we needed to complete the full run of the periodical within our digital collection. especially in these stressful times, we are delighted to bring you a positive story, one of collaboration and generosity across institutions, while also sharing the now-complete digital collection itself.

how to stop being negative, or digitizing the harry a. franck film collection

this article reviews how , + frames of photographic negatives from the harry a. franck collection are being digitally preserved.

combine metadata harvester: aggregate all the data!
the digital public library of america (dpla) has collected and made searchable a vast quantity of metadata from digital collections all across the country. the michigan service hub works with cultural heritage institutions throughout the state to collect their metadata, transform those metadata to be compatible with the dpla's online library, and send the transformed metadata to the dpla, using the combine aggregator software, which is being developed here at the u of m library.

hacks with friends retrospective: a pitch to hitch

when the students go on winter break, i go to hacks with friends (hwf), and i highly recommend and encourage everyone who can to participate in hwf. not only is it two days of free breakfast, lunch, and snacks at the ross school of business, but it's a chance to work with a diverse cross-section of faculty, staff, and students on innovative solutions to complex problems.

commonplace.net – data. the final frontier.

infrastructure for heritage institutions – open and linked data
june , | lukas koster | data, infrastructure, library

in my june post in this series, "infrastructure for heritage institutions – change of course", i said: "the results of both data licences and the data quality projects (object pid's, controlled vocabularies, metadata set) will go into the new data publication project, which will be undertaken in the second half of . this project is aimed at publishing our collection data as open and linked data in various formats via various channels. a […]" read more

infrastructure for heritage institutions – ark pid's
november , | lukas koster | data, infrastructure, library

in the digital infrastructure program at the library of the university of amsterdam we have reached a first milestone.
in my previous post in the infrastructure for heritage institutions series, "change of course", i mentioned the coming implementation of ark persistent identifiers for our collection objects. since november , ark pid's are available for our university library alma catalogue through the primo user interface. implementation of ark pid's for the other collection description systems […] read more

infrastructure for heritage institutions – change of course
june , | lukas koster | data, infrastructure, library

in july i published the first post about our planning to realise a "coherent and future proof digital infrastructure" for the library of the university of amsterdam. in february i reported on the first results. as frequently happens, since then the conditions have changed, and naturally we had to adapt the direction we are following to achieve our goals. in other words: a change of course, of course. projects: i will leave aside the […] read more

infrastructure for heritage institutions – first results
february , | lukas koster | data, infrastructure, library

in july i published the post infrastructure for heritage institutions, in which i described our planning to realise a "coherent and future proof digital infrastructure" for the library of the university of amsterdam. time to look back: how far have we come? and time to look forward: what's in store for the near future? ongoing activities: i mentioned three "currently ongoing activities": monitoring and advising on infrastructural aspects of new projects; maintaining a structured dynamic overview […] read more

infrastructure for heritage institutions
july , | lukas koster | data, infrastructure, library

during my vacation i saw this tweet by liber about topics to address, as suggested by the participants of the liber conference in dublin. it shows a word cloud (yes, a word cloud) containing a large number of terms.
I list the ones I can read without zooming in (so the most suggested ones, I guess), more or less grouped thematically: open science, open data, open access, licensing, copyrights, linked open data, open education, citizen science, scholarly communication, digital humanities/DH, digital scholarship, research assessment, research […] Read more

Ten years linked open data (Lukas Koster; data, library). This post is the English translation of my original article in Dutch, published in META, the Flemish journal for information professionals. Ten years after the term "linked data" was introduced by Tim Berners-Lee, it appears to be time to take stock of the impact of linked data for libraries and other heritage institutions, in the past and in the future. I will do this from a personal historical perspective, as a library technology professional. […] Read more

Maps, dictionaries and guidebooks (Lukas Koster; data). Interoperability in heterogeneous library data landscapes: libraries have to deal with a highly opaque landscape of heterogeneous data sources, data types, data formats, data flows, data transformations and data redundancies, which I have earlier characterised as a "data maze". The level and magnitude of this opacity and heterogeneity vary with the number of content types and the number of services that the library is responsible for. Academic and national libraries are possibly dealing with more […] Read more

Standard deviations in data modeling, mapping and manipulation (Lukas Koster; data). Or: anything goes. What are we thinking? An impression of ELAG. This year's ELAG conference in Stockholm was one of many questions. Not only the usual questions following each presentation (always elicited in the form of yet another question: "Any questions?"), but also philosophical ones (Why? What?), and practical ones (What time? Where? How? How much?). And there were some answers too, fortunately.
This is my rather personal impression of the event. For a […] Read more

Analysing library data flows for efficient innovation (Lukas Koster; library). In my work at the Library of the University of Amsterdam I am currently taking a step forward by actually taking a step back: from a number of forefront activities in discovery, linked open data and integrated research information, towards a more hidden but also more fundamental enterprise in the area of data infrastructure and information architecture. All for a good cause, for in the end a good data infrastructure is essential for delivering high […] Read more

Looking for data tricks in Libraryland (Lukas Koster; library). IFLA Annual World Library and Information Congress, Lyon – "Libraries, Citizens, Societies: Confluence for Knowledge". After attending the IFLA Library Linked Data satellite meeting in Paris, I travelled to Lyon for the first three days of the IFLA Annual World Library and Information Congress. This year's theme, "Libraries, Citizens, Societies: Confluence for Knowledge", was named after the confluence or convergence of the rivers Rhône and Saône, where the city of […] Read more
Wary of Bitcoin? A guide to some other cryptocurrencies (Ars Technica)

You can fill your virtual pockets with Litecoin, PPCoin, or Freicoin. By Ian Steadman, wired.co.uk – May.

PPCoin (PPC)
Site: http://www.ppcoin.org/
Launch date: August
Number of PPCoins in circulation: unknown
Eventual PPCoin total: no cap
Market cap: unknown

[Image: PPCoin logo concepts. Credit: mjbmonetarymetals, Bitcoin forum]

Peer-to-Peer Coin, or PPCoin for short, presents itself as an improvement upon Bitcoin by changing one of the latter's fundamental ideas: proof-of-work.
In Bitcoin, as with all these coins, the supply of coins is stable and predetermined, and the rate at which they are generated decreases exponentially. The cost of mining has now risen such that people can't really use their home tablets, laptops, or desktops. Instead, they have to rely on application-specific integrated circuit (ASIC) mining—expensive, dedicated rigs that often cost thousands of dollars, running 24/7, just generating enough bitcoins to make the whole thing cost-effective. Some worry that this could lead to a security issue in the future. Harder mining means fewer people bother to dedicate the effort and time, and fewer miners means that the overall network of nodes shrinks. It's possible that the number could decline to such an extent that Bitcoin, as massive as it may become, could be open to a 51 percent attack on the blockchain.

The determining factor in which blockchain becomes the "real" one and which is discarded comes down to a simple rule: whichever blockchain is accepted by the largest number of mining nodes becomes the canonical one. In a 51 percent attack, someone takes over enough nodes to effectively dictate that their own version of the blockchain is accepted over the legitimate one. If that happens, it becomes possible to counterfeit bitcoins or (even worse) to spend them more than once. It's a serious threat—lots of currencies have been taken down in this way before they've even had a chance to stand on their own feet.

PPCoin's solution to this is to slightly alter what the blockchain records. In Bitcoin, a "proof-of-work" is attached to each block as it's generated. It verifies the ownership of the block by the person who mined it, and future transactions use it as an identifying marker. In PPCoin, a further piece of information is included: "proof-of-stake." Think of it this way: if you've had a single coin in your wallet for one day, you could say that you have one coin-day in your wallet.
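That coin-day bookkeeping is simple enough to sketch in code. The following is a toy illustration of the arithmetic just described, not PPCoin's actual implementation; the class and function names are invented:

```python
from dataclasses import dataclass

@dataclass
class WalletEntry:
    coins: float      # number of coins received together
    days_held: float  # time since those coins arrived in the wallet

def coin_days(entries):
    """Proof-of-stake weight: coins multiplied by time held, summed."""
    return sum(e.coins * e.days_held for e in entries)

# One coin held for a week is seven coin-days;
# three coins held for 30 days are 90 coin-days.
print(coin_days([WalletEntry(1, 7)]))   # 7
print(coin_days([WalletEntry(3, 30)]))  # 90
```

In the real protocol the timestamp is recorded with the coins, so the stake weight can be recomputed by any node; the sketch only shows the multiplication-and-sum rule.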
One coin in your wallet for a week gives you seven coin-days; in general, it's simply the number of coins multiplied by the time held, which is determined by adding a time stamp to the coin's information. It gives someone not just a proof-of-work, but a proof-of-stake. Beyond improving security—it's a lot harder to steal PPCoin than Bitcoin this way—this reduces the chance of a 51 percent attack by making the counterfeiting of coins extremely difficult: you have to gain 51 percent of all proofs-of-stake instead of 51 percent of mining power.

Another radical difference is that, unlike with Bitcoin, there is no final cap set on the number of PPCoins that will be generated. Instead, the combination of proof-of-work mining (as with Bitcoin) and proof-of-stake mining (which comes from using coins for transactions) gives the currency a steady growth in size that, its developers claim, equals roughly one percent per year.

As proof-of-work mining becomes more difficult and the number of miners drops off, it's expected that proof-of-stake mining will become the dominant form of mining in PPCoin, increasing the supply of coin-days rather than coins that can be spent. Currently, PPCoin has a centralized checking system in place to verify transactions, so it doesn't qualify as decentralized in the same way that Bitcoin does. This is, the PPCoin developers have said, only a temporary measure required until "the network matures." BTC-e and Cryptonit are two of the main exchanges that accept PPCoin.

Freicoin (FRC)
Site: http://freico.in/
Launch date: December
Number of Freicoins in circulation: unknown
Eventual Freicoin total: million
Market cap: unknown

[Image: Freicoin is inspired by the work of economist Silvio Gesell. Public domain]

Freicoin is an interesting alternative—with a distinctive philosophical framework—to other cryptocurrencies. It has a demurrage fee built into the system.
"Demurrage" isn't something we usually associate with money. It usually means the cost of holding something for a long time, like the price of storing gold. In this context, though, it's a deliberate tax on savings. Think of it as inflation controlled through taxes on a stable money supply, rather than an untaxed money supply that expands slowly but steadily, as we're perhaps used to with normal currencies. Freicoin developer Mark Friedenbach told wired.co.uk through e-mail what this means: "[Demurrage] can be thought of as causing freicoins to rot, reducing them in value by ~ percent per year. Now to answer the question as to why anybody would want that, you have to look at the economy as a whole. Demurrage causes consumers and merchants both to spend or invest coins they don't immediately need, as quickly as possible, driving up GDP. Further, this effect is continuous with little seasonal adjustment, so one can expect business cycles to be smaller in magnitude and duration. With demurrage, one saves money by making safe investments rather than letting money sit under the mattress."

If you look at the problem with Bitcoin's bubble, it's easy to see why this kind of thing would be attractive for someone wanting a currency with a stable, predictable value. Many pundits have argued that Bitcoin will always have a deeply unstable price because the money supply is limited and grows more slowly every minute—if you're holding bitcoins, and you know that they'll be worth more in a week than right now, your incentive is to hold on to your money instead of spending it. Nobody buys anything, and the Bitcoin economy slows to a halt. Demurrage compensates for this deflation—according to Friedenbach, you would be a fool to store large sums of money in Freicoin.
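The demurrage arithmetic is easy to make concrete. In this minimal sketch the 5 percent annual rate is an assumption chosen purely for illustration, not Freicoin's actual parameter, and the decay is compounded annually for simplicity:

```python
def after_demurrage(balance, years, annual_rate=0.05):
    """Balance remaining after demurrage reduces it by annual_rate
    each year (compounded annually here for simplicity)."""
    return balance * (1 - annual_rate) ** years

# 100 coins left sitting under the mattress steadily rot away:
for y in (1, 5, 10):
    print(y, round(after_demurrage(100, y), 2))
```

The point of the exercise is the direction, not the exact numbers: any balance held idle shrinks year over year, so holders are pushed to spend or invest rather than hoard.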
"Demurrage eliminates what is called 'time-value preference'—the unsustainable tendency of our culture to want things now rather than in the future, or at least spread out over time, such as the clear-cutting of forests versus sustainable harvesting. Demurrage acts to lessen the desires of the present in order to meet the needs of the future, as money is 'worth more' the longer you delay in receiving it. This leads to sustainable economic choices."

He cites real-world examples of demurrage fees, such as the "miracle of Wörgl." The deliberate use of demurrage as a way to force the circulation of money and stimulate the economy was first proposed by the economist and anarchist Silvio Gesell. During the Great Depression, the mayor of the Austrian town of Wörgl issued scraps of paper known as "Freigeld" that carried a demurrage fee. The experiment led to a rise in employment and the local GDP until it was stopped by the Austrian central bank.

Beyond demurrage, Freicoin works pretty much the same as the basic Bitcoin framework—new blocks roughly every 10 minutes, with the same difficulty adjustment and hashing algorithm. The final total of coins will be greater, however, at million. Freicoin's developers are also pushing for new features for their cryptocurrency to mark it out as different from the others, Friedenbach said. "We have created the Freicoin Foundation, which is a registered non-profit responsible for distributing percent of the initial coins to charitable or mutually beneficial projects. We are making a variety of technical improvements to the Bitcoin protocol, which may eventually find their way upstream."

"We are also working on new features that are probably too controversial to be worked into Bitcoin presently, such as the addition of 'Freicoin assets,' a mechanism for issuing your own tokens for whatever purpose (stocks, bonds, lines of credit, etc.) and trading these tokens on a peer-to-peer exchange.
We are also planning to extend Freicoin to include a variety of voting mechanisms in a proposal for distributed governance we are calling 'Republicoin.'"

Freicoin's radical demurrage concept has, unsurprisingly, marked it out for a lot of criticism. You can see on the developers' forum the number of discussions taking place about it (and at least one group has tried to take over Freicoin with its own new fork, removing the Freicoin Foundation subsidy). After all, while Bitcoin might be unstable because of price speculation and deflation, that same increase in value is what drew attention, and therefore users, to Bitcoin in the first place. The demurrage fee, taken from every transaction, is redistributed mainly to miners of new blocks. The Freicoin Foundation that Friedenbach mentioned is controversial because, for the first three years, a share of the demurrage fees will be siphoned off to this central fund to be sent forward to other people or organizations in a bid to get the currency traction outside its small community. However, this central fund goes against what many regard as the key point of cryptocurrencies: nobody is in control, and they are completely decentralized. Assurances that the foundation will be "democratic" and open for any Freicoin user to join and vote on the use of funds may not reassure some people. Freicoin is currently traded on Vircurex and Bter.

Others

There have already been several cryptocurrencies that were born, lived stuttering lives, and died because they offered nothing substantial beyond Bitcoin. Their names—SolidCoin, BBQCoin, Fairbrix, and GeistGeld, to name some—are now footnotes on the Bitcoin wiki. Their networks are ghosts, with nodes that flicker on and off only intermittently. Several of them were taken down by 51 percent attacks, while others simply never enjoyed support from a large enough community.
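The 51 percent attacks that killed some of these coins come down to the fork-choice rule described earlier: the chain backed by the most mining power wins. The gambler's-ruin analysis in the original Bitcoin whitepaper gives the probability that an attacker with fraction q of the total hash power ever catches up from z blocks behind; a sketch of that result:

```python
def catch_up_probability(q, z):
    """Probability that an attacker controlling fraction q of the hash
    power ever overtakes the honest chain from z blocks behind
    (Nakamoto's gambler's-ruin result: (q/p)**z for q < p)."""
    p = 1.0 - q                 # honest fraction of hash power
    if q >= p:
        return 1.0              # a majority attacker always catches up
    return (q / p) ** z

print(catch_up_probability(0.30, 6))  # small: a minority attacker fades
print(catch_up_probability(0.51, 6))  # 1.0: hence "51 percent attack"
```

The discontinuity at 50 percent is the whole story: below it the attacker's chance of rewriting history decays exponentially with confirmation depth; at or above it, success is certain given enough time.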
The balance between commodity speculation to drive price and merchants to drive transactions is a hard one to strike. Legitimate currencies that are still working, and which have fans and active support communities, include Namecoin, Terracoin, and Feathercoin. However, cryptocurrencies are fast-moving and unpredictable in the long term—it's hard to state with confidence that any one currency, including Bitcoin itself, will be here next year or even next week. Their instability and their unpredictability are part of the design. By all means, if you have the money, invest in something intangible like an algorithm in the hope that it will become a viable payment method. Just be aware that it's a risky way to try to make money—you might be better off sticking to the stock market.

This story originally appeared on Wired UK.
DSHR's Blog: Economies of Scale in Peer-to-Peer Networks

I'm David Rosenthal, and this is a place to discuss the work I'm doing in digital preservation. Tuesday, October.

In a recent IEEE Spectrum article entitled "Escape From the Data Center: The Promise of Peer-to-Peer Cloud Computing", Ozalp Babaoglu and Moreno Marzolla (BM) wax enthusiastic about the potential for peer-to-peer (P2P) technology to eliminate the need for massive data centers. Even more exuberance can be found in Natasha Lomas' TechCrunch piece "The Server Needs To Die To Save The Internet" (LM) about the MaidSafe P2P storage network. I've been working on P2P technology for many years, and although I believe it can be very useful in some specific cases, I'm far less enthusiastic about its potential to take over the Internet. Below the fold I look at some of the fundamental problems standing in the way of a P2P revolution, and in particular at the issue of economies of scale. After all, I've just written a post about the huge economies that Facebook's cold-storage technology achieves by operating at data-center scale.

Economies of scale

Back in April, discussing a vulnerability of the Bitcoin network, I commented: "Gradually, the economies of scale you need to make money mining Bitcoin are concentrating mining power in fewer and fewer hands. I believe this centralizing tendency is a fundamental problem for all incentive-compatible P2P networks. ... After all, the decentralized, distributed nature of Bitcoin was supposed to be its most attractive feature." In June, discussing Permacoin, I returned to the issue of economies of scale: increasing returns to scale (economies of scale) pose a fundamental problem for peer-to-peer networks that do gain significant participation.
One necessary design goal for networks such as Bitcoin is that the protocol be incentive-compatible, or as Ittay Eyal and Emin Gün Sirer (ES) express it: "the best strategy of a rational minority pool is to be honest, and a minority of colluding miners cannot earn disproportionate benefits by deviating from the protocol". They show that the Bitcoin protocol was, and still is, not incentive-compatible. Even if the protocol were incentive-compatible, the implementation of each miner would, like almost all technologies, be subject to increasing returns to scale. Since then I've become convinced that this problem is indeed fundamental. The simplistic version of the problem is this:

- The income to a participant in a P2P network of this kind should be linear in their contribution of resources to the network.
- The costs a participant incurs by contributing resources to the network will be less than linear in their resource contribution, because of the economies of scale.
- Thus the proportional profit margin a participant obtains will increase with increasing resource contribution.
- Thus the effects described in Brian Arthur's "Increasing Returns and Path Dependence in the Economy" will apply, and the network will be dominated by a few, perhaps just one, large participant.

The advantages of P2P networks arise from a diverse network of small, roughly equal resource contributors. Thus it seems that P2P networks which have the characteristics needed to succeed (by being widely adopted) also inevitably carry the seeds of their own failure (by becoming effectively centralized). Bitcoin is an example of this. Some questions arise:

- Does incentive-compatibility imply income linear in contribution?
- If not, are there incentive-compatible ways to deter large contributions?
- The simplistic version is, in effect, a static view of the network. Are there dynamic effects also in play?

Does incentive-compatibility imply income linear in contribution?

Clearly, the reverse is true.
If income is linear in, and solely dependent upon, contribution, there is no way for a colluding minority of participants to gain more than their just reward. If, however:

- income grows faster than linearly with contribution, a group of participants can pool their contributions, pretend to be a single participant, and gain more than their just reward;
- income grows more slowly than linearly with contribution, a group of participants that colluded to appear as a single participant would gain less than their just reward.

So it appears that income linear in contribution is the limiting case; anything faster is not incentive-compatible.

Are there incentive-compatible ways to deter large contributions?

In principle, the answer is yes. Arranging that income grows more slowly than contribution, and depends on nothing else, will do the trick. The problem lies in doing so.

[Chart. Source: bitcoincharts.com]

The actual income received by a participant is the value of the reward the network provides in return for the contribution of resources (the Bitcoin, in the Bitcoin case), less the costs incurred in contributing the resources (the capital and running costs of the mining hardware). As the value of bitcoins collapsed (as I write, BTC is about $ , down from about $ months ago and half its value in August), many smaller miners discovered that mining wasn't worth the candle. The network has to arrange not just that the reward grows more slowly than the contribution, but that it grows more slowly than the cost of the contribution to any participant. If there is even one participant whose rewards outpace their costs, Brian Arthur's analysis shows they will end up dominating the network. Herein lies the rub: the network does not know what an individual participant's costs, or even the average participant's costs, are, nor how they grow as the participant scales up their contribution.
So the network would have to err on the safe side and make rewards grow very slowly with contribution, at least above a certain minimum size. Doing so would mean few if any participants above the minimum contribution, making growth dependent entirely on recruiting new participants. This would be hard, because their gains from participation would be limited to the minimum reward. It is clear that mass participation in the Bitcoin network was fuelled by the (unsustainable) prospect of large gains for a small investment.

[Chart. Source: blockchain.info]

A network that assured incentive-compatibility in this way would not succeed, because the incentives would be so limited. A network that allowed incentives sufficient to motivate mass participation, as Bitcoin did, would share Bitcoin's vulnerability to domination by, as at present, two participants (pools, in Bitcoin's case).

Are there dynamic effects also in play?

As well as increasing returns to scale, technology markets exhibit decreasing returns through time. Bitcoin is an extreme example of this: investment in Bitcoin mining hardware has a very short productive life. The overall network hash rate has been doubling every few weeks, and therefore mining equipment has been losing half its production capability within the same time frame; after a run of such halvings, mining rigs have lost essentially all their value. This effect is so strong that it poses temptations for the hardware manufacturers that some have found impossible to resist. The FBI recently caught Butterfly Labs using hardware that customers had bought and paid for to mine on the company's own behalf for a while before shipping it to the customers.
They thus captured the most valuable week or so of the hardware's short useful life for themselves.

[Chart. Source: blockchain.info]

Even with technology improvement rates much lower than the Bitcoin network's hash-rate increase, such as Moore's Law or Kryder's Law, where the useful life of hardware is much longer than months, this effect can be significant. When new, more efficient technology is introduced, reducing the cost per unit of contribution to a P2P network, it does not become instantly available to all participants. As manufacturing ramps up, the limited supply goes preferentially to the manufacturer's best customers, who would be the largest contributors to the P2P network. By the time supply has increased so that smaller contributors can enjoy the lower cost per unit of contribution, the most valuable part of the technology's useful life is over. Early availability of new technology acts to reduce the costs of the larger participants, amplifying their economies of scale. This effect must be very significant in Bitcoin mining, as Butterfly Labs noticed. At historical Kryder rates it would be quite noticeable, since storage media service lives were short; at the much lower Kryder rates projected by the industry, storage media lifetimes will be extended and the effect correspondingly less.

Trust

BM admit that there are significant unresolved trust issues in P2P technology: "The people using such a cloud must trust that none of the many strangers operating it will do something malicious. And the providers of equipment must trust that the users won't hog computer time. These are formidable problems, which so far do not have general solutions. If you just want to store data in a P2P cloud, though, things get easier: the system merely has to break up the data, encrypt it, and store it in many places." Unfortunately, even for storage this is inadequate.
The system cannot trust the peers claiming to store the shards of the encrypted data, but must verify that they actually are storing them. This is a resource-intensive process. Permacoin's proposal, to re-purpose resources already being expended elsewhere, is elegant but unlikely to be successful. Worse, the verification process consumes not just resources but time. At each peer there is necessarily a window of time between successive verifications. During that time the system believes the peer has a good copy of the shard, but it might no longer have one.

Edge of the Internet

P2P enthusiasts describe the hardware from which their network is constructed in similar terms. Here is BM:

"The P2P cloud is made up of a diverse collection of different people's computers or game consoles or whatever."

And here is LM:

"Users of MaidSafe's network contribute unused hard drive space, becoming the network's nodes. It's that pooling — or, hey, crowdsourcing — of many users' spare computing resource that yields a connected storage layer that doesn't need to centralize around dedicated datacenters."

When the idea of P2P networks started, their model of the edge of the Internet was that there were a lot of desktop computers, continuously connected and powered-up, with low latency and no bandwidth charges, and with hard disks that were mostly empty. Since then, the proportion of the edge with these characteristics has become vanishingly small. The edge is now intermittently powered-up and connected, with bandwidth charges, and only small amounts of local storage.

Monetary rewards

This means that, if the network is to gain mass participation, the majority of participants cannot contribute significant resources to it; they don't have suitable resources to contribute. They will have to contribute cash.
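One common shape for the shard-verification problem described above is a challenge-response audit: the auditor sends a random nonce, and only a peer that still holds the shard can return the correct hash over nonce plus data. This is a simplified illustrative sketch, not Permacoin's actual scheme; note that the auditor must itself retain the shard (or a table of precomputed responses), which is part of the resource cost.

```python
import hashlib
import os

def make_challenge() -> bytes:
    """An unpredictable nonce, so peers cannot precompute answers."""
    return os.urandom(16)

def prove_storage(shard: bytes, nonce: bytes) -> str:
    """Run by the peer: computable only if it still holds `shard`."""
    return hashlib.sha256(nonce + shard).hexdigest()

def verify(reference_shard: bytes, nonce: bytes, proof: str) -> bool:
    """Run by the auditor, which must hold the shard (or precomputed
    nonce/answer pairs) to check the peer's response."""
    return proof == hashlib.sha256(nonce + reference_shard).hexdigest()
```

Between two such audits the peer could discard the shard undetected, which is exactly the window-of-time problem: verification gives assurance about the past audit, not the present moment.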
If most participants contribute cash rather than resources, there must in turn be exchanges, converting between the rewards for contributing resources and cash, allowing the mass of resource-poor participants to buy from the few resource-rich participants. Both Permacoin and MaidSafe envisage such exchanges, but what they don't seem to envisage is the effect on customers of the kind of volatility seen in the Bitcoin graph above. Would you buy storage from a service with this price history, or from Amazon? What exactly is the value to the mass customer of paying a service such as MaidSafe, by buying Safecoin on an exchange, instead of paying Amazon directly, that would overcome the disadvantage of the price volatility?

As we see with Bitcoin, a network whose rewards can readily be converted into cash is subject to intense attack, and attracts participants ranging from the sleazy to the criminal. Despite its admirably elegant architecture, Bitcoin has suffered from repeated vulnerabilities. Although P2P technology has many advantages in resisting attack, especially the elimination of single points of failure and of centralized command and control, it introduces a different set of attack vectors.

Measuring contributions

Discussion of P2P storage networks tends to assume that measuring the contribution a participant supplies in return for a reward is easy. A gigabyte is a gigabyte, after all. But compare two petabytes of completely reliable and continuously available storage: one connected to the outside world by a fiber connection to a router near the Internet's core, and the other connected via a cellular link. Clearly, the first has higher bandwidth, higher availability and lower cost per byte transferred, so its contribution to serving the network's customers is vastly higher. It needs a correspondingly greater reward.
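To make the metering problem concrete, here is a hypothetical scoring function that folds several service qualities into one reward number. Every weight in it is an invented "exchange rate", which is precisely the contested part: peers at different network locations would measure availability, bandwidth, and latency differently, and would dispute the weights.

```python
def contribution_score(size_gb: float,
                       reliability: float,    # 0.0 - 1.0
                       availability: float,   # 0.0 - 1.0
                       bandwidth_mbps: float,
                       latency_ms: float,
                       weights=(1.0, 2.0, 2.0, 0.01, 0.001)) -> float:
    """Reward units for a storage contribution. All weights are
    illustrative assumptions, not values from any real network."""
    w_size, w_rel, w_avail, w_bw, w_lat = weights
    quality = (w_rel * reliability + w_avail * availability
               + w_bw * bandwidth_mbps - w_lat * latency_ms)
    return size_gb * (w_size + quality)

# Identical raw capacity, very different service: a petabyte on fiber
# versus a petabyte behind a slow, high-latency cellular link.
fiber = contribution_score(1e6, 0.999, 0.999, 1000.0, 5.0)
cellular = contribution_score(1e6, 0.999, 0.9, 5.0, 200.0)
```

Under these assumed weights the fiber-connected petabyte scores several times higher than the cellular one, even though "a gigabyte is a gigabyte".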
In fact, networks would need to reward many characteristics of a peer's storage contribution as well as its size:
- reliability
- availability
- bandwidth
- latency

Measuring each of these parameters, and establishing "exchange rates" between them, would be complex, would lead to a very mixed marketing message, and would be the subject of disputes. For example, the availability, bandwidth and latency of a network resource depend on the location in the network from which the resource is viewed, so there would be no consensus among the peers about these parameters.

Conclusion

While it is clear that P2P storage networks can work, and can even be useful tools for small communities of committed users, the non-technical barriers to widespread adoption are formidable. They have been effective in preventing widespread adoption since the first wave of P2P enthusiasm, and the evolution of the Internet has since raised additional barriers.

Posted by David. Labels: networking, P2P, storage costs

Comments:

David. said...
Steve Randy Waldman has an interesting post, "Econometrics, open science, and cryptocurrency," arguing that the infrastructure for research should be a P2P network with a cryptocurrency-like consensus system. I have a lot of sympathy for his vision of a shared, preserved research infrastructure:

"Ultimately, we should want to generate a reusable, distributed, permanent, and ever-expanding web of science, including conjectures, verifications, modifications, and refutations, and reanalyses as new data arrives. Social science should become a reified public commons. It should be possible to build new analyses from any stage of old work, by recruiting raw data into new projects, by running alternative models on already cleaned-up or normalized data tables, by using an old model's estimates to generate inputs to simulations or new analyses."

But, alas, for the reasons set out above, it will be very difficult to implement this using P2P cryptocurrency techniques.
There will be significant real costs associated with running nodes in such a network. To motivate participation, there has to be some way to defray these costs, so there has to be an exchange between the cryptocurrency and a currency real enough to pay electricity bills and purchase hardware. "From each according to his ability, to each according to his need" isn't going to cut it on a scale big enough to matter in research.
October

David. said...
My comment on Waldman's post is here.
October

David. said...
On the general topic of digital currencies, the FT has an excellent corrective to misplaced enthusiasm, Izabella Kaminska's "From the annals of disruptive digital currencies past."
November

David. said...
One major problem with the centralization that is driven by economies of scale is that it reduces the resilience of the network to failures. A "data center" that was a major contributor to the overall Bitcoin hash rate was just destroyed by fire.
November

David. said...
I apologize for the mistake I made writing this post. I linked to, rather than captured, the pool size graph. Blockchain.info has moved the graph; it is now here. As I write, four pools (F2Pool, AntPool, GHash.io and BTCChina Pool) control a large share of the mining power. This is better than one or two pools, but, as Ittay Eyal points out in an important post: "the dismantling of overly large pools is one of the most important and difficult tasks facing the Bitcoin community." Pools are needed to generate consistent income, but: "[miners] can get steady income from pools well below [that size], and they have only little incentive to use very large pools; it's mostly convenience and a feeling of trust in large entities." Right now, much of the mining power is in pools larger than that.
January
The Distant Reader - COVID-19 literature to study carrel

Use this page to search a collection of journal articles on the topic of COVID-19. Once you have identified a subset of the articles you would like to read more closely, you can create a study carrel from the results. The index supports an expressive query language, described in a blog posting. The index is populated with the content of a data set called CORD-19, a collection of scholarly scientific journal articles. The data set is updated on a regular basis, and we strive to keep the index up-to-date. Stymied? Enter a word, a few words, or a few words surrounded by quote marks to begin.
Cultural Memory and the Peri-Pandemic Library
Posted in May by Bethany Nowviskie

[Late last month, I was honored to deliver the annual James E. McLeod Memorial Lecture on Higher Education at Washington University in St. Louis. I wasn't planning to post this one, as it feels decidedly half-baked to me. But now — two weeks later — the swift lifting of coronavirus restrictions in the United States (amid so much "back to normal" rhetoric on our campuses and in state and national politics) makes me think there might be some value in sharing. This is the beginning of a project I hope to get back to.]

This is a talk about the role of libraries, museums, and archives as cultural memory institutions now, at our present juncture, which I am calling peri-pandemic: that is, midstream and pushing through. But it's also a talk about how institutions like academic and public libraries are made up of individual people — people who are themselves in a peri-pandemic moment, laying down memories, processing trauma, revivifying the past, and projecting possible futures for themselves and the people and planet they love. The personal and the organizational are always intimately connected for knowledge workers, and I will spend some time exploring that connection today. I bring this concept to the McLeod Lecture, especially, because I see a particular need for new, more conscious and explicit attention to this arising not just in libraries but throughout higher ed. Attention to the connection, that is, of the deeply personal with the organizational and technical.
Of intimacy with structure — and how we accomplish such a dangerous and potentially generative and healing linkage, when we know full well that our institutions and systems have often chewed up and spat out the individual — and that they've been devised to center and memorialize only certain kinds of bodies and feelings. The humane connection of intimacy with structure is the connection of the lived pasts and present experiences of everyone with the social and environmental futures that will happen to no-one by accident: the futures that are the responsibility of our institutions of cultural memory and higher education to design.

True confessions: these are all just some tentative thoughts from me at the outset of what I sense is a larger project. I want to dig into the role of the peri-pandemic library and its inhabitants in bridging affect and societal impact through the work of cultural memory. This is because — as we look increasingly clear-eyed on the massive structural and systemic challenges that will face us in the decades to come — it becomes evident that an impulse, in higher ed and cultural heritage leadership, to stay on one side or another of that personal-to-organizational equation diminishes both.

Challenges of linking compassion with equity and systemic reform have been brought into new focus by our people's simultaneous experiences of loneliness and over-exposure throughout the pandemic — whether that is a fearsome viral exposure for our on-site library skeleton crews, or the kind of "exposure" that leaves remote workers feeling fatigued and sick of seeing their own faces and private homes displayed after a long day on Zoom. We must characterize the experience of the pandemic accurately in order to appreciate its impact — including the impact of grief — on our knowledge systems. Library staff (like students and scholars) are persisting in their work through a global mass-death event.
As time goes by and we look with more optimism to the future, I sense those of us in higher ed administration acknowledging this present reality less often. I fear that's a kind of well-intentioned gaslighting that will result in alienating, not inspiring, our campus communities. And in this country, of course, the coronavirus event is happening in conjunction with our ongoing racial justice crisis, in what's been called a "twin pandemic" — which even those of us not experiencing it personally and directly every moment of every day can feel happening on our campuses and in our towns, and see daily on our screens in the form of attacks on Asian Americans in the streets, continuing horrors at our southern border, and extrajudicial death sentences passed by police on Black children and adults. In Dionne Brand's words: "I know, as many do, that I've been living a pandemic all my life; it is structural rather than viral; it is the global state of emergency of antiblackness. What the COVID-19 pandemic has done is expose even further the endoskeleton of the world."

These are the challenges that have our students across disciplines calling out for a broad-scale re-memorialization of the past — for a fuller past told by new voices, for statues to be pulled down and buildings re-named. It has them calling for reparations for harms already done, and for the implementation of better social systems — the creation of a structural otherwise, centered not just on policy, but around deeply personal human flourishing and joy. Continue reading "Cultural Memory and the Peri-Pandemic Library"

Posted in administrivia, higher ed · Tagged embodied, libraries

Foreword (to the Past)
Posted in October by Bethany Nowviskie

Congratulations to Melissa Terras and Paul Gooding on the publication of an important new collection of essays entitled Electronic Legal Deposit: Shaping the Library Collections of the Future!
This volume takes a global outlook on challenges and successes in preserving digital information, and stems from their Digital Library Futures AHRC project, which first analyzed the impact of electronic legal deposit legislation on academic libraries and their users in the UK. More from Melissa here, including "An Ark to Save Learning from Deluge? Reconceptualising Legal Deposit after the Digital Turn," an OA version of the opening chapter she and Paul contributed to the collection. I was honored to be asked to write a foreword to the book, which I share here, under Facet Publishing's green OA agreement, as my own author's last copy of a single chapter from an edited collection. I thought I'd post it particularly now, as next week marks not only World Digital Preservation Day, but another highly significant election day in the United States. We are four years on from the moment I describe below…

On that November morning, I looked out over a Milwaukee ballroom crowded with librarians, archivists, and specialists in digital preservation. Some were pensive. Many were weeping. Others seemed stricken. My audience had gathered for the first joint conference of the Digital Library Federation (DLF, the US-based nonprofit organization I then directed) with its new partner, the National Digital Stewardship Alliance (NDSA) — a cross-industry group that had recently come under DLF's wing from its place of genesis at the Library of Congress. We were strangers and friends, largely though not exclusively American, united in a community of practice and the common cause of a dedication to the future of libraries, archives, and their holdings and information services in the digital age. But it suddenly felt as if we didn't know what information was, and whether — despite all our efforts, expertise, and the shared infrastructure that our memory institutions represented — its future could be made secure.
The unexpected outcome of the US presidential election, announced in the wee hours the night before, had cast a pall over this professional audience that crossed party lines. How could so many confident, data-driven predictions have been so wrong? What shared social understandings — built from the seeming common landscape of ubiquitous digital information that we had met to manage and survey — had never, in fact, been shared, or were even commonly legible at all? And what evidentiary traces of this time would remain, in a political scene of post-truth posturing, the devaluation of expert knowledge, and the willingness of our new authorities — soon to become as evident on federal websites as in press conferences and cable news punditry — to revise and resubmit the historical record?

The weeks and months that followed, for DLF and NDSA members, were filled with action. While the End of Term Web Archive project sprang to its regular work of harvesting US federal domains at moments of presidential transition, reports that Trump administration officials had ordered the removal of information on climate change and animal welfare from the websites of the Environmental Protection Agency and the US Department of Agriculture fostered a fear of the widespread deletion of scientific records, and prompted emergency 'data rescue' download parties. A new DLF Government Records Transparency and Accountability working group was launched.
Its members began watch-dogging preparations for the US Census and highlighting House and Senate bills meant to curtail scientific and demographic data creation; scrutinizing proposed changes to the records retention schedules of federal agencies and seeking ways to make the arcanum of their digital preservation workflows more accessible to the general public; and — amid new threats of the deportation of immigrants and the continued rise of violent nationalism — asking crucial questions about what electronic information should be made discoverable and accessible, for the protection of vulnerable persons. The Social Sciences Research Council convened a meeting on challenges to the digital preservation of documents of particular value to historians, economists, cultural anthropologists, and other social scientists, and the PEGI Project — focusing on the preservation of electronic government information — commissioned a wide-ranging report on at-risk, born-digital information meant to be held by US federal depository libraries and other cultural memory institutions for long-term public access and use. Over time, reflective, pedagogical, and awareness-raising projects like Endangered Data Week emerged; ties among the NDSA and international organizations like the UK-based Digital Preservation Coalition were strengthened; and conversations on college campuses (fueled by the Cambridge Analytica scandal and the work of scholars of race, technology, and social media like Safiya Noble and Siva Vaidhyanathan) turned more squarely to data ethics and algorithmic literacy. Frenetic data rescue parties gave over to the more measured advocacy and storytelling approach of the Data Refuge movement.
And in the UK, an AHRC-funded 'Digital Library Futures' project led by Paul Gooding and Melissa Terras (the seed of this edited collection) offered a golden opportunity to reflect — in the light of altered global understandings of the preservation and access challenges surrounding digital information — on the parliamentary Legal Deposit Libraries (Non Print Works) Regulations, which extended collecting practices dating to the early modern period to new media formats beyond the book.

You hold in your hands (or view on your screens, or listen to through e-readers, or encounter in some other way I can't yet foresee) an important and timely volume. It is well balanced between reflection-and-outlook and practice-and-method in what our editors call the 'contested space' of e-legal deposit — taking on the international and very long-term consequences of our present-day conception, regulation, assembly, positioning, and use of library-held digital collections. In other words, the essays assembled here cross space and time. The editors take a necessarily global view in bringing together a broad array of national approaches to the legal deposit of materials that already circulate in world-wide networks. And while the authors they've invited to contribute certainly take a long view of digital information, they also frequently address, head-on, the ways that electronic legal deposit forces our attention not just on posterity, but on the here-and-now of what media consumption means and how it works in the digital age. Rather than asking us to rest our imaginations on a far-future prospect in which reading is conducted as it ever was in print (was any such act, as Jerome McGann would ask, self-identical?), the authors of these essays, collectively, assert that the kaleidoscopic mediations of e-legal deposit show us we've never really known what reading is.
The best thinkers on libraries question the very assumptions that our memory institutions rest upon, while elevating and honoring both their promise and the centuries of labor and careful (if not always disinterested or benign) intent that have made them what they are. Melissa Terras and Paul Gooding are among the best, and the perspectives they have assembled here — from publishers, eminent librarians and archivists, technologists, organizers, and scholars — make this edited collection an essential contribution to the literature on digital preservation. It is a necessary book that grapples with legal, practical, technical, and conceptual problems: with the distinctive visions and values of libraries; with the necessarily concomitant development of policies and platforms; and even with the very nature of our documentary heritage, at a moment when print-era logics break down. What I most appreciate is that this book — like the notion of e-legal deposit itself — calls for careful consideration both of present-day services and of research possibilities not yet dreamt of. In this, it serves the true mission of legal deposit libraries: to be a stable bridge between a past that is perpetually constructed by our acts of preservation and erasure — and the many futures we may mediate but can barely imagine.

Posted in higher ed, infrastructure

A Pledge: Self-Examination and Concrete Action in the JMU Libraries
Posted in June by Bethany Nowviskie

"The beauty of anti-racism is that you don't have to pretend to be free of racism to be an anti-racist. Anti-racism is the commitment to fight racism wherever you find it, including in yourself. And it's the only way forward." — Ijeoma Oluo, author of So You Want to Talk About Race.

Black lives matter. Too long have we allowed acts of racism and deeply ingrained, institutionalized forces of white supremacy to devalue, endanger, and grievously harm Black people and members of other minoritized and marginalized groups.
State-sanctioned violence and racial terror exist alongside slower and more deep-seated forces of inequality, anti-Blackness, colonization, militarization, class warfare, and oppression. As members of the JMU Libraries Dean's Council and Council on Diversity, Equity, and Inclusion, we acknowledge these forces to be both national and local, shaping the daily lived experiences of our students, faculty, staff, and community members. As a blended library and educational technology organization operating within a PWI, the JMU Libraries both participates in and is damaged by the whiteness and privilege of our institutions and fields. Supporting the James Madison University community through a global pandemic has helped us see imbalances, biases, and fault lines of inequality more clearly. We pledge self-examination and concrete action.

Libraries and educational technology organizations hold power, and can share or even cede it. As we strive to create welcoming spaces and services for all members of our community, we assert the fundamental non-neutrality of libraries and the necessity of taking visible and real action against the forces of racism and oppression that affect BIPOC students, faculty, staff, and community members. Specifically, and in order to "fight racism wherever [we] find it, including in [ourselves]," we commit to:

- Listen to BIPOC and student voices, recognizing that they have long spoken on these issues and have too often gone unheard.
- Educate ourselves and ask questions of all the work we do. ("To what end? To whose benefit? Whose comfort is centered? Who has most agency and voice? Who is silenced, ignored, or harmed? Who is elevated, honored, and made to feel safe? Who can experience and express joy?")
- Set public and increasingly measurable goals related to diversity, equity, inclusion, and anti-racism, so that we may be held accountable.
- Continue to examine, revise, and augment our collections, services, policies, spending patterns, and commitments, in order to institutionalize better practices and create offerings with enduring impact.
- Learn from, and do better by, our own colleagues.

We are a predominantly white organization, and it is likely that we will make mistakes as we try to live up to this pledge. When that happens, we will do the work to learn and rectify. We will apologize, examine our actions and embedded power structures, attempt to mitigate any harm caused by our actions, and we will do better. Continue reading "A Pledge: Self-Examination and Concrete Action in the JMU Libraries"

Posted in higher ed

Change Us, Too
Posted in June by Bethany Nowviskie

[The following is a brief talk I gave at the opening plenary of RBMS, a meeting of the Rare Books and Manuscripts Section of the ACRL/ALA. This year's theme was "Response and Responsibility: Special Collections and Climate Change," and my co-panelists were Frances Beinecke of the Natural Resources Defense Council and Brenda Ekwurzel of the Union of Concerned Scientists. Many thanks to conference chairs Ben Goldman and Kate Hutchens, session chair Melissa Hubbard, and outgoing RBMS chair Shannon Supple. The talk draws together some of my past writings, all of which are linked to and freely available. Images in my slide deck, as here, were by Catherine Nelson.]

Six years ago, I began writing about cultural heritage and cultural memory in the context of our ongoing climate disaster. Starting to write and talk publicly was a frank attempt to assuage my terror and my grief — my personal grief at past and coming losses in the natural world, and the sense of terror growing inside me, both at the long-term future of the digital and physical collections in my charge, and at the unplanned-for environmental hardships and accelerating social unrest my two young children, then six and nine years old, would one day face.
I latched, as people trained as scholars sometimes do, onto a set of rich and varied theoretical frameworks. These were developed by others grappling with the exact same existential dread: some quite recent, some going back decades — demonstrating, for me, not just the continuity of scientific agreement on the facts of climate change and the need for collective action (as my co-panelists have demonstrated), but scholarly and artistic agreement on the generative value of responses from what would become the environmental humanities and from practices I might call green speculative design. The concepts and theories I lighted on, however, served another function. They allowed me simultaneously to elevate and to sublimate many of my hardest-hitting feelings. In other words, I put my fears into a linguistic machine labeled "the Anthropocene" — engineered to extract angst and allow me to crank out historicized, lyrical melancholy on the other end. Since then, I've also become concerned that, alongside and through the explicit, theoretical frameworks I found in the literature, I leaned unconsciously — as cis-gender white women and other members of dominant groups almost inevitably do — on implicit frameworks of white supremacy, on my gender privilege, and on the settler ideologies that got us here in the first place, all of which uphold and support the kind of emotional and fundamentally self-centered response I was first disposed to make. I see more clearly now that none of this is about my own relatively vastly privileged children and well-tended collections — except insofar as both of them exist within broader networks and collectives of care, as one achingly beloved and all-too-transitory part.
Please don't misunderstand me: it remains absolutely vital that we honor our attachments, and acknowledge the complexity and deep reality of our emotional responses to living through the sixth great mass extinction of life on this planet — vital to compassionate teaching and leadership, to responsible stewardship, and to defining value systems that help us become more humane in the face of problems of inhuman scale. Grappling with our emotions as librarians and archivists (and as curators, conservators, collectors, community organizers, scholars, and scientists) will be a major part of the work of this conference. It is also vital to doing work that appreciates its own inner standing point, and uses its positionality to promote understanding and effect change. But I've felt my own orientation changing. For me, all of this is, every day, less and less about my feelings on special collections and climate change — except to the degree that those feelings drive me toward actions that have systemic impact and are consonant with a set of values we may share. So this is a brief talk that will try to walk you (for what it's worth) along the intellectual path I've taken over the past six years — in the space of about sixteen minutes. Continue reading "Change Us, Too"

Posted in design, infrastructure · Tagged embodied

From the Grass Roots
Posted in March by Bethany Nowviskie

[This is a cleaned-up version of the text from which I spoke at the conference of Research Libraries UK, held at the Wellcome Collection in London last week. I'd like to thank my wonderful hosts for an opportunity to reflect on my time at DLF. As I said to the crowd, I hope the talk offers some useful — or at least productively vexing — ideas.]
at a meeting in which the status of libraries as “neutral spaces” has been asserted and lauded, i feel obligated to confess: i’m not a believer in dispassionate and disinterested neutrality—not for human beings nor for the institutions that we continually reinforce or reinvent, based on our interactions in and through them. my training as a humanities scholar has shown me all the ways that it is in fact impossible for us to step wholly out of our multiple, layered, subjective positions, interpretive frameworks, and embodied existence. it has also taught me the dangers of assuming—no matter how noble our intentions—that socially constructed institutions might likewise escape their historical and contemporary positioning, and somehow operate as neutral actors in neutral space. happily, we don’t need neutrality to move constructively from independent points of view to shared understandings and collective action. there are models for this. the ones i will focus on today are broadly “dh-adjacent,” and they depend, sometimes uncomfortably, on the vulnerability, subjectivity, and autonomy of the people who engage with them—foregrounding the ways that individual professional roles intersect with personal lives as they come together around shared missions and goals. and as i discuss them, please note that i’ll be referring to the digital humanities and to digital librarianship somewhat loosely—in their cultural lineaments—speaking to the diffuse and socially constructed way both are practiced on the ground. in particular, i’ll reference a dh that is (for my purposes today) relatively unconcerned with technologies, methods, and objects of study. it’s my hope that shifting our focus—after much fruitful discussion, this week, of concrete research support—to a digital humanities that can also be understood as organizational, positional, and intersubjective might prompt some structural attunement to new ways of working in libraries. 
and i do this here, at a consortial gathering of “the most significant research libraries in the uk and ireland,” because i think that self-consciously expanding our attention in library leadership from the pragmatic provision of data, platforms, skills-teaching, and research support for dh, outward to its larger organizational frame is one way of cracking open serious and opportune contributions by people who would not consider themselves digital humanists at all. this likely includes many of you, your colleagues in university administration across areas and functions, and most members of your libraries’ personnel. such a change in focus invites all of us to be attentive to the deeper and fundamentally different kinds of engagement and transformation we might foster through dh as a vector and perhaps with only simple re-inflections of the resources we already devote to the field. it could also open our organizations up to illuminating partnerships with communities of practice who frankly don’t give a fig about academic disciplinary labels or whether they are or are not “doing dh.” i also speak to library leaders because my call is not for work to be done by individual scholars as researchers and teachers alone, nor even by small teams of librarians laboring in support of the research and cultural heritage enterprise—but rather by our fully-engaged institutions as altered structures of power. 
bibliographic wilderness

logging uri query params with lograge

the lograge gem for taming rails logs by default will log the path component of the uri, but leave out the query string/query params. for instance, perhaps you have a url to your app /search?q=libraries. lograge will log something like:

method=get path=/search format=html…

the q=libraries part is completely left out of the log. i kinda want that part, it’s important.

the lograge readme provides instructions for “logging request parameters”, by way of the params hash. i’m going to modify them slightly to:

use the more recent custom_payload config instead of custom_options. (i’m not certain why there are both, but i think mostly for legacy reasons, and the newer custom_payload is what you should reach for.)

use rails’ parameter filtering (filtered_parameters), and strip out routing keys like controller and action, which aren’t part of the query string and would otherwise clutter every log line.

ok. the params hash isn’t exactly the same as the query string, it can include things not in the url query string (like controller and action, that we have to strip above, among others), and it can in some cases omit things that are in the query string. it just depends on your routing and other configuration and logic.
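concretely, the readme’s example with those modifications might look roughly like this (a sketch: filtered_parameters is rails’ request method that applies your parameter filtering, and the list of keys to strip is my own guess at what you won’t want in every line):

```ruby
# config/environments/production.rb -- sketch, adapted from the
# lograge readme's "logging request parameters" example
config.lograge.custom_payload do |controller|
  # routing keys that show up in the params hash but not the query string
  internal_keys = %w[controller action format id]
  { params: controller.request.filtered_parameters.except(*internal_keys) }
end
```

with that in place, a request to /search?q=libraries should log something like params={"q"=>"libraries"}.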
the params hash itself is what default rails logs… but what if we just log the actual url query string instead? benefits:

it’s easier to search the logs for an exact, specific known url (which can get more complicated, like /search?q=foo&range%5byear_facet_isim%5d%5bbegin%5d= &source=foo or something). which is something i sometimes want to do: say i got a url reported from an error tracking service and now i want to find that exact line in the log. i actually like having the exact actual url (well, starting from path) in the logs.

it’s a lot simpler, we don’t need to filter out controller/action/format/id etc.

it’s actually a bit more concise? and part of what i’m dealing with in general using lograge is trying to reduce my bytes of logfile for papertrail!

drawbacks?

if you had some kind of structured log search (i don’t at present, but i guess could with papertrail features by switching to json format?), it might be easier to do something like “find a /search with q=foo and source=ef” without worrying about other params.

to the extent that the params hash can include things not in the actual url, is that important to log like that? ….?

curious what other people think… am i crazy for wanting the actual url in there, not the params hash?

at any rate, it’s pretty easy to do. note we use filtered_path rather than fullpath to again take account of rails parameter filtering, and thanks again /u/ezekg:

config.lograge.custom_payload do |controller|
  { path: controller.request.filtered_path }
end

this is actually overwriting the default path to be one that has the query string too:

method=get path=/search?q=libraries format=html ...

you could of course add a different key, fullpath, instead, if you wanted to keep path as it is, perhaps for easier collation in some kind of log analyzing system that wants to group things by the same path invariant of query string. i’m gonna try this out!

meanwhile, on lograge… as long as we’re talking about lograge….
based on commit history, history of issues and pull requests… the fact that ci isn’t currently running (travis.org grr) and doesn’t even try to test on rails . + (although lograge seems to work fine)… one might worry that lograge is currently un/under-maintained…. no comment on a gh issue filed in may asking about project status. it still seems to be one of the more popular solutions for taming rails’ kind-of-out-of-control logs. it’s mentioned for instance in docs from papertrail and honeybadger, and many many other blog posts. what will its future be?

looking around for other possibilities, i found semantic_logger (rails_semantic_logger). it’s got similar features. it seems to be much more maintained. it’s got a respectable number of github stars, although not nearly as many as lograge, and it’s not featured in blogs and third-party platform docs nearly as much. it’s also a bit more sophisticated and featureful, for better or worse. mainly i’m thinking of how it tries to improve app performance by moving logging to a background thread. this is neat… and also can lead to a whole new class of bug, mysterious warning, or configuration burden. for now i’m sticking to the more popular lograge, but i wish it had ci up that was testing with rails . , at least!

incidentally, trying to get rails to log more compactly like both lograge and rails_semantic_logger do… is somewhat more complicated than you might expect, as demonstrated by the code in both projects that does it! semantic_logger especially is hundreds of lines of somewhat baroque code split across several files. a refactor of logging around rails (i think?) to use activesupport::logsubscriber made it possible to customize rails logging like this (although i think both lograge and rails_semantic_logger still do some monkey-patching too!), but in the end didn’t make it all that easy or obvious or future-proof.
this may discourage too many other alternatives for the initial primary use case of both lograge and rails_semantic_logger — turning a rails action into one log line, with a structured format.

jrochkind general leave a comment august , august ,

notes on cloudfront in front of rails assets on heroku, with cors

heroku really recommends using a cdn in front of your rails app’s static assets — unlike in non-heroku circumstances, where a web server like nginx might be taking care of it, on heroku static assets will otherwise be served directly by your rails app, consuming limited/expensive dyno resources. after evaluating a variety of options (including some heroku add-ons), i decided aws cloudfront made the most sense for us — simple enough, cheap, and we are already using other direct aws services (including s3 and ses).

while heroku has an article on using cloudfront, which even covers rails specifically, and even cors issues specifically, i found it a bit too vague to get me all the way there. and while there are lots of blog posts you can find on this topic, i found many of them outdated (rails has introduced new api; cloudfront has also changed its configuration options!), or otherwise spotty/thin. so while i’m not an expert on this stuff, i’m going to tell you what i was able to discover, and what i did to set up cloudfront as a cdn in front of rails static assets running on heroku — although there’s really nothing at all specific to heroku here, if you have any other context where rails is directly serving assets in production. first how i set up rails, then cloudfront, then some notes and concerns.

btw, you might not need to care about cors here, but one reason you might is if you are serving any fonts (including font-awesome or other icon fonts!) from rails static assets.
rails setup

in config/environments/production.rb:

# set heroku config var RAILS_ASSET_HOST to your cloudfront
# hostname, will look like `xxxxxxxx.cloudfront.net`
config.asset_host = ENV['RAILS_ASSET_HOST']

config.public_file_server.headers = {
  # cors:
  'access-control-allow-origin' => "*",
  # tell cloudfront to cache a long time:
  'cache-control' => 'public, max-age= '
}

cloudfront setup

i changed some things from default. the only one that’s absolutely necessary — if you want cors to work — seemed to be changing allowed http methods to include options. click on “create distribution”. all defaults except:

origin domain name: your heroku app host, like app-name.herokuapp.com

origin protocol policy: switch to “https only”. seems like a good idea to ensure secure traffic between cloudfront and origin, no?

allowed http methods: switch to get, head, options. in my experimentation, necessary for cors from a browser to work — which aws docs also suggest.

cached http methods: click “options” too, now that we’re allowing it. i don’t see any reason not to.

compress objects automatically: yes. sprockets is creating .gz versions of all your assets, but they’re going to be completely ignored in a cloudfront setup either way. ☹️ (is there a way to tell sprockets to stop doing it? who knows, not me, it’s so hard to figure out how to reliably talk to sprockets.) but we can get what it was trying to do by having cloudfront compress stuff for us, which seems like a good idea; google pagespeed will like it, etc. i noticed by experimentation that cloudfront will compress css and js (sometimes with brotli, sometimes gz, even with the same browser; don’t know how it decides, don’t care), but is smart enough not to bother trying to compress a .jpg or .png (which already has internal compression).

comment field: if there’s a way to edit it after you create the distribution, i haven’t found it, so pick a good one!
notes on cors

aws docs here and here suggest that for cors support you also need to configure the cloudfront distribution to forward additional headers — origin, and possibly access-control-request-headers and access-control-request-method. which you can do by setting up a custom “cache policy”. or maybe instead by setting the “origin request policy”. or maybe instead by setting custom cache header settings differently using the “use legacy cache settings” option. it got confusing — and none of these settings seemed to be necessary for me for cors to work fine, nor could i see any of these settings making any difference in cloudfront behavior or what headers were included in responses. maybe they would matter more if i were trying to use a more specific access-control-allow-origin than just setting it to *? but about that….

if you set access-control-allow-origin to a single host, mdn docs say you have to also return a vary: origin header. easy enough to add that to your rails config.public_file_server.headers. but i couldn’t get cloudfront to forward/return this vary header with its responses. trying all manner of cache policy settings, referring to aws’s quite confusing documentation on the vary header in cloudfront and trying to do what it said — couldn’t get it to happen.

and what if you actually need more than one allowed origin? per spec, access-control-allow-origin (as again explained by mdn) can’t just include more than one host; the header is only allowed one: “if the server supports clients from multiple origins, it must return the origin for the specific client making the request.” and you can’t do that with rails’ static/global config.public_file_server.headers; we’d need to set up rack-cors instead, or something else. so i just said, eh, * is probably just fine. i don’t think it actually involves any security issues for rails static assets to do this? i think it’s probably what everyone else is doing?
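for reference, a multiple-origin setup with rack-cors would look roughly like this (a sketch following the rack-cors readme’s pattern; the origins and resource path are hypothetical placeholders):

```ruby
# Gemfile: gem "rack-cors"
# sketch for config/application.rb; the origins and resource path
# below are placeholders, not real hosts.
config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins "https://app-one.example.com", "https://app-two.example.com"
    # only static assets need cors headers in this setup
    resource "/assets/*", headers: :any, methods: [:get, :head, :options]
  end
end
```

rack-cors echoes back whichever allowed origin made the request, which is exactly the per-client behavior that spec language requires.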
the only setup i needed for this to work was setting cloudfront to allow the options http method, and setting rails config.public_file_server.headers to include 'cache-control' => 'public, max-age= '.

notes on cache-control max-age

a lot of the existing guides don’t have you setting config.public_file_server.headers to include 'cache-control' => 'public, max-age= '. but without this, will cloudfront actually be caching at all? if with every single request to cloudfront, cloudfront makes a request to the rails app for the asset and just proxies it — we’re not really getting much of the point of using cloudfront in the first place, to avoid the traffic to our app!

well, it turns out yes, cloudfront will cache anyway. maybe because of the cloudfront default ttl setting? my default ttl was left at the cloudfront default, seconds (one day). so i’d think that maybe cloudfront would be caching resources for a day when i’m not supplying any cache-control or expires headers? in my observation, it was actually caching for less than this though. maybe an hour?

(want to know if it’s caching or not? look at the headers returned by cloudfront. one easy way to do this? curl -IXGET https://whatever.cloudfront.net/my/asset.jpg, and you’ll see a header of either x-cache: miss from cloudfront or x-cache: hit from cloudfront.)

of course, cloudfront doesn’t promise to cache for as long as it’s allowed to; it can evict things for its own reasons/policies before then, so maybe that’s all that’s going on. still, rails assets are fingerprinted, so they are cacheable forever, so why not tell cloudfront that? maybe more importantly, if rails isn’t returning a cache-control header, then cloudfront isn’t either to actual user-agents, which means they won’t know they can cache the response in their own caches, and they’ll keep requesting/checking it on every reload too, which is not great for your far too large css and js application files!
so, i think it’s probably a great idea to set the far-future cache-control header with config.public_file_server.headers as i’ve done above. we tell cloudfront it can cache for the max-allowed-by-spec one year, and this also (i checked) gets cloudfront to forward the header on to user-agents, who will then also know they can cache.

note on limiting the cloudfront distribution to just static assets?

the cloudfront distribution created above will actually proxy/cache our entire rails app; you could access dynamic actions through it too. that’s not what we intend it for, our app won’t generate any urls to it that way, but someone could. is that a problem? i don’t know? some blog posts suggest limiting it to only being willing to proxy/cache static assets instead, but this is actually a pain to do, for a couple reasons:

cloudfront has changed their configuration for “path patterns” since many blog posts were written (unless you are using “legacy cache settings” options), such that i’m confused about how to do it at all, if there’s even a way anymore to get a distribution to stop caching/proxying/serving anything but a given path pattern.

modern rails with webpacker has static assets at both /assets and /packs, so you’d need two path patterns, making it even more confusing. (why, rails, why? why aren’t packs just at public/assets/packs so all static assets are still under /assets?)

i just gave up on figuring this out and figured it isn’t really a problem that cloudfront is willing to proxy/cache/serve things i am not intending for it? is it? i hope?

note on the rails asset_path helper and asset_host

you may have realized that rails has both asset_path and asset_url helpers for linking to an asset. (and similar helpers with dashes instead of underscores in sass, probably with different implementations, via sass-rails.) normally asset_path returns a relative url without a host, and asset_url returns a url with a hostname in it.
since using an external asset_host requires we include the host with all asset urls to properly target the cdn… you might think you have to stop using asset_path anywhere and just use asset_url… you would be wrong. it turns out that if config.asset_host is set, asset_path starts including the host too. so everything is fine using asset_path. not sure if at that point it’s a synonym for asset_url? i think not entirely, because i think in fact once i set config.asset_host, some of my uses of asset_url actually started erroring and failing tests? and i had to actually only use asset_path? in ways i don’t really understand and can’t explain? ah, rails.

activesupport::cache via activerecord (note to self)

there are a variety of things written to use flexible back-end key/value datastores via the activesupport::cache api. for instance, say, activejob-status. i have sometimes in the past wanted to be able to use such things storing the data in an rdbms — say, via activerecord. make a table for it. sure, this won’t be nearly as fast or “scalable” as, say, redis, but for so many applications it’s just fine. and i often avoid using a feature at all if it is going to require me to add another service (like another redis instance). so i’ve considered writing an activesupport::cache adapter for activerecord, but never really gotten around to it, so i keep avoiding using things i’d be trying out if i had it….

well, today i discovered moneta, the ruby gem that’s a key/value store swiss army knife. look, it has an activesupport::cache adapter, so you can use any moneta-supported store via the activesupport::cache api. and then if you want to use an rdbms as your moneta-supported store, you can do it through activerecord or sequel. great, i don’t have to write the adapter after all, it’s already been done! assuming it works out okay, which i haven’t actually checked in practice yet.
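from a quick read of the moneta readme, wiring this up looks roughly like the following (a sketch: the table name and the rails cache_store wiring are my assumptions, and as noted i haven’t verified any of this in practice):

```ruby
# Gemfile: gem "moneta"
# sketch: an ActiveSupport::Cache store backed by an RDBMS table, via
# moneta's activerecord adapter and its MonetaStore cache wrapper.
# assumes an activerecord connection is already established (e.g. in a
# rails app); the table name "moneta_cache" is my own placeholder.
require "active_support"
require "moneta"
require "active_support/cache/moneta_store"

store = Moneta.new(:ActiveRecord, table: "moneta_cache")
cache = ActiveSupport::Cache::MonetaStore.new(store: store)

cache.write("greeting", "hello")
cache.read("greeting")

# or, as the rails-wide cache store, in config/application.rb:
# config.cache_store = :moneta_store, store: Moneta.new(:ActiveRecord, table: "moneta_cache")
```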
writing this in part as a note-to-self so next time i have an itch that can be scratched this way, i remember moneta is there — to at least explore further. not sure where to find the docs, but here’s the source for the activerecord moneta adapter. it looks like i can create different caches that use different tables, which is the first thing i thought to ensure. the second thing i thought to look for — can it handle expiration, and purging expired keys? unclear, i can’t find it. maybe i could pr it if needed. and hey, if for some reason you want an activesupport::cache backed by pstore or berkeleydb (don’t do it!), or cassandra (you got me, no idea?), moneta has you covered too.

heroku release phase, rails db:migrate, and command failure

if you use capistrano to deploy a rails app, it will typically run rails db:migrate with every deploy, to apply any database schema changes. if you are deploying to heroku you might want to do the same thing. the heroku “release phase” feature makes this possible. (introduced in , the release phase feature is one of heroku’s more recent major features, as heroku dev has seemed to really stabilize and/or stagnate.) the release phase docs mention “running database schema migrations” as a use case, and there are a few blog posts on the web suggesting doing exactly that with rails. basically as simple as adding release: bundle exec rake db:migrate to your procfile.

while some of the blog posts do remind you that “if the release phase fails the app will not be deployed”, i have found the implications of this to be more confusing in practice than one would originally assume. particularly because on heroku changing a config var triggers a release, and it can be confusing to notice when such a release has failed.
it pays to consider the details a bit so you understand what’s going on, and possibly to consider somewhat more complicated release logic than simply calling out to rake db:migrate.

1) what if a config var change makes your rails app unable to boot?

i don’t know how unusual this is, but i actually had a real-world bug like this when in the process of setting up our heroku app. without confusing things with the details, we can simulate such a bug simply by putting this in, say, config/application.rb:

if ENV['FAIL_TO_BOOT']
  raise "i am refusing to boot"
end

obviously my real bug was weirder, but the result was the same: with some settings of one or more heroku configuration variables, the app would raise an exception during boot. and we hadn’t noticed this in testing, before deploying to heroku. now, on heroku, using the cli or web dashboard, set the config var FAIL_TO_BOOT to “true”.

without a release phase, what happens? the release is successful! if you look at the release in the dashboard (“activity” tab) or heroku releases, it shows up as successful. which means heroku brings up new dynos and shuts down the previous ones; that’s what a release is. the app crashes when heroku tries to start it in the new dynos. the dynos will be in “crashed” state when looked at in heroku ps or the dashboard. if a user tries to access the web app, they will get the generic heroku-level “could not start app” error screen (unless you’ve customized your heroku error screens, as usual). you can look in your heroku logs to see the error and stack trace that prevented app boot. downside: your app is down. upside: it is pretty obvious that your app is down, and (relatively) why.

with a db:migrate release phase, what happens? the rails db:migrate rake task has a dependency on the rails :environment task, meaning it boots the rails app before executing. you just changed your config variable FAIL_TO_BOOT such that the rails app can’t boot. changing the config variable triggered a release.
as part of the release, the db:migrate release phase is run… which fails. the release is not successful; it failed. you don’t get any immediate feedback to that effect in response to your heroku config:add command or on the dashboard gui in the “settings” tab. you may go about your business assuming it succeeded. if you look at the release in heroku releases or the dashboard “activity” tab you will see it failed. you do get an email that it failed. maybe you notice it right away, or maybe you notice it later, and have to figure out “wait, which release failed? and what were the effects of that? should i be worried?”

the effects are:

the config variable appears changed in heroku’s dashboard or in response to heroku config:get etc.

the old dynos, without the config variable change, are still running. they don’t have the change. if you open a one-off dyno, it will be using the old release, and have the old (eg) ENV['FAIL_TO_BOOT'] value.

any subsequent release attempts will keep failing, so long as the app is in a state (based on the current config variables) where it can’t boot.

again, this really happened to me! it is a fairly confusing situation. upside: your app is actually still up; even though you broke it, the old release that is running is still running, that’s good? downside: it’s really confusing what happened. you might not notice at first. things remain in a messed-up, inconsistent, and confusing state until you notice, figure out what’s going on, what release caused it, and how to fix it. it’s a bit terrifying that any config variable change could do this. but i guess most people don’t run into it like i did, since i haven’t seen it mentioned?

2) a heroku pg:promote is a config variable change that will create a release in which the db:migrate release phase fails.

heroku pg:promote is a command that will change which of multiple attached heroku postgreses is attached as the “primary” database, pointed to by the DATABASE_URL config variable.
for a typical app with only one database, you still might use pg:promote for a database upgrade process; for setting up or changing a postgres high-availability leader/follower; or, for what i was experimenting with it for, using heroku’s postgres-log-based rollback feature. i had assumed that pg:promote was a zero-downtime operation. but, in debugging its interaction with my release phase, i noticed that pg:promote actually creates two heroku releases. first it creates a release labelled “detach database”, in which there is no DATABASE_URL configuration variable at all. then it creates another release labelled “attach database”, in which the DATABASE_URL configuration variable is defined to its new value.

why does it do this instead of one release that just changes the DATABASE_URL? i don’t know. my app (like most rails and probably other apps) can’t actually function without DATABASE_URL set, so if that first release ever actually runs, it will just error out. does this mean there’s an instant with a “bad” release deployed, that pg:promote isn’t actually zero-downtime? i am not sure; it doesn’t seem right (i did file a heroku support ticket asking….). but under normal circumstances, either it’s not a problem, or most people(?) don’t notice.

but what if you have a db:migrate release phase? when heroku tries to do the first, detach release above, that release will fail. because it tries to run db:migrate, and it can’t do that without DATABASE_URL set, so it raises; the release phase exits in an error condition; the release fails. actually what happens is that without DATABASE_URL set, the rails app will assume a postgres url in a “default” location, try to connect to it, and fail, with an error message (hello googlers?) like:

ActiveRecord::ConnectionNotEstablished: could not connect to server: no such file or directory
    is the server running locally and accepting
    connections on unix domain socket "/var/run/postgresql/.s.pgsql. "?
now, the second, attach release is coming down the pike seconds later, so this is actually fine, and will be zero outage. we had a release that failed (so was never deployed), and seconds later the next correct release succeeds. great! the only problem is that we got an email notifying us that a release failed, and it’s also visible as failing in the heroku release list, etc.

a “background” release failing (one not in response to a git push or other code push to heroku) is already a confusing situation, and a “false positive” that actually means “nothing unexpected or problematic happened, just ignore this and carry on” is… really not something i want. (i call this “error notification crying wolf”, right? i try to make sure my error notifications never do it, because it takes your time away from flow unnecessarily, and/or makes it much harder to stay vigilant to real errors.)

now, there is a fairly simple solution to this particular problem. here’s what i did. i changed my heroku release phase from rake db:migrate to a custom rake task, say release: bundle exec rake my_custom_heroku_release_phase, defined like so:

task :my_custom_heroku_release_phase do
  if ENV['DATABASE_URL']
    Rake::Task["db:migrate"].invoke
  else
    $stderr.puts "\n!!! WARNING, no ENV['DATABASE_URL'], not running rake db:migrate as part of heroku release !!!\n\n"
  end
end

now that the first, detach release at least won’t fail, it has the same behavior as a “traditional” heroku app without a release phase.

swallow-and-report all errors?

when a release fails because a release phase has failed as a result of a git push to heroku, that’s quite clear and fine! but the confusion of the “background” release failure, triggered by a config var change, is high enough that part of me wants to just rescue StandardError in there, and prevent a failed release phase from ever exiting with a failure code, so heroku will never use a db:migrate release phase to abort a release.
just return the behavior to the pre-release-phase heroku behavior — you can put your app in a situation where it will be crashed and not work, but maybe that's better than a mysterious inconsistent heroku app state that happens in the background and that you find out about only through asynchronous email notifications from heroku that are difficult to understand/diagnose. it's all much more obvious. on the other hand, if a db:migrate has failed not because of some unrelated boot-process problem that is going to keep the app from launching even if it were released, but simply because the db:migrate itself actually failed… you kind of want the release to fail? that's good? keep the old release running, rather than a new release with code that expects a db migration that didn't happen? so i'm not really sure. if you did want to rescue-swallow-and-notify, the custom rake task for your heroku release logic — instead of just telling heroku to run a standard thing like db:migrate on release — is certainly convenient. also, do you really always want to db:migrate anyway? what about db:schema:load? another alternative… if you are deploying an app with an empty database, standard rails convention is to run rails db:schema:load instead of db:migrate. the db:migrate will probably work anyway, but will be slower and somewhat more error-prone. i guess this could come up on heroku with an initial deploy or (for some reason) a database that's been nuked and restarted, or perhaps a heroku "review app"? (i don't use those yet.) stevenharman has a solution that actually checks the database, and runs the appropriate rails task depending on state, here in this gist. i'd probably do it as a rake task instead of a bash file if i were going to do that. i'm not doing it at all yet.
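going back to the rescue-swallow-and-report idea: here's a hypothetical sketch of what that variant of the custom release task could look like. this is just a sketch of the approach, not something i'm running; the task name and the notification hook are made up stand-ins for whatever error-reporting service you use.

```ruby
require 'rake'
include Rake::DSL # makes the `task` method available outside a Rakefile

# Hypothetical release-phase task: run db:migrate when we can, but swallow
# and report any failure instead of failing the whole heroku release.
task :swallowing_heroku_release_phase do
  if ENV['DATABASE_URL']
    begin
      Rake::Task["db:migrate"].invoke
    rescue StandardError => e
      $stderr.puts "!!! db:migrate failed during heroku release: #{e.class}: #{e.message} !!!"
      # notify_error_service(e)  # hypothetical hook, e.g. your error-reporting gem
    end
  else
    $stderr.puts "!!! WARNING, no ENV['DATABASE_URL'], not running rake db:migrate !!!"
  end
end
```

as discussed above, whether you actually want this depends on whether a genuinely failed migration should abort the release, so treat it as a trade-off, not a recommendation.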
note that stevenharman's solution will actually catch a non-existing or non-connectable database and not try to run migrations… but it will print an error message and exit in that case, failing the release — meaning that you will get a failed release in the pg:promote case mentioned above! jrochkind general leave a comment june , june , code that lasts: sustainable and usable open source code a presentation i gave at online conference code4lib , on monday march . i have realized that the open source projects i am most proud of are a few that have existed for years now, increasing in popularity, with very little maintenance required. including traject and bento_search. while community aspects matter for open source sustainability, the task gets so much easier when the code requires less effort to keep alive, for maintainers and utilizers. using these projects as examples, can we as developers identify what makes code "inexpensive" to use and maintain over the long haul with little "churn", and how to do that? slides on google docs rough transcript (really the script i wrote for myself) hi, i'm jonathan rochkind, and this is "code that lasts: sustainable and usable open source code" so, who am i? i have been developing open source library software since , mainly in ruby and rails. over that time, i have participated in a variety of open source projects meant to be used by multiple institutions, and i've often seen us having challenges with long-term maintenance sustainability and usability of our software. this includes projects i have been instrumental in creating myself — we've all been there! we're used to thinking of this problem in terms of needing more maintainers. but let's first think more about what the situation looks like, before we assume what causes it. in addition to features or changes people want not getting done, it also can look like, for instance: being stuck using out-of-date dependencies like old, even end-of-lifed, versions of rails or ruby.
a reduction in software "polish" over time. what do i mean by "polish"? engineer richard schneeman writes: [quote] "when we say something is "polished" it means that it is free from sharp edges, even the small ones. i view polished software to be ones that are mostly free from frustration. they do what you expect them to and are consistent." i have noticed that software can start out very well polished, but over time lose that polish. this usually goes along with decreasing "cohesion" in software over time, a feeling that different parts of the software start to no longer tell the developer a consistent story together. while there can be an element of truth in needing more maintainers in some cases – zero maintainers is obviously too few — there are also ways that increasing the number of committers or maintainers can result in diminishing returns and additional challenges. one of the theses of fred brooks' famous book "the mythical man-month" is sometimes called "brooks' law": "under certain conditions, an incremental person when added to a project makes the project take more, not less time." why? one of the main reasons brooks discusses is the additional time taken for communication and coordination between more people – with every person you add, the number of connections between people goes up combinatorially. that may explain the phenomenon we sometimes see with so-called "design by committee", where "too many cooks in the kitchen" can produce inconsistency or excessive complexity. cohesion and polish require a unified design vision — that's not incompatible with increasing numbers of maintainers, but it does make it more challenging, because it takes more time to get everyone on the same page and iterate while maintaining a unifying vision.
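brooks' combinatorial point is easy to make concrete: n people have n*(n-1)/2 pairwise communication paths. a quick ruby illustration:

```ruby
# Pairwise communication paths between n people: "n choose 2".
def communication_paths(n)
  n * (n - 1) / 2
end

communication_paths(3)  # => 3
communication_paths(10) # => 45; tripling the team multiplies
                        #    coordination paths by fifteen
```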
(there's also more to be said here about the difference between just a bunch of committers committing pr's, and the maintainer's role of maintaining historical context and design vision for how all the parts fit together.) instead of assuming adding more committers or maintainers is the solution, can there instead be ways to reduce the amount of maintenance required? i started thinking about this when i noticed a couple projects of mine which had become more widely successful than i had any right to expect, considering how little maintenance was being put into them. bento_search is a toolkit for searching different external search engines in a consistent way. it's especially but not exclusively for displaying multiple search results in "bento box" style, which is what tito sierra from ncsu first called these little side-by-side search results. i wrote bento_search for use at a former job in . % of all commits to the project were made in . % of all commits in or earlier. (i gave it a bit of attention for a contracting project in .) but bento_search has never gotten a lot of maintenance, and i don't use it anymore myself. it's not in wide use, but i found it kind of amazing when i saw people giving me credit in conference presentations for the gem (thanks!), when i didn't even know they were using it and i hadn't been paying it any attention at all! it's still used by a handful of institutions for whom it just works with little attention from maintainers. (the screenshot is from cornell university libraries.) traject is a marc-to-solr indexing tool written in ruby (or, more generally, it can be a general-purpose extract-transform-load tool), that i wrote with bill dueber from the university of michigan in . we hoped it would catch on in the blacklight community, but for the first couple years, its uptake was slow. however, since then, it has come to be pretty popular in blacklight and samvera communities, and among a few other library technologists.
you can see the spikes of commit activity in the graph for a . release in and a . release in – but for the most part, at other times, nobody has really been spending much time on maintaining traject. every once in a while a community member submits a minor pull request, and it's usually me who reviews it. me and bill remain the only maintainers. and yet traject just keeps plugging along, picking up adoption and working well for adopters. so, this made me start thinking, based on what i've seen in my career: what are some of the things that might make open source projects both low-maintenance and successful in their adoption and ease-of-use for developers? one thing both of these projects did was take backwards compatibility very seriously. the first step there is following "semantic versioning", a set of rules whose main point is that releases can't include backwards-incompatible changes unless they are a new major version, like going from .x to . . this is important, but it alone is not enough to minimize backwards-incompatible changes that add maintenance burden to the ecosystem. if the real goal is preventing the pain of backwards incompatibility, we also need to limit the number of major version releases, and limit the number and scope of backwards-breaking changes in each major release! the bento_search gem has only had one major release, it's never had a . release, and it's still backwards compatible to its initial release. traject is on a .x release after years, but the major releases of traject have had extremely few backwards-breaking changes; most people could upgrade through major versions changing very little, or most often nothing, in their projects. so ok, sure, everyone wants to minimize backwards incompatibility, but that's easy to say — how do you do it? well, it helps to have less code overall, that changes less often overall – ok, again, great, but how do you do that?
parsimony is a word in general english that means "the quality of economy or frugality in the use of resources." in terms of software architecture, it means having as few moving parts as possible inside your code: fewer classes, types, components, entities, whatever: or, most fundamentally, i like to think of it in terms of minimizing the concepts in the mental model a programmer needs to grasp how the code works and what parts do what. the goal of architecture design is: what is the smallest possible architecture we can create to make [quote] "simple things simple and complex things possible", as computer scientist alan kay described the goal of software design. we can see this in bento_search, which has very few internal architectural concepts. the main thing bento_search does is provide a standard api for querying a search engine and representing the results of a search. these are consistent across different search engines, with a common metadata vocabulary for what results look like. this makes search engines interchangeable to calling code. and then it includes half a dozen or so search engine implementations for services i needed or wanted to evaluate when i wrote it. this search engine api at the ruby level can be used all by itself, even without the next part: the actual "bento style", which is built-in support for displaying search engine results in boxes on a page of your choice in a rails app, writing very little boilerplate code. traject has an architecture which basically has just three parts at the top. there is a reader which sends objects into the pipeline. there are some indexing rules which are transformation steps from source object to build an output hash object. and then a writer which translates the hash object to write to some store, such as solr. the reader, transformation steps, and writer are all independent and uncaring about each other, and can be mixed and matched. that's most of traject right there.
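that reader/rules/writer shape can be sketched in a few lines of ruby. this is a simplified illustration of the architecture, not traject's actual api; all names here are made up.

```ruby
# Simplified reader -> indexing rules -> writer pipeline. Each part only
# agrees on a data shape (records in, hashes out), so any part can be
# swapped without the others caring.
class Pipeline
  def initialize(reader:, rules:, writer:)
    @reader = reader   # anything with #each yielding source records
    @rules  = rules    # callables that fill in an output hash
    @writer = writer   # callable that persists one output hash
  end

  def run
    @reader.each do |record|
      output = {}
      @rules.each { |rule| rule.call(record, output) }
      @writer.call(output)
    end
  end
end
```

in the real thing the reader might parse marc files and the writer might talk to solr, but because the parts only agree on a data shape, any enumerable and any lambda will do here.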
it seems simple and obvious once you have it, but it can take a lot of work to end up with what's simple and obvious in retrospect! when designing code i'm often reminded of the apocryphal quote: "i would have written a shorter letter, but i did not have the time." and, to be fair, there's a lot of complexity within that "indexing rules" step in traject, but its design was approached the same way. we had use cases about supporting configuration settings in a file or on the command line, or about allowing re-usable custom transformation logic – what's the simplest possible architecture we can come up with to support those cases? ok, again, that sounds nice, but how do you do it? i don't have a paint-by-numbers, but i can say that for both these projects i took some time – a few weeks even – at the beginning to work out these architectures: lots of diagramming, some prototyping i was prepared to throw out, and in some cases "documentation-driven design" where i wrote some docs for code i hadn't written yet. for traject it was invaluable to have bill dueber at the university of michigan also interested in spending some design time up front, bouncing ideas back and forth – to actually intentionally go through an architectural design phase before the implementation. figuring out a good parsimonious architecture takes domain knowledge: what things your "industry" – other potential institutions – are going to want to do in this area, and specifically what developers are going to want to do with your tool. we're maybe used to thinking of "use cases" in terms of end-users, but it can be useful at the architectural design stage to formalize this in terms of developer use cases. what is a developer going to want to do, and how can i come up with a small number of software pieces she can use to assemble together to do those things?
when we said "make simple things simple and complex things possible", we can say domain analysis and use cases are identifying what things we're going to put in either or neither of those categories. the "simple thing" for bento_search, for instance, is just "do a simple keyword search in a search engine, and display results, without having the calling code need to know anything about the specifics of that search engine." another way to get a head start on solid domain knowledge is to start with another tool you have experience with, that you want to create a replacement for. before traject, i and other users used a tool written in java called solrmarc — i knew how we had used it, and where we had had roadblocks or things that we found harder or more complicated than we'd like, so i knew my goals were to make those things simpler. we're used to hearing arguments about avoiding rewrites, but like most things in software engineering, there can be pitfalls on either extreme. i was amused to notice that fred brooks, in the previously mentioned mythical man-month, makes some arguments in both directions. brooks famously warns about a "second-system effect", the [quote] "tendency of small, elegant, and successful systems to be succeeded by over-engineered, bloated systems, due to inflated expectations and overconfidence" – one reason to be cautious of a rewrite. but brooks in the very same book also writes [quote] "in most projects, the first system built is barely usable… hence plan to throw one away; you will, anyhow." it's up to us to figure out when we're in which case. i personally think an application is more likely to be bitten by the "second-system effect" danger of a rewrite, while a shared re-usable library is more likely to benefit from a rewrite (in part because a reusable library is harder to change in place without disruption!). we could sum up a lot of different principles as variations of "keep it small".
both traject and bento_search are tools that developers can use to build something. bento_search just puts search results in a box on a page; the developer is responsible for the page and the overall app. yes, this means that you have to be a ruby developer to use it. does this limit its audience? while we might aspire to make tools that even not-really-developers can just use out of the box, my experience has been that our open source attempts at shrinkwrapped "solutions" often end up still needing development expertise to successfully deploy. keeping our tools simple and small, and not trying to supply a complete app, can actually leave more time for these developers to focus on meeting local needs, instead of fighting with a complicated framework that doesn't do quite what they need. it also means we can limit interactions with any external dependencies. traject was developed for use with a blacklight project, but traject code does not refer to blacklight or even rails at all, which means new releases of blacklight or rails can't possibly break traject. bento_search, by doing one thing and not caring about the details of its host application, has kept working from rails . all the way up to current rails . with pretty much no changes needed except to the test suite setup. sometimes when people try to have lots of small tools working together, it can turn into a nightmare where you get a pile of cascading software breakages every time one piece changes. keeping assumptions and couplings down is what lets us avoid this maintenance nightmare. and another way of keeping it small is: don't be afraid to say "no" to features when you can't figure out how to fit them in without serious harm to the parsimony of your architecture. your domain knowledge is what lets you take an educated guess as to what features are core to your audience and need to be accommodated, and which are edge cases and can be fulfilled by extension points, or sometimes not at all.
by extension points we mean we prefer opportunities for developer-users to write their own code which works with your tools, rather than trying to build less commonly needed features in as configurable features. as an example, traject does include some built-in logic, but one of its extension-point use cases is making sure it's simple to add whatever transformation logic a developer-user wants, and have it look just as "built-in" as what came with traject. and since traject makes it easy to write your own reader or writer, its built-in readers and writers don't need to include every possible feature – we plan for developers writing their own if they need something else. looking at bento_search, it makes it easy to write your own search engine adapter — one that will be usable interchangeably with the built-in ones. also, bento_search provides a standard way to add custom search arguments specific to a particular adapter – these won't be directly interchangeable with other adapters, but they are provided for in the architecture, and won't break in future bento_search releases – it's another form of extension point. these extension points are the second half of "simple things simple, complex things possible" – the complex things possible. planning for them is part of understanding your developer use cases, and designing an architecture that can easily handle them. ideally, it takes no extra layers of abstraction to handle them; you are using the exact architectural join points the out-of-the-box code is using, just supplying custom components. so here's an example of how these things worked out in practice with traject — pretty well, i think. stanford ended up writing a package of extensions to traject called trajectplus, to take care of some features they needed that traject didn't provide. commit history suggests it was written in , which was traject . days.
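as an aside, the interchangeable-adapter idea behind these extension points is just duck typing: anything implementing the same small contract works wherever a built-in does. a miniature hypothetical sketch (the names here are illustrative, not bento_search's real api):

```ruby
# Any "engine" responding to #search(query) => array of result hashes is
# interchangeable with the built-in adapters; calling code never knows
# the difference. Class and method names are hypothetical.
class StubSearchEngine
  def search(query)
    [{ title: "Result for #{query}" }]
  end
end

# Calling code depends only on the #search contract, not on any
# particular engine class.
def bento_titles(engine, query)
  engine.search(query).map { |result| result[:title] }
end
```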
i can't recall, but i'd guess they approached me with change requests to traject at that time and i put them off because i couldn't figure out how to fit them in parsimoniously, or didn't have time to figure it out. but the fact that they were *able* to extend traject in this way i consider a validation of traject's architecture: they could make it do what they needed, without much coordination with me, and use it in many projects (i think beyond just stanford). much of the . release of traject was "back-port"ing some features that trajectplus had implemented, including out-of-the-box support for xml sources. but i didn't always do them with the same implementation or api as trajectplus – this is another example of being able to use a second go at it to figure out how to do something even more parsimoniously, sometimes figuring out small changes to traject's architecture to support flexibility in the right dimensions. when traject . came out, the trajectplus users didn't necessarily want to retrofit all their code to the new traject way of doing it. but trajectplus could still be used with traject . with few or possibly no changes, doing things the old way; they weren't forced to upgrade to the new way. this is a huge win for traject's backwards compat – everyone was able to do what they needed to do, even taking separate paths, with relatively minimized maintenance work. as i think about these things philosophically, one of my takeaways is that software engineering is still a craft – and software design is a serious thing to be studied and engaged in. especially for shared libraries rather than local apps, it's not always to be dismissed as so-called "bike-shedding". it's worth it to take time to think about design, self-reflectively and with your peers, instead of just rushing to put out fires or deliver features; it will reduce maintenance costs and increase value over the long term.
and i want to just briefly plug "kithe", a project of mine which tries to be guided by these design goals to create a small focused toolkit for building digital collections applications in rails. i could easily talk about all of this for another twenty minutes, but that's our time! i'm always happy to talk more; find me on slack or irc or email. this last slide has some sources mentioned in the talk. thanks for your time! jrochkind general leave a comment march , march , product management in my career working in the academic sector, i have realized that one thing that is often missing from in-house software development is "product management." but what does that mean exactly? you don't know it's missing if you don't even realize it's a thing, and people can use different terms to mean different roles/responsibilities. basically: deciding what the software should do. this is not about colors on screen or margins (what our stakeholders often enjoy micro-managing) — i'd consider those still the how of doing it, rather than the what to do. the what is often at a much higher level, about what features or components to develop at all. when done right, it is going to be based on both knowledge of the end-user's needs and preferences (user research), but also knowledge of internal stakeholders' desires and preferences (overall organizational strategy, but also just practically what is going to make the right people happy to keep us resourced). also knowledge of the local capacity: what pieces do we need to put in place to get these things developed? when done seriously, it will necessarily involve prioritization — there are many things we could possibly do, some subset of them we very well may do eventually, but which ones should we do now? my experience tells me it is a very big mistake to try to have a developer doing this kind of product management. not because a developer can't have the right skillset to do it.
but because having the same person leading development and product management is a mistake. the developer is too close to the development lens, and there's just a clarity that comes when these roles are separate. my experience also tells me that it's a mistake to have a committee doing these things, much as that is popular in the academic sector. because, well, just of course it is. but okay, this is all still pretty abstract. things might become more clear if we get more specific about the actual tasks and work of this kind of product management role. i found damilola ajiboye's blog post on "product manager vs product marketing manager vs product owner" very clear and helpful here. while it is written so as to distinguish between three different product-management-related roles, ajiboye also acknowledges that in a smaller organization "a product manager is often tasked with the duty of these roles." regardless of whether the responsibilities are to be done by one or two or three people, ajiboye's post serves as a concise listing of the work to be done in managing a product — deciding the what of the product, in an ongoing iterative and collaborative manner, so that developers and designers can get to the how and to implementation. i recommend reading the whole article, and i'll excerpt much of it here, slightly rearranged. the product manager these individuals are often referred to as mini ceos of a product. they conduct customer surveys to figure out the customer's pain and build solutions to address it. the pm also prioritizes what features are to be built next and prepares and manages a cohesive and digital product roadmap and strategy. the product manager will interface with the users through user interviews/feedback surveys or other means to hear directly from the users. they will come up with hypotheses alongside the team and validate them through prototyping and user testing.
they will then create a strategy on the feature and align the team and stakeholders around it. the pm who is also the chief custodian of the entire product roadmap will, therefore, be tasked with the duty of prioritization. before going ahead to carry out research and strategy, they will have to convince the stakeholders if it is a good choice to build the feature in context at that particular time or wait a bit longer based on the content of the roadmap. the product marketing manager the pmm communicates vital product value — the "why", "what" and "when" of a product to intending buyers. he manages the go-to-market strategy/roadmap and also oversees the pricing model of the product. the primary goal of a pmm is to create demand for the products through effective messaging and marketing programs so that the product has a shorter sales cycle and higher revenue. the product marketing manager is tasked with market feasibility and discovering if the features being built align with the company's sales and revenue plan for the period. they also make research on how sought-after the feature is being anticipated and how it will impact the budget. they communicate the values of the feature; the why, what, and when to potential buyers — in this case users in countries with poor internet connection. [while expressed in terms of a for-profit enterprise selling something, i think it's not hard to translate this to a non-profit or academic environment. you still have an audience whose uptake you need to be successful, whether internal or external. — jrochkind] the product owner a product owner (po) maximizes the value of a product through the creation and management of the product backlog, and the creation of user stories for the development team. the product owner is the customer's representative to the development team. he addresses customer's pain points by managing and prioritizing a visible product backlog.
the po is the first point of call when the development team needs clarity about interpreting a product feature to be implemented. the product owner will first have to prioritize the backlog to see if there are no important tasks to be executed and if this new feature is worth leaving whatever is being built currently. they will also consider the development effort required to build the feature, i.e. the time, tools, and skill set that will be required. they will be the one to tell if the expertise of the current developers is enough or if more engineers or designers are needed to be able to deliver at the scheduled time. the product owner is also armed with the task of interpreting the product/feature requirements for the development team. they serve as the interface between the stakeholders and the development team. when you have someone(s) doing these roles well, it ensures that the development team is actually spending time on things that meet user and business needs. i have found that it makes things so much less stressful and more rewarding for everyone involved. when you have nobody doing these roles, or someone doing it in a cursory or unintentional way not recognized as part of their core job responsibilities, or have a lead developer trying to do it on top of development, i find it leads to feelings of: spinning wheels, everything-is-an-emergency, lack of appreciation, miscommunication and lack of shared understanding between stakeholders and developers, general burnout and dissatisfaction — and at the root, a product that is not meeting user or business needs well, leading to these inter-personal and personal problems. jrochkind general leave a comment february , rails auto-scaling on heroku we are investigating moving our medium-small-ish rails app to heroku. we looked at both the rails autoscale add-on available on the heroku marketplace, and the hirefire.io service, which is not listed on the heroku marketplace and which i almost didn't realize existed.
i guess hirefire.io doesn't have any kind of a partnership with heroku, but it still uses the heroku api to provide an autoscale service. hirefire.io ended up looking more fully-featured and lower-priced than rails autoscale; so the main purpose of this post is just trying to increase visibility of hirefire.io and therefore competition in the field, which benefits us consumers. background: interest in auto-scaling rails background jobs at first i didn't realize there was such a thing as "auto-scaling" on heroku, but once i did, i realized it could indeed save us lots of money. i am more interested in scaling rails background workers than web workers though — our background workers are busiest when we are doing "ingests" into our digital collections/digital asset management system, so the work is highly variable. auto-scaling up to more workers when there is ingest work piling up can give us really nice ingest throughput while keeping costs low. on the other hand, our web traffic is fairly low and probably isn't going to go up by an order of magnitude (non-profit cultural institution here). and after discovering that a "standard" dyno is just too slow, we will likely be running a performance-m or performance-l anyway — which likely can handle all anticipated traffic on its own. if we have an auto-scaling solution, we might configure it for web dynos, but we are especially interested in good features for background scaling. there is a heroku built-in autoscale feature, but it only works for performance dynos, and won't do anything for rails background job dynos, so that was right out. the rails autoscale add-on on the heroku marketplace could work for rails bg jobs; and then we found hirefire.io. pricing: pretty different hirefire as of now january , hirefire.io has pretty simple and affordable pricing. $ /month/heroku application. auto-scaling as many dynos and process types as you like.
hirefire.io by default can only check your app's metrics to decide if a scaling event can occur once per minute. if you want more frequent than that (up to once every seconds), you have to pay an additional $ /month, for $ /month/heroku application. even though it is not a heroku add-on, hirefire does advertise that they bill pro-rated to the second, just like heroku and heroku add-ons. rails autoscale rails autoscale has a more tiered approach to pricing that is based on the number and type of dynos you are scaling. starting at $ /month for - standard dynos, the next tier up is $ for up to standard dynos, all the way up to $ (!) for to dynos. if you have performance dynos involved, it's from $ /month for - performance dynos, up to $ /month for up to performance dynos. for our anticipated uses… if we only scale bg dynos, i might want to scale from (low) or to (high) or standard dynos, so we'd be at $ /month. our web dynos are likely to be performance, and i wouldn't want/need to scale more than probably , but that puts us into the performance dyno tier, so we're looking at $ /month. this is of course significantly more expensive than hirefire.io's flat rate. metric resolution since hirefire had an additional charge for finer than -minute resolution on checks for autoscaling, we'll discuss resolution here in this section too. rails autoscale has the same resolution for all tiers, and i think it's generally seconds, so approximately the same as hirefire if you pay the extra $ for increased resolution. configuration let's look at configuration screens to get a sense of feature-sets. rails autoscale web dynos to configure web dynos, here's what you get, with default values: the metric rails autoscale uses for scaling web dynos is time in the heroku routing queue, which seems right to me — when things are spending longer in the heroku routing queue before getting to a dyno, it means scale up.
worker dynos

for scaling worker dynos, rails autoscale can scale a dyno type named “worker” — it can understand the ruby queuing libraries sidekiq, resque, delayed job, or que. i’m not certain if there are options for writing custom adapter code for other backends. here’s what the configuration options are — sorry, these aren’t the defaults; i’ve already customized them and lost track of what the defaults are. you can see that worker dynos are scaled based on the metric “number of jobs queued”, and you can tell it to only pay attention to certain queues if you want.

hirefire

hirefire has far more options for customization than rails autoscale, which can make it a bit overwhelming, but also potentially more powerful.

web dynos

you can actually configure as many heroku process types as you have for autoscale, not just ones named “web” and “worker”. and for each, you have your choice of several metrics to be used as scaling triggers. for web, i think the queue time (percentile, average) metric matches what rails autoscale does, configured to a percentile, and is probably the best to use unless you have a reason to use another. (“rails autoscale tracks the th percentile queue time, which for most applications will hover well below the default threshold of ms.”) here’s what configuration hirefire makes available if you are scaling on “queue time” like rails autoscale; configuration may vary for other metrics. i think if you fill in the right numbers, you can configure it to work equivalently to rails autoscale.

worker dynos

if you have more than one heroku process type for workers — say, working on different queues — hirefire can scale them independently, with entirely separate configuration. this is pretty handy, and i don’t think rails autoscale offers this. (update: i may be wrong — rails autoscale says they do support this, so check on it yourself if it matters to you.)
for worker dynos, you could choose to scale based on actual “dyno load”, but i think this is probably mostly for types of processes where there isn’t the ability to look at “number of jobs”. a “number of jobs in queue” metric like rails autoscale uses makes a lot more sense to me as an effective metric for scaling queue-based bg workers.

hirefire’s metric is slightly different than rails autoscale’s “jobs in queue”. for recognized ruby queue systems (a larger list than rails autoscale’s; and you can write your own custom adapter for whatever you like), it actually measures jobs in queue plus workers currently busy. so queued+in-progress, rather than rails autoscale’s just queued. i actually have a bit of trouble wrapping my head around the implications of this, but basically, it means that hirefire’s “jobs in queue” metric strategy is intended to try to scale all the way to emptying your queue, or reaching your max scale limit, whichever comes first. i think this may make sense and work out at least as well as, or perhaps better than, rails autoscale’s approach?

here’s what configuration hirefire makes available for worker dynos scaling on the “job queue” metric. since the metric isn’t the same as rails autoscale’s, we can’t configure this to work identically, but there are a whole bunch of configuration options, some similar to rails autoscale’s. the most important thing here is that “ratio” configuration. it may not be obvious, but with the way the hirefire metric works, you are basically meant to configure this to equal the number of workers/threads you have on each dyno. i have it configured to because my heroku worker processes use resque, with resque_pool, configured to run resque workers on each dyno. if you use sidekiq, set ratio to your configured concurrency — or if you are running more than one sidekiq process, processes*concurrency. basically, how many jobs your dyno can be concurrently working is what you should normally set for ‘ratio’.
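the “ratio” arithmetic can be made concrete with a sketch of how i understand hirefire's job-queue strategy: desired dynos is roughly the queued-plus-in-progress metric divided by how many jobs one dyno can work at once. this is my own illustration (the names and the min/max clamping are assumptions), not hirefire's documented algorithm:

```ruby
# Sketch: HireFire's "job queue" metric is queued + in-progress jobs.
# "ratio" should equal how many jobs one dyno works concurrently
# (e.g. resque_pool worker count, or sidekiq processes * concurrency).
def desired_worker_dynos(queued:, in_progress:, ratio:, min: 0, max: 10)
  metric = queued + in_progress
  (metric.to_f / ratio).ceil.clamp(min, max)
end
```

with ratio matching per-dyno concurrency, the target scale is whatever would drain the whole queue at once, up to the max, which matches the "scale all the way to emptying your queue, or reaching your max scale limit" behavior described above.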
hirefire is not a heroku add-on

hirefire isn’t actually a heroku add-on. in addition to meaning separate invoicing, there can be some other inconveniences. since hirefire can only interact with the heroku api, for some metrics (including the “queue time” metric that is probably optimal for web dyno scaling) you have to configure your app to log regular statistics to heroku’s “logplex” system. this can add a lot of noise to your log, and for heroku logging add-ons that are tiered based on number of log lines or bytes, it can push you up to higher pricing tiers. if you use papertrail, i think you should be able to use its log filtering feature to solve this, keep that noise out of your logs, and avoid impacting log data transfer limits. however, if you ever have cause to look at heroku’s raw logs, that noise will still be there.

support and docs

i asked a couple questions of both hirefire and rails autoscale as part of my evaluation, and got back well-informed and easy-to-understand answers quickly from both. support for both seems to be great. i would say the documentation is decent-but-not-exhaustive for both products; hirefire may have slightly more complete documentation.

other features?

there are other things you might want to compare: various kinds of observability (bar chart or graph of dynos or observed metrics) and notification. i don’t have time to get into the details (and didn’t actually spend much time exploring them to evaluate), but they seem to offer roughly similar features.

conclusion

rails autoscale is quite a bit more expensive than hirefire.io’s flat rate, once you get past rails autoscale’s most basic tier (scaling no more than standard dynos). it’s true that autoscaling saves you money over not autoscaling, so even an expensive price could be considered a ‘cut’ of that, and possibly for many ecommerce sites even $ a month might be a drop in the bucket (!)….
but this price difference is so significant with hirefire (which has a flat rate regardless of dynos) that it seems to me it would take a lot of additional features/value to justify, and it’s not clear that rails autoscale has any feature advantage. in general, hirefire.io seems to have more features and flexibility. until , hirefire.io could only analyze metrics with -minute resolution, so perhaps that was a “killer feature”? honestly, i wonder if this price difference is sustained by rails autoscale only because most customers aren’t aware of hirefire.io, it not being listed on the heroku marketplace? single-invoice billing is handy, but probably not worth $ + a month. i guess hirefire’s logplex noise is a bit inconvenient? or is there something else i’m missing? pricing competition is good for the consumer. and are there any other heroku autoscale solutions, that can handle rails bg job dynos, that i still don’t know about?

update, a day after writing: djcp on a reddit thread writes:

i used to be a principal engineer for the heroku add-ons program. one issue with hirefire is they request account level oauth tokens that essentially give them ability to do anything with your apps, where rails autoscaling worked with us to create a partnership and integrate with our “official” add-on apis that limits security concerns and are scoped to the application that’s being scaled. part of the reason for hirefire working the way it does is historical, but we’ve supported the endpoints they need to scale for “official” partners for years now. a lot of heroku customers use hirefire so please don’t think i’m spreading fud, but you should be aware you’re giving a third party very broad rights to do things to your apps. they probably won’t, of course, but what if there’s a compromise? “official” add-on providers are given limited scoped tokens to (mostly) only the actions / endpoints they need, minimizing blast radius if they do get compromised.
you can read some more discussion at that thread.

jrochkind general comments january , january ,

managed solr saas options

i was recently looking for managed solr “software-as-a-service” (saas) options, and had trouble figuring out what was out there, so i figured i’d share what i learned — even though my knowledge here is far from exhaustive, and i have only looked seriously at one of the options i found. the only managed solr options i found were: websolr, searchstax, and opensolr. of these, i think websolr and searchstax are the more well-known; i couldn’t find anyone with experience with opensolr, which perhaps is newer. of them all, searchstax is the only one i actually took for a test drive, so i will have the most to say about it.

why we were looking

we run a fairly small-scale app, whose infrastructure is currently self-managed aws ec instances, running respectively: 1) a rails web app, 2) bg workers for the rails web app, 3) postgres, and 4) solr. oh yeah, there’s also a redis running on one of those servers, on # with pg or # with solr, i forget. currently we manage this all ourselves, right on the ec . but we’re looking to move as much as we can into “managed” services. perhaps we’ll move to heroku. perhaps we’ll use hatchbox. or if we do stay on aws resources we manage directly, we’d look at things like using an aws rds postgres instead of installing it on an ec ourselves, an aws elasticache for redis, maybe look into elastic beanstalk, etc. but no matter what we do, we need a solr, and we’d like to get it managed. hatchbox has no special solr support, aws doesn’t have a solr service, and heroku does have a solr add-on, but you can also use any solr with it — we’ll get to that later.

our current solr use is pretty small scale. we don’t run “solrcloud mode”, just legacy ordinary solr. we only have around , documents in there (tiny for solr), and our index size is only mb.
our traffic is pretty low — when i tried to figure out how low, it doesn’t seem we have sufficient logging turned on to answer that specifically, but using proxy metrics to guess, i’d say k- k requests a day, query as well as add. this is a pretty small solr installation, although it is used centrally for the primary functions of the (fairly low-traffic) app. it currently runs on an ec t a.small, which is a “burstable” ec type with only g of ram. it does have two vcpus (that is, one core with ‘hyperthreading’). the t a.small ec instance only costs $ /month at the on-demand price! we know we’ll be paying more for managed solr, but we want to get out of the business of managing servers — we no longer really have the staff for it.

websolr (didn’t actually try out)

websolr is the only managed solr currently listed as a heroku add-on. it is also available as a managed solr independent of heroku. the pricing in the heroku plans vs the independent plans seems about the same. as a heroku add-on there is a $ “staging” plan that doesn’t exist in the independent plans. (unlike some other heroku add-ons, no time-limited free plan is available for websolr.) but once we go up from there, the plans seem to line up.

starting at $ /month for:
- million document limit
- k requests/day
- index
- mb storage
- concurrent requests limit (this limit is not mentioned on the independent pricing page?)

next level up is $ /month for:
- million document limit
- k requests/day
- . gb storage
- concurrent request limit (again, concurrent request limits aren’t mentioned on the independent pricing page)

as you can see, websolr has their plans metered by usage. $ /month is around the price range we were hoping for (we’ll need two: one for staging, one for production). our small solr is well under million documents and ~ gb storage, and we do only use one index at present. however, the k requests/day limit i’m not sure about — even if we fit under it, we might be pushing up against it.
and the “concurrent request” limit simply isn’t one i’m even used to thinking about. on a self-managed solr it hasn’t really come up. what does “concurrent” mean exactly in this case, and how is it measured? with puma web workers and sometimes a possibly multi-threaded batch index going on, could we exceed a limit of ? seems plausible. what happens when they are exceeded? your solr request results in an http error! do i need to now write the app to rescue those gracefully, or use connection pooling to try to avoid them, or something? having to rewrite the way our app functions for a particular managed solr is the last thing we want to do. (although it’s not entirely clear if those connection limits exist on the non-heroku-plugin plans, i suspect they do?)

and in general, i’m not thrilled with the way the pricing works here, or the price points. i am positive that for a lot of (eg) heroku customers an additional $ * =$ /month is peanuts not even worth accounting for, but for us, a small non-profit whose app’s traffic does not scale with revenue, that starts to be real money.

it is not clear to me if websolr installations (at “standard” plans) are set up in “solrcloud mode” or not; i’m not sure what apis exist for uploading your custom schema.xml (which we’d need to do), or if they expect you to do this only manually through a web ui (that would not be good); i’m not sure if you can upload custom solrconfig.xml settings (this may be running on a shared solr instance with a standard solrconfig.xml?). basically, all of this made websolr not the first one we looked at.

does it matter if we’re on heroku using a managed solr that’s not a heroku add-on?

i don’t think so. in some cases, you can get a better price from a heroku add-on than you could get from that same vendor off heroku, or from other competitors. but that doesn’t seem to be the case here, and other than that, does it matter?
well, all heroku add-ons are required to bill you by-the-minute, which is nice but not really crucial; other forms of billing could also be okay at the right price. with a heroku add-on, your billing is combined into one heroku invoice, there is no need to give a credit card to anyone else, and it can be tracked using heroku tools. which is certainly convenient and a plus, but not essential if the best tool for the job is not a heroku add-on. and as a heroku add-on, websolr provides a websolr_url heroku config/env variable automatically to code running on heroku. ok, that’s kind of nice, but it’s not a big deal to set a solr_url heroku config manually referencing the appropriate address. i suppose as a heroku add-on, websolr also takes care of securing and authenticating connections between the heroku dynos and the solr, so we need to make sure we have a reasonable way to do this with any alternative.

searchstax (did take it for a spin)

searchstax’s pricing tiers are not based on metering usage. there are no limits based on requests/day or concurrent connections. searchstax runs on dedicated-to-you individual solr instances (i would guess running on dedicated-to-you individual (eg) ec instances, but i’m not sure). instead, the pricing is based on the size of the host running solr. you can choose to run on instances deployed to aws, google cloud, or azure. we’ll be sticking to aws (the others, i think, have a slight price premium). while searchstax gives you a pricing page that looks like the “new-way-of-doing-things” transparent pricing, in fact there isn’t really enough info on the public pages to see all the price points and understand what you’re getting; there is still a kind of “talk to a salesperson who has a price sheet” thing going on.
what i think i have figured out from talking to a salesperson and support is that the “silver” plans (“starting at $ a month”, although we’ll say more about that in a bit) are basically: we give you a solr, we don’t provide any technical support for solr itself. while the “gold” plans (“from $ /month”) are actually about paying for solr consultants to set up and tune your schema/index etc. that is not something we need, and $ +/month is way more than the price range we are looking for.

while the searchstax pricing/plan pages kind of imply the “silver” plan is not suitable for production, in fact there is no real reason not to use it for production, i think, and the salesperson i talked to confirmed that — just reaffirming that you are on your own managing the solr configuration/setup. that’s fine, that’s what we want; we just don’t want to manage the os or set up the solr or upgrade it etc. the silver plans have no sla, but as far as i can tell their uptime is just fine. the silver plans only guarantee -hour support response time — but for the couple support tickets i filed asking questions while under a free -day trial (oh yeah, that’s available), i got prompt same-day responses, and knowledgeable responses that answered my questions.

so a “silver” plan is what we are interested in, but the pricing is not actually transparent. $ /month is for the smallest instance available, and only if you prepay/contract for a year. they call that small instance an ndn and it has gb of ram and gb of storage. if you pay-as-you-go instead of contracting for a year, that already jumps to $ /month. (that price is available on the trial page.) when you are paying-as-you-go, you are actually billed per-day, which might not be as nice as heroku’s per-minute, but it’s pretty okay, and useful if you need to bring up a temporary solr instance as part of a migration/upgrade or something like that.
the next step up is an “ndn ”, which has g of ram and gb of storage, and has an ~$ /month pay-as-you-go price — you can find that price if you sign up for a free trial. the discount price for an annual contract is a discount similar to the ndn ’s %, $ /month — that price i got only from a salesperson, so i don’t know if it’s always stable. it only occurs to me now that they don’t tell you how many cpus are available.

i’m not sure if i can fit our solr in the g ndn , but i am sure i can fit it in the g ndn with some headroom, so i didn’t look at plans above that — but they are available, still under “silver”, with prices going up accordingly.

all searchstax solr instances run in “solrcloud” mode — these ndn and ndn ones we’re looking at just run one node with one zookeeper, but still in cloud mode. there are also “silver” plans available with more than one node in a “high availability” configuration, but the prices start going up steeply, and we weren’t really interested in that. because it’s solrcloud mode, though, you can use the standard solr api for uploading your configuration. it’s just solr! so no arbitrary usage limits, no features disabled.

the searchstax web console seems competently implemented; it lets you create and delete individual solr “deployments”, manage accounts to log in to the console (on the “silver” plan you only get two, or can pay $ /month/account for more, nah), and set up auth for a solr deployment. they support ip-based authentication or http basic auth to the solr (no limit to how many solr basic auth accounts you can create). http basic auth is great for us, because trying to do ip-based auth from somewhere like heroku isn’t going to work. all solrs are available over https/ssl — great!

searchstax also has their own proprietary http api that lets you do most anything, including creating/destroying deployments, managing solr basic auth users — basically everything.
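since it's standard solrcloud mode, uploading our configuration can go through solr's regular configsets api (POST /solr/admin/configs?action=UPLOAD with a zipped configset), using the http basic auth searchstax lets you set up. here's a minimal sketch with net/http; the host, credentials, and configset name are placeholders, not anything searchstax-specific:

```ruby
require "net/http"
require "uri"

# Build a request to upload a zipped configset via the standard SolrCloud
# Configsets API, authenticated with HTTP basic auth. The caller would
# send it with Net::HTTP.start(uri.host, uri.port, use_ssl: true).
def configset_upload_request(base_url, name:, user:, password:, zip_bytes:)
  uri = URI.join(base_url, "/solr/admin/configs")
  uri.query = URI.encode_www_form(action: "UPLOAD", name: name)
  req = Net::HTTP::Post.new(uri)
  req.basic_auth(user, password)
  req["Content-Type"] = "application/octet-stream"
  req.body = zip_bytes # the zipped configset (conf/ directory contents)
  req
end
```

the point is just that it's plain solr: no vendor sdk needed, any http client and the stock solr admin api will do.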
there is some api that duplicates the solrcloud api for adding configsets; i don’t think there’s a good reason to use it instead of the standard solrcloud api, although their docs try to point you to it. there’s even some kind of webhooks for alerts! (which i haven’t really explored.)

basically, searchstax just seems to be a sane and rational managed solr option; it has all the features you’d expect/need/want for dealing with such. the prices seem reasonable-ish, and generally more affordable than websolr, especially if you stay in “silver” and “one node”. at present, we plan to move forward with it.

opensolr (didn’t look at it much)

i have the least to say about this, having spent the least time with it, after spending time with searchstax and seeing that it met our needs. but i wanted to make sure to mention it, because it’s the only other managed solr i am even aware of. definitely curious to hear from any users. here is the pricing page. the prices seem pretty decent, perhaps even cheaper than searchstax, although it’s unclear to me what you get. does “ solr clusters” mean that it’s not solrcloud mode? after seeing how useful solrcloud apis are for management (and having this confirmed by many of my peers in other libraries/museums/archives who choose to run solrcloud), i wouldn’t want to do without it. so i guess that pushes us to the “executive” tier? which at $ /month (billed yearly!) is still just fine, around the same as searchstax. but they do limit you to one solr index; i prefer searchstax’s model of just giving you certain host resources and letting you do what you want with them. it does say “shared infrastructure”. might be worth investigating; curious to hear more from anyone who did.

now, what about elasticsearch?

we’re using solr mostly because that’s what various collaborative and open source projects in the library/museum/archive world have been doing for years, since before elasticsearch even existed.
so there are various open source libraries and toolsets available that we’re using. but for whatever reason, there seem to be many more managed elasticsearch saas options available, at possibly much cheaper price points. is this because the elasticsearch market is just bigger? or is elasticsearch easier/cheaper to run in a saas environment? or what? i don’t know.

but there’s the controversial aws elasticsearch service; there’s elastic cloud, “from the creators of elasticsearch”. on heroku, which lists one solr add-on, there are three elasticsearch add-ons listed: elasticcloud, bonsai elasticsearch, and searchbox elasticsearch. if you just google “managed elasticsearch” you immediately see or other names. i don’t know enough about elasticsearch to evaluate them. they seem at first glance at pricing pages to be more affordable, but i may not know what i’m comparing, and may be looking at tiers that aren’t actually usable for anything or will have hidden fees. but i know there are definitely many more managed elasticsearch saas options than solr ones.

i think elasticsearch probably does everything our app needs. if i were to start from scratch, i would definitely consider elasticsearch over solr just based on how many more saas options there are. while it would require some knowledge-building (i have developed a lot of knowledge of solr and zero of elasticsearch) and rewriting some parts of our stack, i might still consider switching to es in the future; we don’t do anything too too complicated with solr that would be too too hard to switch to es, probably.

gem authors, check your release sizes

most gems should probably be a couple hundred kb at most. i’m talking about the package actually stored in and downloaded from rubygems by an app using the gem. after all, source code is just text, and it doesn’t take up much space. ok, maybe some gems have a couple images in there.
but if you are looking at your gem on rubygems and realize that it’s mb or bigger… and that it seems to be getting bigger with every release… something is probably wrong and worth looking into.

one way to look into it is to look at the actual gem package. if you use the handy bundler rake task to release your gem (and i recommend it), you have a ./pkg directory in the source tree you last released from. inside it are “.gem” files for each release you’ve made from there, unless you’ve cleaned it up recently. .gem files are just .tar files, it turns out, that have more tar and gz files inside them etc. we can go into one, extract its contents, and use the handy unix utility du -sh to see what is taking up all the space.

how i found the bytes

jrochkind-chf kithe (master ?) $ cd pkg
jrochkind-chf pkg (master ?) $ ls
kithe- . . .beta .gem  kithe- . . .pre.rc .gem  kithe- . . .gem  kithe- . . .gem  kithe- . . .pre.beta .gem  kithe- . . .gem
jrochkind-chf pkg (master ?) $ mkdir exploded
jrochkind-chf pkg (master ?) $ cp kithe- . . .gem exploded/kithe- . . .tar
jrochkind-chf pkg (master ?) $ cd exploded
jrochkind-chf exploded (master ?) $ tar -xvf kithe- . . .tar
x metadata.gz
x data.tar.gz
x checksums.yaml.gz
jrochkind-chf exploded (master ?) $ mkdir unpacked_data_tar
jrochkind-chf exploded (master ?) $ tar -xvf data.tar.gz -C unpacked_data_tar/
jrochkind-chf exploded (master ?) $ cd unpacked_data_tar/
/users/jrochkind/code/kithe/pkg/exploded/unpacked_data_tar
jrochkind-chf unpacked_data_tar (master ?) $ du -sh *
. k  mit-license
k    readme.md
. k  rakefile
k    app
. k  config
k    db
k    lib
m    spec
jrochkind-chf unpacked_data_tar (master ?) $ cd spec
jrochkind-chf spec (master ?) $ du -sh *
. k  derivative_transformers
m    dummy
k    factories
k    indexing
k    models
. k  rails_helper.rb
k    shrine
k    simple_form_enhancements
. k  spec_helper.rb
k    test_support
. k  validators
jrochkind-chf spec (master ?) $ cd dummy/
jrochkind-chf dummy (master ?) $ du -sh *
. k  rakefile
k    app
k    bin
k    config
. k  config.ru
k    db
m    log
. k  package.json
k    public
. k  tmp

doh! in this particular gem, i have a dummy rails app, and it has mb of logs (because i haven’t bothered trimming them in a while) that are winding up included in the gem release package distributed to rubygems and downloaded by all consumers! even if they were small, i don’t want these in the released gem package at all! that’s not good! it only turns into mb instead of mb because log files are so compressible and there is compression involved in assembling the rubygems package. but i have no idea how much space it’s actually taking up on consuming applications’ machines. this is very irresponsible!

what controls what files are included in the gem package? your .gemspec file, of course. the line s.files = is an array of every file to include in the gem package. well, plus s.test_files is another array of more files that aren’t supposed to be necessary to run the gem, but are needed to test it. (rubygems was set up to allow automated *testing* of gems after download, which is why test files are included in the release package. i am not sure how useful this is, or who if anyone does it; although i believe that some linux distro packagers try to make use of it, for better or worse.)

but nobody wants to list every file in your gem individually, manually editing the array every time you add, remove, or move one. fortunately, gemspec files are executable ruby code, so you can use ruby as a shortcut. i have seen two main ways of doing this, with different “gem skeleton generators” taking one of two approaches.

sometimes a shell-out to git is used — the idea is that everything you have checked into your git should be in the gem release package, no more and no less. for instance, one of my gems has this in it, not sure where it came from or who/what generated it.
spec.files = `git ls-files -z`.split("\x ").reject do |f|
  f.match(%r{^(test|spec|features)/})
end

in that case, it wouldn’t have included anything in ./spec at all, so this obviously isn’t actually the gem we were looking at before. but in this case, in addition to using ruby logic to manipulate the results, nothing excluded by your .gitignore file will end up included in your gem package. great! in the kithe gem we were looking at before, those log files were in the .gitignore (they weren’t in my repo!), so if i had been using that git-shellout technique, they wouldn’t have been included in the gem release already. but… i wasn’t. instead this gem has a gemspec that looks like:

s.test_files = Dir["spec/**/*"]

just include every single file inside ./spec in the test_files list. oops. then i get all those log files!

one way to fix

i don’t really know which is to be preferred of the git-shellout approach vs the dir-glob approach. i suspect it is the subject of historical religious wars in rubydom, from when there were still more people around to argue about such things. any opinions? or another approach? without being in the mood to restructure this gemspec in any way, i just did the simplest thing to keep those log files out…

s.test_files = Dir["spec/**/*"].delete_if { |a| a =~ %r{/dummy/log/} }

build the package without releasing with the handy bundler-supplied rake build task… and my gem release package size goes from mb to k. (which actually kind of sounds like a minimum block size or something, right?) phew! that’s a big difference! sorry for anyone using previous versions and winding up downloading all that cruft! (actually this particular gem is mostly a proof of concept at this point and i don’t think anyone else is using it.)

check your gem sizes!

i’d be willing to bet there are lots of released gems with heavily bloated release packages like this. this isn’t the first one where i’ve realized it was my fault. because who pays attention to gem sizes anyway? apparently not many!
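to make concrete what that delete_if glob filter actually excludes, here's a toy demonstration against a hard-coded array standing in for what the Dir glob would return (the paths are made up for illustration):

```ruby
# Stand-in for Dir["spec/**/*"] output in a gem with a dummy Rails app.
candidate_files = [
  "spec/models/asset_spec.rb",
  "spec/dummy/app/models/user.rb",
  "spec/dummy/log/development.log",
  "spec/dummy/log/test.log"
]

# The same filter as in the gemspec fix: drop anything under dummy/log.
kept = candidate_files.delete_if { |a| a =~ %r{/dummy/log/} }
```

the real spec files and the dummy app's code survive; only the log files are dropped from the release package.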
but rubygems does list them, so it’s pretty easy to see. are your gem release packages multiple megs when there’s no good reason for them to be? do they get bigger every release by far more than the bytes of lines of code you think were added? at some point in gem history, was there a big jump from hundreds of kb to multiple mb, when nothing particularly actually happened to the gem logic to lead to that? all of these are hints that you might be including things you didn’t mean to include, possibly things that grow each release.

you don’t need to have a dummy rails app in your repo to accidentally do this (i accidentally did it once with a gem that had nothing to do with rails). there could be other kinds of log files. or test coverage or performance metric files, or any other artifacts of your build or your development, especially ones that grow over time — that aren’t actually meant to be, or needed as, part of the gem release package!

it’s good to sanity check your gem release packages now and then. in most cases, your gem release package should be hundreds of kb at most, not mbs. help keep your users’ installs and builds faster and slimmer!

every time you decide to solve a problem with code…

every time you decide to solve a problem with code, you are committing part of your future capacity to maintaining and operating that code. software is never done. software is drowning the world.

— james abley

bibliographic wilderness is a blog by jonathan rochkind about digital library services, ruby, and web development.
DSHR's Blog

I'm David Rosenthal, and this is a place to discuss the work I'm doing in digital preservation.
Tuesday, August

The Economist on Cryptocurrencies

The Economist edition dated August th has a leader (Unstablecoins) and two articles in the finance section (The disaster scenario and Here comes the sheriff). The leader argues that:

"Regulators must act quickly to subject stablecoins to bank-like rules for transparency, liquidity and capital. Those failing to comply should be cut off from the financial system, to stop people drifting into an unregulated crypto-ecosystem. ... Policymakers are right to sound the alarm, but if stablecoins continue to grow, governments will need to move faster to contain the risks."

But even The Economist gets taken in by the typical cryptocurrency hype, balancing current actual risks against future possible benefits:

"Yet it is possible that regulated private-sector stablecoins will eventually bring benefits, such as making cross-border payments easier, or allowing self-executing 'smart contracts'. Regulators should allow experiments whose goal is not merely to evade financial rules."

They don't seem to understand that, just as the whole point of Uber is to evade the rules for taxis, the whole point of cryptocurrency is to "evade financial rules". Below the fold I comment on the two articles.

Posted by David. Labels: Bitcoin

Tuesday, August

Stablecoins Part

I wrote Stablecoins about Tether and its "magic money pump" seven months ago. A lot has happened and a lot has been written about it since, and some of it explores aspects I didn't understand at the time, so below the fold, at some length, I try to catch up.

Posted by David. Labels: Bitcoin

Thursday, July

Economics of Evil Revisited

Eight years ago I wrote Economics of Evil about the death of Google Reader and Google's habit of leaving its customers (users) in the lurch. In the comments to the post I started keeping track of accessions to le petit musée des projets Google abandonnés.
So far I've recorded at least  dead products, an average of more than  a year. Two years ago Ron Amadeo wrote about the problem this causes in Google's Constant Product Shutdowns Are Damaging Its Brand:

"We are  days into the year, and so far, Google is racking up an unprecedented body count. If we just take the official shutdown dates that have already occurred, a Google-branded product, feature, or service has died, on average, about every nine days."

Below the fold, some commentary on Amadeo's latest report from the killing fields, in which he detects a little remorse.

Posted by David. Labels: cloud economics

Tuesday, July

Yet Another DNA Storage Technique

An Alternative Approach to Nucleic Acid Memory by George D. Dickinson et al. from Boise State University describes a fundamentally different way to store and retrieve data using DNA strands as the medium. Will Hughes et al. have an accessible summary in DNA 'Lite-Brite' Is a Promising Way to Archive Data for Decades or Longer:

"We and our colleagues have developed a way to store data using pegs and pegboards made out of DNA and retrieving the data with a microscope, a molecular version of the Lite-Brite toy. Our prototype stores information in patterns using DNA strands spaced about  nanometers apart."

Below the fold I look at the details of the technique they call digital Nucleic Acid Memory (dNAM).

Posted by David. Labels: storage media

Tuesday, July

Alternatives to Proof-of-Work

The designers of peer-to-peer consensus protocols such as those underlying cryptocurrencies face three distinct problems. They need to prevent being swamped by a multitude of Sybil peers under the control of an attacker; this requires making peer participation expensive, such as by Proof-of-Work (PoW), which is problematic because it has a catastrophic carbon footprint. They need to prevent a rational majority of peers from conspiring to obtain inappropriate benefits.
This is thought to be achieved by decentralization, that is, a network of so many peers acting independently that a conspiracy among a majority of them is highly improbable. Decentralization is problematic because in practice all successful cryptocurrencies are effectively centralized. And they need to prevent a rational minority of peers from conspiring to obtain inappropriate benefits. This requirement is called incentive compatibility, and it is problematic because it requires very careful design of the protocol. In the rather long post below the fold I focus on some potential alternatives to PoW, inspired by Jeremiah Wagstaff's Subspace: A Solution to the Farmer's Dilemma, the white paper for a new blockchain technology.

Posted by David. Labels: Bitcoin, P2P

Thursday, July

A Modest Proposal About Ransomware

On the evening of July nd the REvil ransomware gang exploited a zero-day vulnerability to launch a supply chain attack on customers of Kaseya's Virtual System Administrator (VSA) product. The timing was perfect, with most system administrators off for the July th long weekend. By the th, Alex Marquardt reported that Kaseya Says Up to , Businesses Compromised in Massive Ransomware Attack. REvil, which had previously extorted $ m from meat giant JBS, announced that for the low, low price of only $ m they would provide everyone with a decryptor. The US government's pathetic response is to tell the intelligence agencies to investigate and to beg Putin to crack down on the ransomware gangs. Good luck with that! It isn't his problem, because the gangs write their software to avoid encrypting systems that have default languages from the former USSR. I've written before (here, here, here) about the importance of disrupting the cryptocurrency payment channel that enables ransomware, but it looks like the ransomware crisis has to get a great deal worse before effective action is taken.
Below the fold I lay out a modest proposal that could motivate actions that would greatly reduce the risk.

Posted by David. Labels: malware, security

Tuesday, July

Intel Did a Boeing

Two years ago, Wolf Richter noted that Boeing's failure to invest in a successor airframe was a major cause of the MAX debacle:

"From  through Q , Boeing has blown a mind-boggling $ billion on share buybacks"

I added up the opportunity costs:

"Suppose instead of buying back stock, Boeing had invested in its future. Even assuming an entirely new replacement for the  series was as expensive as the  (the first of a new airframe technology), they could have delivered the first replacement ($ B), and be almost % through developing another entirely new airframe ($ B/$ B)."

But executive bonuses and stock options mattered more than the future of the company's cash-cow product. Below the fold I look at how Intel made the same mistake as Boeing, and at early signs that they have figured out what went wrong.

Posted by David. Labels: stock buybacks

Blog rules: Posts and comments are copyright of their respective authors who, by posting or commenting, license their work under a Creative Commons Attribution-Share Alike United States License. Off-topic or unsuitable comments will be deleted.
LOCKSS system has permission to collect, preserve, and serve this Archival Unit.

The Economic Limits of Bitcoin and the Blockchain | NBER
The Economic Limits of Bitcoin and the Blockchain

Eric Budish

Working Paper. DOI . /w . Issue date: June.

Abstract: The amount of computational power devoted to anonymous, decentralized blockchains such as Bitcoin's must simultaneously satisfy two conditions in equilibrium: (1) a zero-profit condition among miners, who engage in a rent-seeking competition for the prize associated with adding the next block to the chain; and (2) an incentive compatibility condition on the system's vulnerability to a "majority attack", namely that the computational costs of such an attack must exceed the benefits. Together, these two equations imply that (3) the recurring, "flow", payments to miners for running the blockchain must be large relative to the one-off, "stock", benefits of attacking it. This is very expensive!
The constraint is softer (i.e., stock versus stock) if both (i) the mining technology used to run the blockchain is both scarce and non-repurposable, and (ii) any majority attack is a "sabotage" in that it causes a collapse in the economic value of the blockchain; however, reliance on non-repurposable technology for security and vulnerability to sabotage each raise their own concerns, and point to specific collapse scenarios. In particular, the model suggests that Bitcoin would be majority attacked if it became sufficiently economically important, e.g., if it became a "store of value" akin to gold, which suggests that there are intrinsic economic limits to how economically important it can become in the first place.

Acknowledgements and Disclosures

Project start date: Feb. First public draft: May. For the record, the first large-stakes majority attack of a well-known cryptocurrency, the $ m attack on Bitcoin Gold, occurred a few weeks later in mid-May (Wilmoth; Wong). Acknowledgments: thanks are due to Susan Athey, Vitalik Buterin, Alex Frankel, Joshua Gans, Austan Goolsbee, Zhiguo He, Joi Ito, Steve Kaplan, Anil Kashyap, Judd Kessler, Randall Kroszner, Robin Lee, Jacob Leshno, Neale Mahoney, Sendhil Mullainathan, David Parkes, John Shim, Scott Stornetta, Aviv Zohar, and seminar participants at Chicago Booth and the MIT Digital Currency Initiative. Natalia Drozdoff and Matthew O'Keefe have provided excellent research assistance. Disclosure: I do not have any financial interests in blockchain companies or cryptocurrencies, either long or short. The views expressed herein are those of the author and do not necessarily reflect the views of the National Bureau of Economic Research.
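The abstract's argument can be sketched in two inequalities. The notation here is mine, not the paper's, and it ignores discounting and block timing: let $c$ be the per-block cost of one unit of mining power, $N^{*}$ the equilibrium number of units deployed, $p_{\text{block}}$ the per-block payment to miners (block reward plus fees), $A$ the number of blocks over which an attacker must sustain a majority, and $V_{\text{attack}}$ the one-off benefit of a majority attack.

```latex
% (1) Free entry / zero profit: the per-block payment to miners equals
%     the aggregate cost of the mining power securing the chain.
p_{\text{block}} = c \, N^{*}

% (2) Incentive compatibility: renting a majority of that power for the
%     A blocks the attack must be sustained has to cost more than the
%     one-off benefit of the attack.
A \, c \, N^{*} > V_{\text{attack}}

% (3) Substituting (1) into (2): the recurring "flow" payment must be
%     large relative to the "stock" prize the chain protects.
p_{\text{block}} > V_{\text{attack}} / A
```

Since $A$ is small in practice (an attack need only be sustained for hours, not years), the per-block flow payment, which users ultimately fund, must remain comparable to the largest one-off prize an attacker could capture. That is the sense in which the abstract calls running a secure blockchain "very expensive".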
iOS Zero-Day Let SolarWinds Hackers Compromise Fully Updated iPhones | Ars Technica

Flaw was exploited when government officials clicked on links in LinkedIn messages.

Dan Goodin - Jul

The Russian state hackers who orchestrated the SolarWinds supply chain attack last year exploited an iOS zero-day as part of a separate malicious email campaign aimed at stealing web authentication credentials from Western European governments, according to Google and Microsoft. In a post Google published on Wednesday, researchers Maddie Stone and Clement Lecigne said a "likely Russian government-backed actor" exploited the then-unknown vulnerability by sending messages to government officials over LinkedIn. Attacks targeting CVE- - , as the zero-day is tracked, redirected users to domains that installed malicious payloads on fully updated iPhones.
The attacks coincided with a campaign by the same hackers who delivered malware to Windows users, the researchers said. The campaign closely tracks one Microsoft disclosed in May. In that instance, Microsoft said that Nobelium, the name the company uses to identify the hackers behind the SolarWinds supply chain attack, first managed to compromise an account belonging to USAID, a US government agency that administers civilian foreign aid and development assistance. With control of the agency's account for online marketing company Constant Contact, the hackers could send emails that appeared to use addresses known to belong to the US agency.

The federal government has attributed last year's supply chain attack to hackers working for Russia's Foreign Intelligence Service (abbreviated as SVR). For more than a decade, the SVR has conducted malware campaigns targeting governments, political think tanks, and other organizations in countries like Germany, Uzbekistan, South Korea, and the US. Targets have included the US State Department and the White House. Other names used to identify the group include APT, the Dukes, and Cozy Bear.

In an email, Shane Huntley, the head of Google's Threat Analysis Group, confirmed the connection between the attacks involving USAID and the iOS zero-day, which resided in the WebKit browser engine. "These are two different campaigns, but based on our visibility, we consider the actors behind the WebKit 0-day and the USAID campaign to be the same group of actors," Huntley wrote. "It is important to note that everyone draws actor boundaries differently. In this particular case, we are aligned with the US and UK governments' assessment of APT."

Forget the Sandbox

Throughout the campaign, Microsoft said, Nobelium experimented with multiple attack variations.
In one wave, a Nobelium-controlled web server profiled devices that visited it to determine what OS and hardware the devices ran on. If the targeted device was an iPhone or iPad, a server used an exploit for CVE- - , which allowed hackers to deliver a universal cross-site scripting attack. Apple patched the zero-day in late March. In Wednesday's post, Stone and Lecigne wrote:

"After several validation checks to ensure the device being exploited was a real device, the final payload would be served to exploit CVE- - . This exploit would turn off Same-Origin-Policy protections in order to collect authentication cookies from several popular websites, including Google, Microsoft, LinkedIn, Facebook, and Yahoo, and send them via WebSocket to an attacker-controlled IP. The victim would need to have a session open on these websites from Safari for cookies to be successfully exfiltrated. There was no sandbox escape or implant delivered via this exploit. The exploit targeted iOS versions . through . ."

This type of attack, described by Amy Burnett in Forget the Sandbox Escape: Abusing Browsers from Code Execution, is mitigated in browsers with Site Isolation enabled, such as Chrome or Firefox.

It's Raining Zero-Days

The iOS attacks are part of a recent explosion in the use of zero-days. In the first half of this year, Google's Project Zero vulnerability research group has recorded  zero-day exploits used in attacks, more than the total number from the whole of the previous year. The growth has several causes, including better detection by defenders and better software defenses that require multiple exploits to break through. The other big driver is the increased supply of zero-days from private companies selling exploits. "0-day capabilities used to be only the tools of select nation-states who had the technical expertise to find 0-day vulnerabilities, develop them into exploits, and then strategically operationalize their use," the Google researchers wrote.
"In the mid-to-late  s, more private companies have joined the marketplace selling these 0-day capabilities. No longer do groups need to have the technical expertise; now they just need resources."

The iOS vulnerability was one of four in-the-wild zero-days Google detailed on Wednesday. The other three were:

- CVE- -  and CVE- -  in Chrome
- CVE- -  in Internet Explorer

The four exploits were used in three different campaigns. Based on their analysis, the researchers assess that three of the exploits were developed by the same commercial surveillance company, which sold them to two different government-backed actors. The researchers didn't identify the surveillance company, the governments, or the specific three zero-days they were referring to. Representatives from Apple didn't immediately respond to a request for comment.

Dan Goodin is the Security Editor at Ars Technica, which he joined after working for The Register, the Associated Press, Bloomberg News, and other publications.
Library User Experience Community

Practical design thinking for libraries.

- A Library System for the Future: "This is a what-if story." Kelly Dagan, Feb
- Alexa, Get Me the Articles (Voice Interfaces in Academia): "Thinking about interfaces has led me down a path of all sorts of exciting/mildly terrifying ways of interacting with our devices." Kelly Dagan, Feb
- Accessibility Information on Library Websites: "An important part of making your library accessible is advertising that your library's spaces and services are accessible and inclusive." Carli Spina, Nov
- Is Autocomplete on Your Library Home Page?: "Literature and some testing I've done this semester convinces me that autocomplete fundamentally improves the user experience." Jaci Paige Wilkinson, Aug
- Writing for the User Experience with Rebecca Blakiston: "Rebecca Blakiston, author of books on usability testing and writing with clarity and a Library Journal Mover and Shaker, talks shop." Michael Schofield, Aug
- Write for LibUX: "We should aspire to push the #libweb forward by creating content that sets the bar for the conversation way up there, and I would love your help." Michael Schofield, Apr
- First Look at Primo's New User Interface: "Impressions of some key innovations of Primo's new UI as well as challenges involved making customizations."
Ron Gilmour, Feb
- Today, I Learned About the Accessibility Tree: "If you didn't think your grip on web accessibility could get any looser." Michael Schofield, Feb
- What Users Expect: "We thought it would be fun to emulate some of our favorite sites in a lightweight concept discovery layer we call Libre." Trey Gordner, Jan
- Critical Librarianship in the Design of Libraries: "Design decisions position libraries to more deliberately influence the user experience toward advocacy, such as communicating moral positions." Michael Schofield, Jan
- The Non-Reader Persona. Michael Schofield, Dec
- IU Libraries' Redesign and the Descending Hero Search. Michael Schofield, Aug
- Accessible, Sort Of. Michael Schofield, Jul
- Create Once, Publish Everywhere. Michael Schofield, Jul
- Web Education Must Go Further Than a Conference Budget. Michael Schofield, May
- Blur the Line Between the Website and the Building. Michael Schofield, Nov
- Say "OK Library". Michael Schofield, Oct
- Unambitious and Incapable Men in Librarianship. Michael Schofield, Oct
- On the User Experience of Ebooks: "So, when it comes to ebooks I am in the minority: I prefer them to the real thing." Michael Schofield, Oct

DSHR's Blog: Lack of Anti-Trust Enforcement

Thursday, August

Lack of Anti-Trust Enforcement

The accelerating negative effects that have accumulated since the collapse of anti-trust enforcement in the US have been a prominent theme on this blog.
This search currently returns posts stretching back to . Recently, perhaps started by Lina M. Khan's masterful January Yale Law Journal article Amazon's Antitrust Paradox, a consensus has been gradually emerging as to these negative effects. One problem for this consensus is that "real economists" don't believe the real world; they only believe mathematical models that produce approximations to the real world. Now, Yves Smith's Fed Economists Finger Monopoly Concentration as Underlying Driver of Neoliberal Economic Restructuring; Barry Lynn in Harper's and Fortnite Lawsuit Put Hot Light on Tech Monopoly Power covers three developments in the emerging anti-monopoly consensus:

- Apple and Google ganging up on Epic Games.
- Lina M. Khan's ex-boss Barry Lynn's The Big Tech Extortion Racket: How Google, Amazon, and Facebook Control Our Lives.
- Market Power, Inequality, and Financial Instability by Fed economists Isabel Cairó and Jae Sim.

The first two will have to wait for future posts, but the last of these may start to convince "real economists" because, as Yves Smith writes:

"They developed a model to simulate the impact of companies' rising market power, in conjunction with the assumption that the owners of capital liked to hold financial assets (here, bonds) as a sign of social status. They wanted to see if it would explain six developments over the last forty years. ... And it did!"

Follow me below the fold for the details.

Yves Smith lists the six developments:

- Real wage growth stagnating and lagging productivity growth
- Pre-tax corporate profits rising rapidly relative to GDP
- Increasing income inequality
- Increasing wealth inequality
- Higher household leverage
- Increased financial instability

The fit between Cairó & Sim's model and the real world was impressive:

"For a model to have pretty decent fit for so many variables is not trivial. The first are what the model coughed up, the second, in parenthesis, are the real world data: ..."
the authors did quite a few sensitivity tests and also modeled some alternative explanations, as well as showing panels that compared model outputs over time to real economy outcomes. they also recommend wealth redistribution as a way to dampen financial crises, or even just taxing dividends at a healthy level. here is cairó & sim's abstract: over the last four decades, the u.s. economy has experienced a few secular trends, each of which may be considered undesirable in some aspects: declining labor share; rising profit share; rising income and wealth inequalities; and rising household sector leverage, and associated financial instability. we develop a real business cycle model and show that the rise of market power of the firms in both product and labor markets over the last four decades can generate all of these secular trends. we derive macroprudential policy implications for financial stability. their model contains two kinds of agents, owners of capital who lend, and workers who borrow: the first type of agents, named agents k, whose population share is calibrated at percent, own monopolistically competitive firms and accumulate real (capital) and financial assets (bonds). the second type of agents, named agents w, whose population share is calibrated at percent, work for labor earnings and do not participate in capital market, but issue private bonds for consumption smoothing. the two types of agents interact in two markets. in the labor market, they bargain over the wage. in the credit market, agents k play the role of creditors and agents w the role of borrowers. we assign so-called spirit-of-capitalism preferences to agent k such that they earn direct utility from holding financial wealth, which is assumed to represent the social status. what is going on here is that the agents k want to lend as much money as possible, so as to receive interest.
in order for them to do this, it is necessary for the agents w to borrow as much money as possible: we show that such preferences are key in creating a direct link between income inequality and credit accumulation, as they control the marginal propensity to save (mps) out of permanent income shocks. ... we posit that the market power of the firms owned by agent k in both product market and labor market (in the form of bargaining power) steadily increases over time for three decades ( – ) and study the transitional dynamics of the model economy. agents k in the model want to lend money, rather than directly own productive capital resources. the authors tried changing this, so that agents k owned real capital assets, but this produced much less realistic results: since the investor earns strictly positive marginal utility from holding capital, capital accumulation is enhanced far beyond the level in the baseline, increasing the marginal productivity of labor, raising labor demand and lowering the unemployment rate percentage points in years, which is clearly counterfactual. furthermore, the investment to output ratio increases percent over this period, which contrasts with the percent decline both in the data and in our baseline model. finally, the greater incentive to accumulate physical capital generates far greater income for wealthy households, creating the rise of credit-to-gdp ratio that greatly overshoots the level observed in the data. they also investigated changing the motivations of the agents w: another popular narrative behind the rise of credit accumulation is the "keeping-up-with-the-joneses" preferences for borrowers. this narrative argues that it was the borrowers' desire to catch up with the lifestyle of the wealthy households, even when their income stagnated, that explains the rise of the household sector leverage ratio.
to test this narrative, we modify the preferences of agent w such that the reference point in their external habit is agent k's consumption level, which is larger than agent w's consumption level by construction, as agents w are the poorest agents in the model. we find that if keeping-up-with-the-joneses preferences were the main driver of the credit expansion, credit-to-gdp ratio rises percentage points in years, a substantially higher increase than the one observed in the baseline and also larger than in the data. however, such overshooting helps match the rise in the probability of financial crises. for this reason, we cannot preclude the possibility that the demand factor known as "keeping-up-with-the-joneses" is one of the factors behind the rises of household leverage and financial instability. if agents w are reluctant to borrow, the returns from the lending by agents k will be lower. so it is in the interests of agents k to (a) increase the prices of essential goods, and/or (b) increase the desire of agents w to purchase inessential goods. this is where advertising comes in, to enhance the "keeping-up-with-the-joneses" effect. fortunately, for most americans the "joneses" are not the top % of agents w, whom they only ever see on tv, but their slightly better-off neighbors. the altered model probably exaggerates the effect significantly. so the six bad effects are caused by the increasing market power of agents k, and thus their ability to persuade agents w to borrow from them. what to do to reduce them? we introduce a redistribution policy to our baseline model that consists of a dividend income tax for agent k and social security spending for agent w. this taxation is non-distortionary in our economy, as the tax rate does not interfere with production decisions.
our results show that a policy of gradually increasing the tax rate from zero to percent over the last years might have been effective in preventing almost percent of buildup in income inequality, credit growth and the increase in the endogenous probability of financial crisis. since the taxation leaves production efficiency intact, the secular decline in labor share is left intact while the increase in income inequality is substantially subdued. this suggests that carefully designed redistribution policies can be quite effective macroprudential policy tools and more research is warranted in this area. "macroprudential policy" in this context has meant trying to avoid the regular financial crises that mean the taxpayer bails out the banks, and then endures years of "austerity" allegedly to repair the nation's finances so they will be ready for the next crisis. typically this has involved showering the banks with free money in return for a promise not to indulge in such risky behavior again until enough time has passed for people to forget where their money went. this has worked less well than one might have hoped. the federal reserve authors' radical, socialist suggestion of imposing a tax on agents k and spending the proceeds on stuff that agents w need, like health care, low-cost housing, public transit, clean air and water, unemployment insurance, public education, less murderous police and so on is so not going to happen. but if it did, the authors suggest it just might work: the taxation does not affect the wealth of the nation, it simply breaks the link between the decline of the labor income share and the increase in income inequality. it does so by redistributing income from agents k to agents w with no significant changes in product and labor market equilibrium. this experiment has important implications for macroprudential policies.
since the gfc, most of the focus of macroprudential policies has been on building the resilience of financial intermediaries by bolstering their capital positions, restricting their risk exposures, and restraining excessive interconnectedness among them. these policies are useful in maintaining financial stability. however, these policies might not address a much more fundamental issue: why is there so much income "to be intermediated" to begin with? in our framework, the root cause of financial instability is the income inequality driven by changes in market structure and institutional changes that reward the groups at the top of the income distribution. our experiment suggests that if an important goal for public policy is to limit the probability of a tail event, such as a financial crisis, a powerful macroprudential policy may be a redistribution policy that moderates the rise in income inequality. in other words, financial crises are caused by agents k having so much money sloshing around the financial system compared to the supply of productive investments (restricted by the fact that agents w can't afford to borrow to purchase the products) that the excess money has to be placed in investments so risky that regular crises are guaranteed. the authors' tax diverts the excess to agents w, reducing the risk level because they spend it and thus increase the supply of productive, less risky investments. neat, huh? notice that the authors do not propose anti-trust measures; their model continues to increase the market power of agents k. but their redistributive tax ameliorates some of the bad effects of monopolization. matt stoller noticed the paper and wrote monopolization as a challenge for both parties: basically, people who produce things for a living don't make as much money, and people who serve as monopoly middlemen make more money, and then the monopolists lend to the producers, creating a society built on asymmetric power relationships and unstable debt.
this paper joins a host of other research coming out in recent years on the perils of concentration. for instance, on the labor front, one study showed that concentration costs the average american household $ , a year in lost purchasing power. another showed that since , markups—how much companies charge for products beyond their production costs—have tripled from percent to percent due to growing consolidation. another revealed that median annual compensation—now only $ , —would be over $ , higher if employers were less concentrated. concentration doesn’t just hit wages. monopolization hits innovation, small business formation, and regional inequality. hospitals in concentrated markets have higher mortality rates, and concentration of lab capacity in the hands of labcorp and quest diagnostics is likely even behind the covid testing shortage. the problems induced by monopolization are virtually endless, because fundamentally corporate monopolies are a mechanism to strip people of power and liberty, and people without power and liberty do not flourish. stoller seems not to have noticed that cairó & sim's model is not about reducing monopolization, but about ameliorating its bad effects. but he is right about the awkward politics of reducing it: monopolization doesn’t fit neatly into any partisan box, because structuring markets is not about taxing and spending; it is about what happens before the tax system starts dealing with profits and revenue. it is about avoiding the need to spend on social welfare by preventing the impoverishment in the first place. since the s, american policymakers in both parties have believed in a philosophy in which markets are natural forums where buyers and sellers congregate, ideally free of politics. they stopped paying attention to the details of markets, because doing so was irrelevant to the goal of leaving market actors as remote as possible from the meddling hand of government. 
such a view is profoundly at odds with the bipartisan american anti-monopoly tradition, a tradition in which most merchants and workers from the th century onward understood markets and chartered corporations as creatures of public policy organized for the convenience and liberty of the many. this shift fifty years ago had profound consequences. leaders in both parties have come to believe that larger corporations are generally a good thing, as they reflect more efficient operations instead of reflecting the rise of market power enabled by policy choices. posted by david. at : am labels: anti-trust comment: david. said... paul krugman explains the contrast between pessimism about the economy and soaring tech stocks in this interesting thread: "and of course that's a good description of the tech giants whose stocks have soared most. so a good guess is that at least part of what's going on is that long-term pessimism has reduced interest rates, and this has *increased* the value of stocks issued by monopolists" september , at : am
dshr's blog: venture capital isn't working thursday, april , venture capital isn't working i was an early employee at three vc-funded startups from the s and s. all of them ipo-ed and two (sun microsystems and nvidia) made it into the list of the top us companies by market capitalization. so i'm in a good position to appreciate jeffrey funk's must-read the crisis of venture capital: fixing america's broken start-up system. funk starts: despite all the attention and investment that silicon valley's recent start-ups have received, they have done little but lose money: uber, lyft, wework, pinterest, and snapchat have consistently failed to turn profits, with uber's cumulative losses exceeding $ billion. perhaps even more notorious are bankrupt and discredited start-ups such as theranos, luckin coffee, and wirecard, which were plagued with management failures, technical problems, or even outright fraud that auditors failed to notice. what's going on? there is no immediately obvious reason why this generation of start-ups should be so financially disastrous. after all, amazon incurred losses for many years, but eventually grew to become one of the most profitable companies in the world, even as enron and worldcom were mired in accounting scandals. so why can't today's start-ups also succeed? are they exceptions, or part of a larger, more systemic problem? below the fold, some reflections on funk's insightful analysis of the "larger, more systemic problem". funk introduces his argument thus: in this article, i first discuss the abundant evidence for low returns on vc investments in the contemporary market. second, i summarize the performance of start-ups founded twenty to fifty years ago, in an era when most start-ups quickly became profitable, and the most successful ones rapidly achieved top- market capitalization.
third, i contrast these earlier, more successful start-ups with silicon valley's current set of "unicorns," the most successful of today's start-ups. fourth, i discuss why today's start-ups are doing worse than those of previous generations and explore the reasons why technological innovation has slowed in recent years. fifth, i offer some brief proposals about what can be done to fix our broken start-up system. systemic problems will require systemic solutions, and thus major changes are needed not just on the part of venture capitalists but also in our universities and business schools. is there a problem? funk's argument that there is a problem can be summarized thus: the returns on vc investments over the last two decades haven't matched the golden years of the preceding two decades. in the golden years startups made profits. now they don't. vc returns are sub-par. this graph from a morgan stanley report shows that during the s the returns from vc investments greatly exceeded the returns from public equity. but since then the median vc return has been below that of public equity. this doesn't reward investors for the much higher risk of vc investments. the weighted average vc return is slightly above that of public equity because, as funk explains: a small percentage of investments does provide high returns, and these high returns for top-performing vc funds persist over subsequent quarters. although this data does not demonstrate that select vcs consistently earn solid profits over decades, it does suggest that these vcs are achieving good returns. it was always true that vc quality varied greatly. i discussed the advantages of working with great vcs in kai li's fast keynote: work with the best vc funds. the difference between the best and the merely good in vcs is at least as big as the difference between the best and the merely good programmers. at nvidia we had two of the very best, sutter hill and sequoia.
the result is that, like kai but unlike many entrepreneurs, we think vcs are enormously helpful. one thing that was striking about working with sutter hill was how many entrepreneurs did a series of companies with them, showing that both sides had positive experiences. startups used to make profits. before the dot-com boom, there used to be a rule that in order to ipo a company, it had to be making profits. this was a good rule, since it provided at least some basis for setting the stock price at the ipo. funk writes: there was a time when venture capital generated big returns for investors, employees, and customers alike, both because more start-ups were profitable at an earlier stage and because some start-ups achieved high market capitalization relatively quickly. profits are an important indicator of economic and technological growth, because they signal that a company is providing more value to its customers than the costs it is incurring. a number of start-ups founded in the late twentieth century have had an enormous impact on the global economy, quickly reaching both profitability and top- market capitalization. among these are the so-called faanmg (facebook, amazon, apple, microsoft, netflix, and google), which represented more than percent of the s&p's total market capitalization and more than percent of the increase in the s&p's total value at one point—in other words, the most valuable and fastest-growing companies in america in recent years. funk's table shows the years to profitability and years to top- market capitalization for companies founded between and . i'm a bit skeptical of the details because, for example, the table says it took sun microsystems years to turn a profit. i'm pretty sure sun was profitable at its ipo, years from its founding. note funk's stress on achieving profitability quickly. an important silicon valley philosophy used to be: success is great! failure is ok. not doing either is a big problem.
the reason lies in the silicon valley mantra of "fail fast". most startups fail, and the costs of those failures detract from the returns of the successes. minimizing the cost of failure, and diverting the resources to trying something different, is important. unicorns, not so much. what are these unicorns? wikipedia tells us: in business, a unicorn is a privately held startup company valued at over $ billion. the term was coined in by venture capitalist aileen lee, choosing the mythical animal to represent the statistical rarity of such successful ventures. back in unicorns were indeed rare, but as wikipedia goes on to point out: according to cb insights, there are over unicorns as of october . unicorns are breeding like rabbits, but the picture funk paints is depressing: in the contemporary start-up economy, "unicorns" are purportedly "disrupting" almost every industry from transportation to real estate, with new business software, mobile apps, consumer hardware, internet services, biotech, and ai products and services. but the actual performance of these unicorns both before and after the vc exit stage contrasts sharply with the financial successes of the previous generation of start-ups, and suggests that they are dramatically overvalued. figure shows the profitability distribution of seventy-three unicorns and ex-unicorns that were founded after and have released net income and revenue figures for and/or . in , only six of the seventy-three unicorns included in figure were profitable, while for , seven of seventy were. hey, they're startups, right? they just need time to become profitable. funk debunks that idea too: furthermore, there seems to be little reason to believe that these unprofitable unicorn start-ups will ever be able to grow out of their losses, as can be seen in the ratio of losses to revenues in versus the founding year. aside from a tiny number of statistical outliers ...
there seems to be little relationship between the time since a start-up’s founding and its ratio of losses to revenues. in other words, age is not correlated with profits for this cohort. funk goes on to note that startup profitability once public has declined dramatically, and appears inversely related to ipo valuation: when compared with profitability data from decades past, recent start-ups look even worse than already noted. about percent of the unicorn start-ups included in figure were profitable, much lower than the percent of start-ups founded in the s that were profitable, according to jay ritter’s analysis, and also below the overall percentage for start-ups today ( percent). thus, not only has profitability dramatically dropped over the last forty years among those start-ups that went public, but today’s most valuable start-ups—those valued at $ billion or more before ipo—are in fact less profitable than start-ups that did not reach such lofty pre-ipo valuations. funk uses electric vehicles and biotech to illustrate startup over-valuation: for instance, driven by easy money and the rapid rise of tesla’s stock, a group of electric vehicle and battery suppliers—canoo, fisker automotive, hyliion, lordstown motors, nikola, and quantumscape—were valued, combined, at more than $ billion at their listing. likewise, dozens of biotech firms have also achieved billions of dollars in market capitalizations at their listings. in total, set a new record for the number of companies going public with little to no revenue, easily eclipsing the height of the dot-com boom of telecom companies in . the alphaville team have been maintaining a spreadsheet of the ev bubble. they determined that there was no way these companies' valuations could be justified given the size of the potential market. 
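the alphaville-style sanity check is simple arithmetic: divide a market capitalization by a plausible price-to-sales multiple to get the revenue the company must eventually earn, then compare that with the size of the addressable market. the numbers below are hypothetical illustrations, not figures from the spreadsheet:

```python
# Back-of-envelope valuation sanity check (all numbers hypothetical):
# what revenue must a company eventually earn to justify its market cap?
def implied_revenue(market_cap, ps_multiple):
    """Annual revenue implied by a market cap at a price-to-sales multiple."""
    return market_cap / ps_multiple

# e.g. a pre-revenue EV startup valued at $20bn, judged against a mature
# automaker's price-to-sales multiple of roughly 0.5x:
cap = 20e9
needed = implied_revenue(cap, 0.5)       # $40bn of annual sales required
# against a hypothetical $100bn addressable market, that is a 40% share —
# implausible for one of many me-too entrants
share_of_100bn_market = needed / 100e9
```

the check is crude, but when the combined valuations of several me-too entrants imply more revenue than the whole potential market can supply, no refinement will rescue them.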
jamie powell's april th revisiting the ev bubble spreadsheet celebrates their assessment: at pixel time the losses from their respective peaks from all of the electric vehicle, battery and charging companies on our list total some $ bn of market capitalisation, or a fall of just under per cent. ouch. what is causing the problem? this all looks like too much money chasing too few viable startups, and too many me-too startups chasing too few total available market dollars. funk starts his analysis of the causes of poor vc returns by pointing to the obvious one, one that applies to any successful investment strategy: its returns will be eroded over time by the influx of too much money: there are many reasons for both the lower profitability of start-ups and the lower returns for vc funds since the mid to late s. the most straightforward of these is simply diminishing returns: as the amount of vc investment in the start-up market has increased, a larger proportion of this funding has necessarily gone to weaker opportunities, and thus the average profitability of these investments has declined. but the effect of too much money is even more corrosive. i'm a big believer in bill joy's law of startups — "success is inversely proportional to the amount of money you have". too much money allows hard decisions to be put off. taking hard decisions promptly is key to "fail fast". nvidia was an example of this. the company was founded in one of silicon valley's recurring downturns. we were the only hardware company funded in that quarter. we got to working silicon on a $ . m a round. think about it — each of our vcs invested $ . m to start a company currently valued at $ , m. despite delivering ground-breaking performance, as i discussed in hardware i/o virtualization, that chip wasn't a success. but it did allow jen-hsun huang to raise another $ . m.
he down-sized the company by / and got to working silicon of the highly successful second chip with, iirc, six weeks' money left in the bank. funk then discusses a second major reason for poor performance: a more plausible explanation for the relative lack of start-up successes in recent years is that new start-ups tend to be acquired by large incumbents such as the faamng companies before they have a chance to achieve top market capitalization. for instance, youtube was founded in and instagram in ; some claim they would be valued at more than $ billion each (pre-lockdown estimates) if they were independent companies, but instead they were acquired by google and facebook, respectively. in this sense, they are typical of the recent trend: many start-ups founded since were subsequently acquired by faamng, including new social media companies such as github, linkedin, and whatsapp. likewise, a number of money-losing start-ups have been acquired in recent years, most notably deepmind and nest, which were bought by google. but he fails to note the cause of the rash of acquisitions, which is clearly the total lack of anti-trust enforcement in the us. as with too much money, the effects of this lack are more pernicious than at first appears. again, nvidia provides an example. just like the founders and vcs of sun, when we started nvidia we knew that the route to an ipo and major return on investment involved years and several generations of product. so, despite the limited funding and with the full support of our vcs, we took several critical months right at the start to design an architecture for a family of successive chip generations based on hardware i/o virtualization. by ensuring that the drivers in application software interacted only with virtual i/o resources, the architecture decoupled the hardware and software release cycles. the strong linkage between them at sun had been a consistent source of schedule slip. 
the architecture also structured the implementation of the chip as a set of modules communicating via an on-chip network. each module was small enough that a three-person team could design, simulate and verify it. the restricted interface to the on-chip network meant that, if the modules verified correctly, it was highly likely that the assembled chip would verify correctly. laying the foundations for a long-term product line in this way paid massive dividends. after the second chip, nvidia was able to deliver a new chip generation every months like clockwork. months after we started nvidia, we knew over other startups addressing the same market. only one, ati, survived the competition with nvidia's -month product cycle. vcs now would be hard to persuade that the return on the initial time and money to build a company that could ipo years later would be worth it when compared to lashing together a prototype and using it to sell the company to one of the faanmgs. in many cases, simply recruiting a team that could credibly promise to build the prototype would be enough for an "acqui-hire", where a faanmg buys a startup not for the product but for the people. building the foundation for a company that can ipo and make it into the top- market cap list is no longer worth the candle. but funk argues that the major cause of lower returns is this: overall, the most significant problem for today's start-ups is that there have been few if any new technologies to exploit. the internet, which was a breakthrough technology thirty years ago, has matured. as a result, many of today's start-up unicorns are comparatively low-tech, even with the advent of the smartphone—perhaps the biggest technological breakthrough of the twenty-first century—fourteen years ago. ridesharing and food delivery use the same vehicles, drivers, and roads as previous taxi and delivery services; the only major change is the replacement of dispatchers with smartphones.
online sales of juicers, furniture, mattresses, and exercise bikes may have been revolutionary twenty years ago, but they are sold in the same way that amazon currently sells almost everything. new business software operates from the cloud rather than onsite computers, but pre- start-ups such as amazon, google, and oracle were already pursuing cloud computing before most of the unicorns were founded. remember, sun's slogan in the mid s was "the network is the computer"! virtua fighter on nv in essence, funk argues that successful startups out-perform by being quicker than legacy companies to exploit the productivity gains made possible by a technological discontinuity. nvidia was an example of this, too. the technological discontinuity was the transition of the pc from the isa to the pci bus. it wasn't possible to do d games over the isa bus; it lacked the necessary bandwidth. the increased bandwidth of the first version of the pci bus made it just barely possible, as nvidia's first chip demonstrated by running sega arcade games at full frame rate. the advantages startups have against incumbents include: an experienced, high-quality team. initial teams at startups are usually recruited from colleagues, so they are used to working together and know each other's strengths and weaknesses. jen-hsun huang was well-known at sun, having been the application engineer for lsi logic on sun's first sparc implementation. the rest of the initial team at nvidia had all worked together building graphics chips at sun. as the company grows it can no longer recruit only colleagues, so usually experiences what at sun was called the "bozo invasion". freedom from backwards compatibility constraints. radical design change is usually needed to take advantage of a technological discontinuity. reconciling this with backwards compatibility takes time and forces compromise. 
nvidia was able to ignore the legacy of program i/o from the isa bus and fully exploit the direct memory access capability of the pci bus from the start. no cash cow to defend. the ibm-funded andrew project at cmu was intended to deploy what became the ibm pc/rt, which used the romp, an ibm risc cpu competing with sun's sparc. the romp was so fast that ibm's other product lines saw it as a threat, and insisted that it be priced not to under-cut their existing product's price/performance. so when it finally launched, its price/performance was much worse than sun's sparc-based products, and it failed. funk concludes this section: in short, today’s start-ups have targeted low-tech, highly regulated industries with a business strategy that is ultimately self-defeating: raising capital to subsidize rapid growth and securing a competitive position in the market by undercharging consumers. this strategy has locked start-ups into early designs and customer pools and prevented the experimentation that is vital to all start-ups, including today’s unicorns. uber, lyft, doordash, and grubhub are just a few of the well-known start-ups that have pursued this strategy, one that is used by almost every start-up today, partly in response to the demands of vc investors. it is also highly likely that without the steady influx of capital that subsidizes below-market prices, demand for these start-ups’ services would plummet, and thus their chances of profitability would fall even further. in retrospect, it would have been better if start-ups had taken more time to find good, high-tech business opportunities, had worked with regulators to define appropriate behavior, and had experimented with various technologies, designs, and markets, making a profit along the way. 
but, if the key to startup success is exploiting a technological discontinuity, and there haven't been any to exploit, as funk argues earlier, taking more time to "find good, high-tech business opportunities" wouldn't have helped. they weren't there to be found. how to fix the problem? funk quotes charles duhigg skewering the out-dated view of vcs: for decades, venture capitalists have succeeded in defining themselves as judicious meritocrats who direct money to those who will use it best. but examples like wework make it harder to believe that v.c.s help balance greedy impulses with enlightened innovation. rather, v.c.s seem to embody the cynical shape of modern capitalism, which too often rewards crafty middlemen and bombastic charlatans rather than hardworking employees and creative businesspeople. and: venture capitalists have shown themselves to be far less capable of commercializing breakthrough technologies than they once were. instead, as recently outlined in the new yorker, they often seem to be superficial trend-chasers, all going after the same ideas and often the same entrepreneurs. one managing partner at softbank summarized the problem faced by vc firms in a marketplace full of copycat start-ups: “once uber is founded, within a year you suddenly have three hundred copycats. the only way to protect your company is to get big fast by investing hundreds of millions.” vcs like these cannot create the technological discontinuities that are the key to adequate returns on investment in startups: we need venture capitalists and start-ups to create new products and new businesses that have higher productivity than do existing firms; the increased revenue that follows will then enable these start-ups to pay higher wages. the large productivity advantages needed can only be achieved by developing breakthrough technologies, like the integrated circuits, lasers, magnetic storage, and fiber optics of previous eras. 
and different players—vcs, start-ups, incumbents, universities—will need to play different roles in each industry. unfortunately, none of these players is currently doing the jobs required for our start-up economy to function properly. business schools success in exploiting a technological discontinuity requires understanding of, and experience with, the technology, its advantages and its limitations. but funk points out that business schools, not being engineering schools, need to devalue this requirement. instead, they focus on "entrepreneurship": in recent decades, business schools have dramatically increased the number of entrepreneurship programs—from about sixteen in to more than two thousand in —and have often marketed these programs with vacuous hype about “entrepreneurship” and “technology.” a recent stanford research paper argues that such hype about entrepreneurship has encouraged students to become entrepreneurs for the wrong reasons and without proper preparation, with universities often presenting entrepreneurship as a fun and cool lifestyle that will enable them to meet new people and do interesting things, while ignoring the reality of hard and demanding work necessary for success. one of my abiding memories of nvidia is tench coxe, our partner at sutter hill, perched on a stool in the lab playing the "road rash" video game about am one morning as we tried to figure out why our first silicon wasn't working. he was keeping an eye on his investment, and providing a much-needed calming influence. focus on entrepreneurship means focus on the startup's business model not on its technology: a big mistake business schools make is their unwavering focus on business model over technology, thus deflecting any probing questions students and managers might have about what role technological breakthroughs play and why so few are being commercialized. 
for business schools, the heart of a business model is its ability to capture value, not the more important ability to create value. this prioritization of value capture is tied to an almost exclusive focus on revenue: whether revenues come from product sales, advertising, subscriptions, or referrals, and how to obtain these revenues from multiple customers on platforms. value creation, however, is dependent on technological improvement, and the largest creation of value comes from breakthrough technologies such as the automobile, microprocessor, personal computer, and internet commerce. the key to "capturing value" is extracting value via monopoly rents. the way to get monopoly rents is to subsidize customer acquisition and buy up competitors, until the customers have no place to go. this doesn't create any value. in fact once the monopolist has burnt through the investor's money they find they need a return that can only be obtained by raising prices and holding the customer to ransom, destroying value for everyone. it is true a startup that combines innovation in technology with innovation in business has an advantage. once more, nvidia provides an example. before starting nvidia, jen-hsun huang had run a division of lsi logic that traded access to lsi logic's fab for equity in the chips it made. based on this experience on the supplier side of the fabless semiconductor business, one of his goals for nvidia was to re-structure the relationship between the fabless company and the fab to be more of a win-win. nvidia ended up as one of the most successful fabless companies of all time. but note that the innovation didn't affect nvidia's basic business model — contract with fabs to build gpus, and sell them to pc and graphics board companies. a business innovation combined with technological innovation stands a chance of creating a big company; a business innovation with no technology counterpart is unlikely to. 
research funk assigns much blame for the lack of breakthrough technologies to universities: university engineering and science programs are also failing us, because they are not creating the breakthrough technologies that america and its start-ups need. although some breakthrough technologies are assembled from existing components and thus are more the responsibility of private companies—for instance, the iphone—universities must take responsibility for science-based technologies that depend on basic research, technologies that were once more common than they are now. note that funk accepts as a fait accompli the demise of corporate research labs, which certainly used to do the basic research that led not just to funk's examples of "semiconductors, lasers, leds, glass fiber, and fiber optics", but also, for example, to packet switching, and operating systems such as unix. as i did three years ago in falling research productivity, he points out that increased government and corporate funding of university research has resulted in decreased output of breakthrough technologies: many scientists point to the nature of the contemporary university research system, which began to emerge over half a century ago, as the problem. they argue that the major breakthroughs of the early and mid-twentieth century, such as the discovery of the dna double helix, are no longer possible in today’s bureaucratic, grant-writing, administration-burdened university. ... scientific merit is measured by citation counts and not by ideas or by the products and services that come from those ideas. thus, labs must push papers through their research factories to secure funding, and issues of scientific curiosity, downstream products and services, and beneficial contributions to society are lost. 
funk's analysis of the problem is insightful, but i see his ideas for fixing university research as simplistic and impractical: a first step toward fixing our sclerotic university research system is to change the way we do basic and applied research in order to place more emphasis on projects that may be riskier but also have the potential for greater breakthroughs. we can change the way proposals are reviewed and evaluated. we can provide incentives to universities that will encourage them to found more companies or to do more work with companies. funk clearly doesn't understand how much university research is already funded by companies, and how long attempts to change the reward system in universities have been crashing into the rock composed of senior faculty who achieved their position through the existing system. he is more enthusiastic but equally misled about how basic research in corporate labs could be revived: one option is to recreate the system that existed prior to the s, when most basic research was done by companies rather than universities. this was the system that gave us transistors, lasers, leds, magnetic storage, nuclear power, radar, jet engines, and polymers during the s and s. ... unlike their predecessors at bell labs, ibm, ge, motorola, dupont, and monsanto seventy years ago, top university scientists are more administrators than scientists now—one of the greatest misuses of talent the world has ever seen. corporate labs have smaller administrative workloads because funding and promotion depend on informal discussions among scientists and not extensive paperwork. not understanding the underlying causes of the demise of corporate research labs, funk reaches for the time-worn nostrums of right-wing economists, "tax credits and matching grants": we can return basic research to corporate labs by providing much stronger incentives for companies—or cooperative alliances of companies—to do basic research. 
a scheme of substantial tax credits and matching grants, for instance, would incentivize corporations to do more research and would bypass the bureaucracy-laden federal grant process. this would push the management of detailed technological choices onto scientists and engineers, and promote the kind of informal discussions that used to drive decisions about technological research in the heyday of the early twentieth century. the challenge will be to ensure these matching funds and tax credits are in fact used for basic research and not for product development. requiring multiple companies to share research facilities might be one way to avoid this danger, but more research on this issue is needed. in last year's the death of corporate research labs i discussed a really important paper from a year earlier by arora et al, the changing structure of american innovation: some cautionary remarks for economic growth, which funk does not cite. i wrote: arora et al point out that the rise and fall of the labs coincided with the rise and fall of anti-trust enforcement: historically, many large labs were set up partly because antitrust pressures constrained large firms’ ability to grow through mergers and acquisitions. in the s, if a leading firm wanted to grow, it needed to develop new markets. with growth through mergers and acquisitions constrained by anti-trust pressures, and with little on offer from universities and independent inventors, it often had no choice but to invest in internal r&d. the more relaxed antitrust environment in the s, however, changed this status quo. growth through acquisitions became a more viable alternative to internal research, and hence the need to invest in internal research was reduced. 
lack of anti-trust enforcement, pervasive short-termism, driven by wall street's focus on quarterly results, and management's focus on manipulating the stock price to maximize the value of their options killed the labs: large corporate labs, however, are unlikely to regain the importance they once enjoyed. research in corporations is difficult to manage profitably. research projects have long horizons and few intermediate milestones that are meaningful to non-experts. as a result, research inside companies can only survive if insulated from the short-term performance requirements of business divisions. however, insulating research from business also has perils. managers, haunted by the spectre of xerox parc and dupont’s “purity hall”, fear creating research organizations disconnected from the main business of the company. walking this tightrope has been extremely difficult. greater product market competition, shorter technology life cycles, and more demanding investors have added to this challenge. companies have increasingly concluded that they can do better by sourcing knowledge from outside, rather than betting on making game-changing discoveries in-house. it is pretty clear that "tax credits and matching grants" aren't the fix for the fundamental anti-trust problem. not to mention that the idea of "requiring multiple companies to share research facilities" in and of itself raises serious anti-trust concerns. after such a good analysis, it is disappointing that funk's recommendations are so feeble. we have to add inadequate vc returns and a lack of startups capable of building top- companies to the long list of problems that only a major overhaul of anti-trust enforcement can fix. lina khan's nomination to the ftc is a hopeful sign that the biden administration understands the urgency of changing direction, but biden's hesitation about nominating the doj's anti-trust chief is not. 
update: michael cembalest's food fight: an update on private equity performance vs public equity markets has a lot of fascinating information about private equity in general and venture capital in particular. his graphs comparing moic (multiple of invested capital) and irr (internal rate of return) across vintage years support his argument that: we have performance data for venture capital starting in the mid- s, but the period is so distorted by the late ’s boom and bust that we start our vc performance discussion in . in my view, the massive gains earned by vc managers in the mid- s are not relevant to a discussion of vc investing today. as with buyout managers, vc manager moic and irr also tracked each other until after which a combination of subscription lines and faster distributions led to rising irrs despite falling moics. there’s a larger gap between average and median manager results than in buyout, indicating that there are a few vc managers with much higher returns and/or larger funds that pull up the average relative to the median. the gap is pretty big: vc managers have consistently outperformed public equity markets when looking at the “average” manager. but to reiterate, the gap between average and median results are substantial and indicate outsized returns posted by a small number of vc managers. for vintage years to , the median vc manager actually underperformed the s&p pretty substantially. another of cembalest's fascinating graphs addresses this question: one of the other “food fight” debates relates to pricing of venture-backed companies that go public. in other words, do venture investors reap the majority of the benefits, leaving public market equity investors “holding the bag”? actually, the reverse has been true over the last decade when measured in terms of total dollars of value creation accruing to pre- and post-ipo investors: post-ipo investor gains have often been substantial. 
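as an aside for readers unfamiliar with the two metrics cembalest compares, here is a minimal sketch (illustrative cash flows, not his data) of moic versus irr, showing how returning the same multiple sooner inflates irr:

```python
# MOIC vs IRR on illustrative fund cash flows (not Cembalest's data).

def moic(invested, distributed):
    """Multiple of invested capital: total dollars out / total dollars in."""
    return sum(distributed) / sum(invested)

def irr(cashflows):
    """Annual rate r at which NPV = 0; cashflows is [(year, amount), ...].
    Solved by bisection, since NPV is monotone decreasing in r here."""
    def npv(r):
        return sum(a / (1 + r) ** t for t, a in cashflows)
    lo, hi = -0.99, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

slow = [(0, -100.0), (10, 300.0)]   # $100 in, 3.0x back after 10 years
fast = [(0, -100.0), (4, 300.0)]    # same 3.0x multiple, back after 4 years

print(moic([100.0], [300.0]))       # 3.0 in both cases
print(round(irr(slow), 3))          # 0.116, i.e. 11.6%/year
print(round(irr(fast), 3))          # 0.316, same MOIC, much higher IRR
```

the same timing effect underlies the subscription-line boost cembalest mentions: borrowing against commitments delays the capital call, shortening the measured holding period and raising irr while the multiple is essentially unchanged.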
to show this: we analyzed all us tech, internet retailing and interactive media ipos from to . we computed the total value created since each company’s founding, from original paid-in capital by vcs to its latest market capitalization. we then examined how total value creation has accrued to pre- and post-ipo investors. sometimes both investor types share the gains, and sometimes one type accrues the vast majority of the gains. pre-ipo investors earn the majority of the pie when ipos collapse or flat-line after being issued, and post-ipo investors reap the majority of the pie when ipos appreciate substantially after being issued. there are three general regions in the chart. as you can see, the vast majority of the ipos analyzed resulted in a large share of the total value creation accruing to public market equity investors; nevertheless, there were some painful exceptions (see lower left region on the chart). posted by david. at : am labels: anti-trust, intellectual property, venture capital comments: blissex said... it is as usual a very informative and interesting post, but for me the main cause of unprofitable unicorns is the example of the first years of *amazon*, and the second cause is the general "cash is trash" economic climate, where asset price inflation is very high, so anybody with cash desperately tries to exchange it for assets, even mythical assets like unicorn shares. april , at : am david. said... wikipedia agrees with me and disagrees with funk's figure , stating: "sun was profitable from its first quarter in july " may , at : pm blog rules posts and comments are copyright of their respective authors who, by posting or commenting, license their work under a creative commons attribution-share alike . united states license. off-topic or unsuitable comments will be deleted. 
dshr's blog: graphing china's cryptocurrency crackdown i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation. tuesday, july , graphing china's cryptocurrency crackdown below the fold an update to last thursday's china's cryptocurrency crackdown with more recent graphs. mackenzie sigalos reports that bitcoin mining is now easier and more profitable as algorithm adjusts after china crackdown: china had long been the epicenter of bitcoin miners, with past estimates indicating that % to % of the world's bitcoin mining happened there, but a government-led crackdown has effectively banished the country's crypto miners. "for the first time in the bitcoin network's history, we have a complete shutdown of mining in a targeted geographic region that affected more than % of the network," said darin feinstein, founder of blockcap and core scientific. more than % of the hashrate – the collective computing power of miners worldwide – has dropped off the network since its market peak in may. source here is the hashrate graph. it is currently . th/s, down from a peak of . th/s, so down . % from the peak and trending strongly down. we may not have seen the end of the drop. this is good news for bitcoin. 
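when hashrate drops like this, blocks arrive late until the protocol's next difficulty re-target catches up. a minimal sketch of the well-known rule (the constants here are bitcoin protocol facts, not figures from the post: a re-target every 2016 blocks against a 10-minute block target, clamped to a factor of 4 in either direction):

```python
# Sketch of Bitcoin's difficulty re-target rule (well-known protocol
# constants: re-target every 2016 blocks, 10-minute block target, and the
# adjustment is clamped to a factor of 4 in either direction).

TARGET_SPACING = 10 * 60        # seconds per block the network aims for
RETARGET_WINDOW = 2016          # blocks between difficulty adjustments
EXPECTED_SECONDS = TARGET_SPACING * RETARGET_WINDOW   # ~two weeks

def retarget(old_difficulty, actual_window_seconds):
    # Blocks arriving slower than expected -> difficulty falls in proportion.
    ratio = EXPECTED_SECONDS / actual_window_seconds
    ratio = max(0.25, min(4.0, ratio))    # the 4x clamp
    return old_difficulty * ratio

# Half the hashrate vanishes, so the 2016-block window takes twice as long:
print(retarget(100.0, 2 * EXPECTED_SECONDS))   # 50.0: difficulty halves,
                                               # restoring ~10-minute blocks
```

the proportional rule is why a sudden geographic shutdown shows up first as slow blocks and then, weeks later, as a one-time drop in difficulty.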
the result is that the bitcoin system slowed down: typically, it takes about minutes to complete a block, but feinstein told cnbc the bitcoin network has slowed down to - to -minute block times. and thus, as shown in the difficulty graph, the bitcoin algorithm adjusted the difficulty: this is precisely why bitcoin re-calibrates every blocks, or about every two weeks, resetting how tough it is for miners to mine. on saturday, the bitcoin code automatically made it about % less difficult to mine – a historically unprecedented drop for the network – thereby restoring block times back to the optimal -minute window. source it went from a peak of . t to . t, a drop of . %. this is good news for bitcoin, as sigalos writes: fewer competitors and less difficulty means that any miner with a machine plugged in is going to see a significant increase in profitability and more predictable revenue. "all bitcoin miners share in the same economics and are mining on the same network, so miners both public and private will see the uplift in revenue," said kevin zhang, former chief mining officer at greenidge generation, the first major u.s. power plant to begin mining behind-the-meter at a large scale. assuming fixed power costs, zhang estimates revenues of $ per day for those using the latest-generation bitmain miner, versus $ per day prior to the change. longer-term, although miner income can fluctuate with the price of the coin, zhang also noted that mining revenues have dropped only % from the bitcoin price peak in april, whereas the coin's price has dropped about %. source here is the miners' revenue graph. it went from a peak of $ . m/day on april th to a trough of $ . m/day on june th, a drop of . %. it has since bounced back a little, so this is good news for bitcoin, if not quite as good as zhang thinks. obviously, the trough was before the decrease in difficulty, which subsequently resulted in . 
btc rewards happening more frequently than before and thus increased miners' revenue somewhat. have you noticed how important it is to check the numbers that the hodl-ers throw around? matt novak reported on june st that: miners in china are now looking to sell their equipment overseas, and it appears many have already found buyers. cnbc’s eunice yoon tweeted early monday that a chinese logistics firm was shipping , lbs ( , kilograms) of crypto mining equipment to an unnamed buyer in maryland for just $ . per kilogram. and sigalos adds details: of all the possible destinations for this equipment, the u.s. appears to be especially well-positioned to absorb this stray hashrate. cnbc is told that major u.s. mining operators are already signing deals to patriate some of these homeless bitmain miners. u.s. bitcoin mining is booming, and has venture capital flowing to it, so they are poised to take advantage of the miner migration, arvanaghi told cnbc. "many u.s. bitcoin miners that were funded when bitcoin's price started rising in november and december of means that they were already building out their power capacity when the china mining ban took hold," he said. "it's great timing." and, as always, the hodl-ers ignore economies of scale and hold out hope for the little guy: but barbour believes that much smaller players in the residential u.s. also stand a chance at capturing these excess miners. "i think this is a signal that in the future, bitcoin mining will be more distributed by necessity," said barbour. "less mega-mines like the + megawatt ones we see in texas and more small mines on small commercial and eventually residential spaces. it's much harder for a politician to shut down a mine in someone's garage." it is good news for bitcoin that more of the mining power is in the us where the us government could suppress it by, for example, declaring that mining is money transmission and thus that pools needed to adhere to the aml/kyc rules. 
doing so would place the poor little guy in a garage in a dilemma — mine on his own and be unlikely to get a reward before their rig was obsolete, or join an illegal pool and risk their traffic being spotted. update: the malaysian government's crackdown is an example to the world. andrew hayward reports that police destroy , bitcoin miners with big ass steamroller in malaysia. posted by david. at : am labels: bitcoin comments: david. said... the economist covers the crackdown in deep in rural china, bitcoin miners are packing up: "in may, a government committee tasked with promoting financial stability vowed to put a stop to bitcoin mining. within weeks the authorities in four main mining regions—inner mongolia, sichuan, xinjiang and yunnan—ordered the closure of local projects. residents of inner mongolia were urged to call a hotline to report anyone flouting the ban. in parts of sichuan, miners were ordered to clear out computers and demolish buildings housing them overnight. power suppliers pulled the plug on most of them. ... china had accounted for about % of bitcoins earned through mining, according to the cambridge bitcoin electricity consumption index. but analysts think about % of its mining has now ceased. chinese miners are selling their computers at half their value." july , at : am david. said... it isn't just china. kevin shaley's take a look inside this underground crypto mining farm in ukraine with its , playstations and , computers reports that: "a huge underground cryptocurrency mining operation has been busted by ukraine police for allegedly stealing electricity from the grid. 
police said they'd seized , computers and , games consoles that were being used in the illegal mine, the largest discovered in the country. the mine, in the city of vinnytsia, near kyiv, stole as much as $ , in electricity each month, the security service of ukraine said. to conceal the theft, the operators of the mine used electricity meters that did not reflect their actual energy consumption, officials said." check out the picture! july , at : pm david. said... bitcoin miners break new ground in texas, a state hailed as the new cryptocurrency capital by dalvin brown explains the attraction of texas' electricity, despite unreliability and enormous price spikes: "in the world of crypto mining, having all your computers shut down at once, and stay down for hours, as they did in june, sounds like a disaster. crypto miners compete with one another the world over to generate the computer code that results in the production of a single bitcoin, and the algorithm that governs bitcoin’s production allows only . bitcoin to be produced every minutes, among the perhaps , crypto mines that operate around the world. if you’re not able to generate the code, but your rivals can, you are out of luck. but thanks to the way texas power companies deal with large electricity customers like whinstone, harris’s bitcoin mine, one of the few owned by a publicly traded company, didn’t suffer. instead, the state’s electricity operator, the electric reliability council of texas (ercot), began to pay whinstone — for having agreed to quit buying power amid heightened demand." the good news is that the more mining happens in the us the easier it would be for the us to stop the unstoppable code. july , at : pm david. said... 
failure to proceed moonwards causes loss of interest, as tanya macheel reports in cryptocurrency trading volume plunges as interest wanes following bitcoin price drop: "trading volumes at the largest exchanges, including coinbase, kraken, binance and bitstamp, fell more than % in june, according to data from crypto market data provider cryptocompare, which cited lower prices and lower volatility as the reason for the drop. in june the price of bitcoin hit a monthly low of $ , , according to the report, and ended the month down %. a daily volume maximum of $ . billion on june was down . % from the intra-month high in may." july , at : am david. said... mackenzie sigalos continues reporting on the bitcoin mining migration in how the u.s. became the world’s new bitcoin mining hub: "well before china decided to kick out all of its bitcoin miners, they were already leaving in droves, and new data from cambridge university shows they were likely headed to the united states. the u.s. has fast become the new darling of the bitcoin mining world. it is the second-biggest mining destination on the planet, accounting for nearly % of all the world’s bitcoin miners as of april . that’s a % increase from september . ... this dataset doesn’t include the mass mining exodus out of china, which led to half the world’s miners dropping offline, and experts tell cnbc that the u.s. share of the mining market is likely even bigger than the numbers indicate. according to the newly-released cambridge data, just before the chinese mining ban began, the country accounted for % of the world’s total hashrate, an industry term used to describe the collective computing power of the bitcoin network. that’s a sharp decline from . % in september , and the percentage is likely much lower given the exodus underway now." july , at : pm david. said... 
bloomberg reports that china’s central bank says it will keep pressure on crypto market: "china’s central bank vowed to maintain heavy regulatory pressure on cryptocurrency trading and speculation after escalating its clampdown in the sector earlier this year. the people’s bank of china will also supervise financial platform companies to rectify their practices according to regulations, it said in a statement on saturday. policy makers met on friday to discuss work priorities for the second half of the year." august , at : pm
dshr's blog: cryptocurrency's carbon footprint dshr's blog i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation. tuesday, april , cryptocurrency's carbon footprint china’s bitcoin mines could derail carbon neutrality goals, study says and bitcoin mining emissions in china will hit million tonnes by , the headlines say it all. excusing this climate-destroying externality of proof-of-work blockchains requires a continuous flow of new misleading arguments. below the fold i discuss one of the more recent novelties. in bitcoin and ethereum carbon footprints – part , moritz seibert claims the reason for mining is to get the mining reward: bitcoin transactions themselves don’t cause a lot of power usage. getting the network to accept a transaction consumes almost no power, but having asic miners grind through the mathematical ether to solve valid blocks does. miners are incentivized to do this because they are compensated for it. presently, that compensation includes a block reward which is paid in bitcoin ( . btc per block) as well as a miner fee (transaction fee). transaction fees are denominated in fractional bitcoins and paid by the initiator of the transaction. today, about % of total miners’ rewards are transaction fees, and about % are block rewards.
so, he argues, bitcoin's current catastrophic carbon footprint doesn't matter because, as the reward decreases, so will the carbon footprint: this also means that the power usage of the bitcoin network won’t scale linearly with the number of transactions as the network becomes predominantly fee-based and less rewards-based (which causes a lot of power to be thrown at it in light of increasing btc prices), and especially if those transactions take place on secondary layers. in other words, taking the ratio of “bitcoin’s total power usage” to “number of transactions” to calculate the “power cost per transaction” falsely implies that all transactions hit the final settlement layer (they don’t) and disregards the fact that the final state of the bitcoin base layer is a fee-based state which requires a very small fraction of bitcoin’s overall power usage today (no more block rewards). seibert has some vague idea that there are implications of this not just for the carbon footprint but also for the security of the bitcoin blockchain: going forward however, miners’ primary revenue source will change from block rewards to the fees paid for the processing of transactions, which don’t per se cause high carbon emissions. bitcoin is set to become a purely fee-based system (which may pose a risk to the security of the system itself if the overall hash rate declines, but that’s a topic for another article because a blockchain that is fully reliant on fees requires that btcs are transacted with rather than held in michael saylor-style as hodling leads to low btc velocity, which does not contribute to security in a setup where fees are the only rewards for miners.) let's leave aside the stunning irresponsibility of arguing that it is acceptable to dump huge amounts of long-lasting greenhouse gas into the atmosphere now because you believe that in the future you will dump less. how realistic is the idea that decreasing the mining reward will decrease the carbon footprint?
the graph shows the history of the hash rate, which is a proxy for the carbon footprint. you can see the effect of the "halvening", when on may th the mining reward halved. there was a temporary drop, but the hash rate resumed its inexorable rise. this experiment shows that reducing the mining reward doesn't reduce the carbon footprint. so why does seibert think that eliminating it will reduce the carbon footprint? the answer appears to be that seibert thinks the purpose of mining is to create new bitcoins, that the reason for the vast expenditure of energy is to make the process of creating new coins secure, and that it has nothing to do with the security of transactions. this completely misunderstands the technology. in the economic limits of bitcoin and the blockchain, eric budish examines the return on investment in two kinds of attacks on a blockchain like bitcoin's. the simpler one is a % attack, in which an attacker controls the majority of the mining power. budish explains what this allows the attacker to do: an attacker could (i) spend bitcoins, i.e., engage in a transaction in which he sends his bitcoins to some merchant in exchange for goods or assets; then (ii) allow that transaction to be added to the public blockchain (i.e., the longest chain); and then subsequently (iii) remove that transaction from the public blockchain, by building an alternative longest chain, which he can do with certainty given his majority of computing power. the merchant, upon seeing the transaction added to the public blockchain in (ii), gives the attacker goods or assets in exchange for the bitcoins, perhaps after an escrow period. but, when the attacker removes the transaction from the public blockchain in (iii), the merchant effectively loses his bitcoins, allowing the attacker to “double spend” the coins elsewhere. such attacks are endemic among the smaller alt-coins; for example there were three successful attacks on ethereum classic in a single month last year. 
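budish's double-spend economics can be reduced to a toy calculation. this sketch is mine, not from the post or from budish's paper, and every dollar figure in it is a hypothetical illustration:

```python
# a toy model of budish's 51% double-spend attack, steps (i)-(iii).
# all numbers are hypothetical illustrations, not real network data.

def double_spend_profit(spend_value, honest_cost_per_block, blocks_to_rewrite):
    """net profit of the attack: (i) spend coins with a merchant,
    (ii) let the transaction confirm, then (iii) secretly build a
    longer chain that omits it. the attacker must at least match the
    honest network's expenditure for the blocks he rewrites, so the
    attack cost scales linearly with honest mining expenditure."""
    attack_cost = honest_cost_per_block * blocks_to_rewrite
    return spend_value - attack_cost

# if the honest network spends $200k per block and the merchant waits
# 6 confirmations, a $5m double-spend is still profitable:
profit = double_spend_profit(5_000_000, 200_000, 6)
# profit is $3.8m, so deterrence requires mining expenditure to be
# large relative to the value at risk in a block
```

run in reverse, the same inequality is why the hash rate, and hence the carbon footprint, has to track the value being secured.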
clearly, seibert's future "transaction only" bitcoin must defend against them. there are two ways to mount a % attack, from the outside or from the inside. an outside attack requires more mining power than the insiders are using, whereas an insider attack only needs a majority of the mining power to conspire. bitcoin miners collaborate in "mining pools" to reduce volatility of their income, and for many years it would have taken only three or so pools to conspire for a successful attack. but assuming insiders are honest, outsiders must acquire more mining power than the insiders are using. clearly, bitcoin insiders are using so much mining power that this isn't feasible. the point of mining isn't to create new bitcoins. mining is needed to make the process of adding a block to the chain, and thus adding a set of transactions to the chain, so expensive that it isn't worth it for an attacker to subvert the process. the cost, and thus in the case of proof of work the carbon footprint, is the whole point. as budish wrote: from a computer security perspective, the key thing to note ... is that the security of the blockchain is linear in the amount of expenditure on mining power, ... in contrast, in many other contexts investments in computer security yield convex returns (e.g., traditional uses of cryptography) — analogously to how a lock on a door increases the security of a house by more than the cost of the lock. let's consider the possible futures of a fee-based bitcoin blockchain. it turns out that currently fee revenue is a smaller proportion of total miner revenue than seibert claims. here is the chart of total revenue (~$ m/day): and here is the chart of fee revenue (~$ m/day): thus the split is about % fee, % reward: if security stays the same, blocksize stays the same, fees must increase to keep the cost of a % attack high enough. the chart shows the average fee hovering around $ , so the average cost of a single transaction would be over $ .
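the "fees must increase" arm of this argument can be made concrete with toy arithmetic. the revenue and transaction figures below are hypothetical stand-ins, not the numbers from the post's charts:

```python
# toy arithmetic for the fee-only future: with block rewards gone,
# fees alone must cover today's whole security budget.
# hypothetical figures, not the post's chart data.

def fee_only_fee_per_tx(total_revenue_per_day, txs_per_day):
    """per-transaction fee needed for fee revenue alone to equal
    today's total miner revenue, i.e. unchanged security."""
    return total_revenue_per_day / txs_per_day

# say miners currently earn $50m/day in total (fees plus rewards) and
# the chain settles 300,000 transactions/day:
required_fee = fee_only_fee_per_tx(50_000_000, 300_000)
# about $167 per transaction, roughly 10x the ~$17 average fee implied
# by a hypothetical 10%/90% fee/reward revenue split
```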
this might be a problem for seibert's requirement that "btcs are transacted with rather than held". if blocksize stays the same, fees stay the same, security must decrease because the fees cannot cover the cost of enough hash power to deter a % attack. similarly, in this case it would be times cheaper to mount a % attack, which would greatly increase the risk of delivering anything in return for bitcoin. it is already the case that users are advised to wait blocks (about an hour) before treating a transaction as final. waiting nearly half a day before finality would probably be a disincentive. if fees stay the same, security stays the same, blocksize must increase to allow for enough transactions so that their fees cover the cost of enough hash power to deter a % attack. bitcoin blocks have been effectively limited to around mb, and the blockchain is now over one-third of a terabyte, growing at over %/yr. increasing the size limit to say mb would solve the long-term problem of a fee-based system at the cost of reducing miners' income in the short term by reducing the scarcity value of a slot in a block. doubling the effective size of the block caused a huge controversy in the bitcoin community for precisely this short vs. long conflict, so a much larger increase would be even more controversial. not to mention that the size of the blockchain a year from now would be times bigger, imposing additional storage costs on miners. that is just the supply side. on the demand side it is an open question as to whether there would be times the current demand for transactions costing $ and taking an hour, which, at least in the us, must each be reported to the tax authorities. short vs. long none of these alternatives look attractive. but there's also a second type of attack in budish's analysis, which he calls "sabotage". he quotes rosenfeld: in this section we will assume q < p [i.e., that the attacker does not have a majority].
otherwise, all bets are off with the current bitcoin protocol ... the honest miners, who no longer receive any rewards, would quit due to lack of incentive; this will make it even easier for the attacker to maintain his dominance. this will cause either the collapse of bitcoin or a move to a modified protocol. as such, this attack is best seen as an attempt to destroy bitcoin, motivated not by the desire to obtain bitcoin value, but rather wishing to maintain entrenched economical systems or obtain speculative profits from holding a short position. short interest in bitcoin is currently small relative to the total stock, but much larger relative to the circulating supply. budish analyzes various sabotage attack cases, with a parameter ∆attack representing the proportion of the bitcoin value destroyed by the attack: for example, if ∆attack = , i.e., if the attack causes a total collapse of the value of bitcoin, the attacker loses exactly as much in bitcoin value as he gains from double spending; in effect, there is no chance to “double” spend after all. ... however, ∆attack is something of a “pick your poison” parameter. if ∆attack is small, then the system is vulnerable to the double-spending attack ... and the implicit transactions tax on economic activity using the blockchain has to be high. if ∆attack is large, then a short time period of access to a large amount of computing power can sabotage the blockchain. the current cryptocurrency bubble ensures that everyone is making enough paper profits from the golden eggs to deter them from killing the goose that lays them. but it is easy to create scenarios in which a rush for the exits might make killing the goose seem like the best way out. seibert's misunderstanding illustrates the fundamental problem with permissionless blockchains. 
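budish's ∆attack trade-off can be sketched numerically. this little model and its numbers are my own hypothetical illustration of the "pick your poison" point, not budish's actual equations:

```python
# toy model of budish's sabotage analysis: delta is the fraction of
# bitcoin's value the attack destroys. hypothetical numbers throughout.

def double_spender_gain(spend_value, delta, attack_cost):
    """the double-spent coins are devalued by the attack itself: at
    delta = 1 (total collapse) there is, in budish's words, no chance
    to "double" spend after all."""
    return spend_value * (1.0 - delta) - attack_cost

def short_seller_gain(short_notional, delta, attack_cost):
    """a saboteur profits from the value destroyed via a short
    position, not from the coins themselves."""
    return short_notional * delta - attack_cost

# total collapse (delta = 1) wipes out the double-spender...
loss = double_spender_gain(5_000_000, 1.0, 1_000_000)   # -$1m
# ...but pays a sufficiently large short seller:
gain = short_seller_gain(10_000_000, 1.0, 1_000_000)    # +$9m
```

small delta favors the double-spender, large delta favors the short seller; either way, someone has an incentive to attack.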
as i wrote in a note on blockchains: if joining the replica set of a permissionless blockchain is free, it will be vulnerable to sybil attacks, in which an attacker creates many apparently independent replicas which are actually under his sole control. if creating and maintaining a replica is free, anyone can authorize any change they choose simply by creating enough sybil replicas. defending against sybil attacks requires that membership in a replica set be expensive. there are many attempts to provide less environmentally damaging ways to make adding a block to a blockchain expensive, but attempts to make adding a block cheaper are self-defeating because they make the blockchain less secure. there are two reasons why the primary use of a permissionless blockchain cannot be transactions as opposed to hodl-ing: the lack of synchronization between the peers means that transactions must necessarily be slow. the need to defend against sybil attacks means either that transactions must necessarily be expensive, or that blocks must be impractically large. posted by david. at : am labels: bitcoin, security comments: david. said... seibert apparently believes (a) that a fee-only bitcoin network would be secure, used for large numbers of transactions, and have a low carbon footprint, and (b) that the network would have a low carbon footprint because most transactions would use the lightning network. ignoring the contradiction, anyone who believes that the lightning network would do the bulk of the transactions needs to read the accounts of people actually trying to transact using it. david gerard writes: "crypto guy loses a bet, and tries to pay the bet using the lightning network. hilarity ensues." indeed, the archived twitter thread from the loser is a laugh-a-minute read. april , at : pm david. said... 
jaime powell shreds another attempt at cryptocurrency carbon footprint gaslighting in the destructive green fantasy of the bitcoin fanatics: "it is in this context that we should consider the latest “research” from the good folks at etf-house-come-fund manager ark invest and $ bn payment company square. titled “bitcoin is key to an abundant, clean energy future”, it does exactly what you’d expect it to. which is to try justify, after the fact, bitcoin’s insane energy use. why? because both entities are deeply involved in this “space” and now need to a) feel better about themselves and b) guard against people going off crypto on the grounds that it is actually a very bad thing. ... the white paper imagines bitcoin mining being a solution, alongside battery storage, for excess energy. it also imagines that if solar and wind prices continue to collapse, bitcoin could eventually transition to being completely renewable-powered in the future. “imagines” is the key word here. because in reality, bitcoin mining is quite the polluter. it’s estimated that per cent of bitcoin mining is concentrated in china, where nearly two-thirds of all electricity is generated by coal power, according to a recent bank of america report. in fact, mining uses coal power so aggressively that when one coal mine flooded and shut down in xinjiang province over the weekend, one-third of all bitcoin’s computing power went offline." april , at : pm david. said... in jack dorsey and elon musk agree on bitcoin's green credentials the bbc reports on yet another of elon musk's irresponsible cryptocurrency tweets: "the tweet comes soon after the release of a white paper from mr dorsey's digital payment services firm square, and global asset management business ark invest.
entitled "bitcoin as key to an abundant, clean energy future", the paper argues that "bitcoin miners are unique energy buyers", because they offer flexibility, pay in a cryptocurrency, and can be based anywhere with an internet connection." the bbc fails to point out that musk and dorsey are "talking their book"; tesla invested $ . b and square $ m in bitcoin. so they have over $ . b reasons to worry about efforts to limit its carbon footprint. april , at : pm david. said... this comment has been removed by the author. april , at : pm david. said... nathan j. robinson's why cryptocurrency is a giant fraud has an interesting footnote, discussing a "pseudoscholarly masterpiece" of bitcoin puffery by vijay boyapati: "interestingly, boyapati cites bitcoin’s high transaction fees as a feature rather than a bug: “a recent criticism of the bitcoin network is that the increase in fees to transmit bitcoins makes it unsuitable as a payment system. however, the growth in fees is healthy and expected… a network with ‘low’ fees is a network with little security and prone to external censorship. those touting the low fees of bitcoin alternatives are unknowingly describing the weakness of these so-called ‘alt-coins.’” as you can see, this successfully makes the case that high fees are unavoidable, but it also undermines the reasons why any sane person would use this as currency rather than a speculative investment." right! a permissionless blockchain has to be expensive to run if it is to be secure. those costs have either to be borne, ultimately, by the blockchain's users, or dumped on the rest of us as externalities (e.g. the blockchain's carbon footprint, the shortage of gpus, ...). april , at : pm david. said... colin chartier's crypto miners are killing free ci points to yet another cryptocurrency externality: "ci providers like layerci, gitlab, travisci, and shippable are all worsening or shutting down their free tiers due to cryptocurrency mining attacks." 
ci = "continuous integration" april , at : am david. said... drew devault's must-read cryptocurrency is an abject disaster is an even more comprehensive denunciation of cryptocurrency externalities than chartier's. drew concludes: "when you’re the only honest person in the room, maybe you should be in a different room. it is impossible to trust you. every comment online about cryptocurrency is tainted by the fact that the commenter has probably invested thousands of dollars into a ponzi scheme and is depending on your agreement to make their money back. not to mention that any attempts at reform, like proof-of-stake, are viciously blocked by those in power (i.e. those with the money) because of any risk it poses to reduce their bottom line. no, your blockchain is not different. cryptocurrency is one of the worst inventions of the st century. i am ashamed to share an industry with this exploitative grift. it has failed to be a useful currency, invented a new class of internet abuse, further enriched the rich, wasted staggering amounts of electricity, hastened climate change, ruined hundreds of otherwise promising projects, provided a climate for hundreds of scams to flourish, created shortages and price hikes for consumer hardware, and injected perverse incentives into technology everywhere." amen. april , at : pm david. said... jason herring reports on another externality of cryptocurrencies: "around midnight on april , two men armed with handguns forced their way into an apartment in the block of elbow drive s.w., police said. the men tied up the apartment’s resident and stole computers, jewelry and bank cards from the suite. they also took cryptocurrency keys, which allow holders access to digital financial accounts. the men forced the victim to disclose his bank pins, then put him in a storage room and fled." hat tip to david gerard. april , at : pm david. said... 
bitcoin-mining power plant stirs up controversy by nathaniel mott reports: "the conflict revolves around a power plant on new york's seneca lake called greenidge. the company’s website says the plant was opened in , shuttered in , and purchased by new owners in . those owners started mining bitcoin in the facility in . new york focus reported that greenidge plans “to quadruple the power used to process bitcoin transactions by late next year” as the cryptocurrency’s value soars. environmentalists fear those plans would lead to dangerously high co2 emissions." may , at : am david. said... after the pump, comes the dump. reuters reports that tesla suspends bitcoin purchases over fossil fuel concerns for mining the cryptocurrency, elon musk confirms. tesla padded their quarterly results with $ m profit from the pump. may , at : pm david. said... in musk: bitcoin is bad for climate (and you can’t buy teslas with it anymore), tim de chant writes: "when purchased using dollars, a new tesla model made and operated in the us produces about . tonnes of carbon dioxide over its lifetime (assuming it's driven about , miles). the price of the same car on march , when musk announced the payment option, would have been around one bitcoin, and at the time, one bitcoin had an estimated footprint of around tonnes. not only does one tesla’s worth of bitcoin pollute significantly more than the car itself, including manufacturing, it also represents more than five times the carbon pollution of an average combustion-engined vehicle in the us. and that’s according to tesla’s own estimate." may , at : am david. said... cnbc reports that china bans financial, payment institutions from cryptocurrency business: "china has banned financial institutions and payment companies from providing services related to cryptocurrency transactions, and warned investors against speculative crypto trading. it was china's latest attempt to clamp down on what was a burgeoning digital trading market.
under the ban, such institutions, including banks and online payments channels, must not offer clients any service involving cryptocurrency, such as registration, trading, clearing and settlement, three industry bodies said in a joint statement on tuesday." may , at : pm david. said... cnbc reports on yet another externality of cryptocurrencies in hackers behind colonial pipeline attack reportedly received $ million in bitcoin before shutting down: "elliptic said that darkside's bitcoin wallet contained $ . million worth of the digital currency before its funds were drained last week. there was some speculation that this bitcoin had been seized by the u.s. government. of the $ million total haul, $ . million went to darkside's developer while $ . million went to its affiliates, according to elliptic. the majority of the funds are being sent to crypto exchanges, where they can be converted into fiat money, elliptic said." may , at : pm david. said... based on their experimental proof-of-stake blockchain, carl beekhuizen claims that: "ethereum will be completing the transition to proof-of-stake in the upcoming months, which brings a myriad of improvements that have been theorized for years. but now that the beacon chain has been running for a few months, we can actually dig into the numbers. one area that we’re excited to explore involves new energy-use estimates, as we end the process of expending a country’s worth of energy on consensus. ... in total, a proof-of-stake ethereum therefore consumes something on the order of . megawatt. this is not on the scale of countries, provinces, or even cities, but that of a small town (around american homes)." of course, this assumes that there won't be a fork, with a proof-of-work ethereum traditional continuing. there is already an ethereum classic from a fork in . may , at : pm david. said... 
jeff keeling reveals yet another cryptocurrency externality in noisy ‘mining’ operation leaves washington county community facing ‘bit’ of a conundrum: "residents of the pastoral new salem community say a bitcoin mining center next to a brightridge power substation has seriously impacted a prized element of their quality of life — peace and quiet. “when we lay down and all, the tv’s off and the kids are in bed, the noise is there,” preston holley, a school teacher whose home is just across lola humphreys road from the site, said. “it’s as plain as day. when wake up to let the dog out it’s running full bore.” cooling fans from the round-the-clock operation off bailey bridge road are so loud they sometimes keep residents up at night. but the massive computing power in what brightridge ceo jeff dykes said is about a $ million operation has made red dog technologies the power distributor’s biggest customer virtually overnight." may , at : am david. said... catalin cimpanu reports that crypto-mining gangs are running amok on free cloud computing platforms: "gangs have been operating by registering accounts on selected platforms, signing up for a free tier, and running a cryptocurrency mining app on the provider’s free tier infrastructure. after trial periods or free credits reach their limits, the groups register a new account and start from the first step, keeping the provider’s servers at their upper usage limit and slowing down their normal operations." as david gerard said: "cryptocurrency decentralization is a performative waste of resources in order to avoid having to trust a government to issue currency. but since cryptocurrencies don’t actually function as currencies, it just generates new types of otherwise worthless magic beans to sell for real money. your system will waste unlimited amounts of whatever resource you’re throwing away—and incentivize the theft of whatever resources other people can waste to turn into money." 
and the waste doesn't even get you decentralization. may , at : am david. said... jiang et al's policy assessments for the carbon emission flows and sustainability of bitcoin blockchain operation in china projects that, without policy action: "the annualized energy consumption of the bitcoin industry in china will peak in at .  twh based on the benchmark simulation of bbce modeling. this exceeds the total energy consumption level of italy and saudi arabia and ranks th among all countries in . correspondingly, the carbon emission flows of the bitcoin operation would peak at . million metric tons per year in ." may , at : pm david. said... when writing this post i had forgotten that two years ago i wrote about the future of a fee-based bitcoin in the economics of bitcoin transactions. raphael auer's beyond the doomsday economics of “proof-of-work” in cryptocurrencies concludes: "the key takeaway of this paper concerns the interaction of these two limitations: proof-of-work can only achieve payment security if mining income is high, but the transaction market cannot generate an adequate level of income. ... the economic design of the transaction market fails to generate high enough fees. a simple model suggests that ultimately, it could take nearly a year, or , blocks, before a payment could be considered “final”." may , at : pm david. said... i also forgot that, months ago, i wrote in economic limits of proof-of-stake blockchains: "budish showed that bitcoin was unsafe unless the value of transactions in a block was less than the sum of the mining reward and the fees for the transactions it contains. the mining reward is due to decrease to zero, at which point safety requires fees larger than the value of the transactions, not economically viable. arvind narayanan's group at princeton published a related instability in carlsten et al's on the instability of bitcoin without the block reward.
narayanan summarized the paper in a blog post: 'our key insight is that with only transaction fees, the variance of the miner reward is very high due to the randomness of the block arrival time, and it becomes attractive to fork a “wealthy” block to “steal” the rewards therein.' note that: 'we model transaction fees as arriving at a uniform rate. the rate is non-uniform in practice, which is an additional complication.' the rate is necessarily non-uniform, because transactions are in a blind auction for inclusion in the next block, which leads to over-payment." may , at : pm david. said... ethereum claims that if they succeed in transitioning to proof-of-stake their carbon emissions will be greatly reduced. even if this happens, it does not mean that the total carbon emissions of the cryptocurrency world will be reduced. the mining resources used to mine eth using proof-of-work will not be fed to the trash compactor, they will be re-purposed to mine other cryptocurrencies. june , at : pm david. said... mike melanson's this week in programming: crypto miners overrun docker hub’s autobuild reports on the latest free tier of web services to be killed by the cryptocurrency mining gangs. june , at : pm david. said... david gerard explains why bitcoin's blocksize didn't increase: "in mid- , the bitcoin network finally filled its tiny transaction capacity. transactions became slow, expensive and clogged. by october , bitcoin regularly had around , unconfirmed transactions waiting, and in may it peaked at , stuck in the queue. [ft, , free with login] nobody could agree how to fix this, and everyone involved despised each other. the possible solutions were: ) increase the block size. this would increase centralisation even further. (though that ship really sailed in .) ) the lightning network: bolt on a completely different non-bitcoin network, and do all the real transactions there. 
this only had the minor problem that the lightning network’s design couldn’t possibly fix the problem. ) do nothing. leave the payment markets to use a different cryptocurrency that hasn’t clogged yet. (payment markets, such as the darknets, ended up moving to other coins that worked better.) bitcoin mostly chose option — though is talked up, just as if saying “but, the lightning network!” solves the transaction clog." as i write the average fee is $ . and there are , unconfirmed transactions in the mempool. the network is confirming . transactions/sec, so the mempool is the equivalent of hr min of transaction processing, or almost blocks of backlog. june , at : pm

david. said... gretchen morgenson's some locals say a bitcoin mining operation is ruining one of the finger lakes. here's how. reports on yet another externality of cryptocurrencies: "water usage by greenidge is another problem, residents said. the current permit allows greenidge to take in million gallons of water and discharge million gallons daily, at temperatures as high as degrees fahrenheit in the summer and degrees in winter, documents show. rising water temperatures can stress fish and promote toxic algae blooms, the epa says. a full thermal study hasn't been produced and won't be until , but residents protesting the plant say the lake is warmer with greenidge operating." july , at : am

david. said... the life of a hodl-er carries significant risks, as olga kharif reports in ethereum co-founder says safety concern has him quitting crypto: 'anthony di iorio, a co-founder of the ethereum network, says he’s done with the cryptocurrency world, partially because of personal safety concerns. di iorio, , has had a security team since , with someone traveling with or meeting him wherever he goes. in coming weeks, he plans to sell decentral inc., and refocus on philanthropy and other ventures not related to crypto.
the canadian expects to sever ties in time with other startups he is involved with, and doesn’t plan on funding any more blockchain projects. “it’s got a risk profile that i am not too enthused about,” said di iorio, who declined to disclose his cryptocurrency holdings or net worth. “i don’t feel necessarily safe in this space. if i was focused on larger problems, i think i’d be safer.” ' he can go back to being inconspicuous, despite: "he made a splash in when buying the largest and one of the most expensive condos in canada, paying for it partly with digital money. di iorio purchased the three-story penthouse for c$ million ($ million) at the st. regis residences toronto, the former trump international hotel & tower in the downtown business district." july , at : pm

david. said... editordavid's both dogecoin creators are now criticizing cryptocurrencies leads to a wonderful twitter thread by the second of them, jackson palmer: "after years of studying it, i believe that cryptocurrency is an inherently right-wing, hyper-capitalistic technology built primarily to amplify the wealth of its proponents through a combination of tax avoidance, diminished regulatory oversight and artificially enforced scarcity. despite claims of “decentralization”, the cryptocurrency industry is controlled by a powerful cartel of wealthy figures who, with time, have evolved to incorporate many of the same institutions tied to the existing centralized financial system they supposedly set out to replace." july , at : pm

blog rules: posts and comments are copyright of their respective authors who, by posting or commenting, license their work under a creative commons attribution-share alike . united states license. off-topic or unsuitable comments will be deleted.
lockss system has permission to collect, preserve, and serve this archival unit.

dshr's blog: kai li's fast keynote

i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation.
thursday, february ,

kai li's fast keynote

kai li's keynote at the fast conference was entitled disruptive innovation: data domain experience. data domain was the pioneer of deduplication for backups. i was one of the people sutter hill asked to look at data domain when they were considering a b-round investment in . i was very impressed, not just with their technology, but more with the way it was packaged as an appliance so that it was very easy to sell. the elevator pitch was "it is a box. you plug it into your network. backups work better." i loved kai's talk. not just because i had a small investment in the b round, so he made me money, but more because just about everything he said matched experiences i had at sun or nvidia. below the fold i discuss some of the details.

kai quoted dr. geoffrey nicholson: "research is the transformation of money into knowledge. innovation is the transformation of knowledge into money." data domain is a stellar example of the second. the outlines of the story are simple. they started in late , raised a total of about $ m in rounds, ipo-ed less than years later at a $ b valuation having spent only $ m, and were acquired years after that at a $ . b valuation. before their ipo they had more than % of the market and more than % gross margin. that is an extraordinary performance.

the vision was to replace tape for backup with disk at roughly the same price but much lower space, power, and network costs, and to make restoring from a backup much faster, thereby reducing the operational impact of failures. the only way to do this was to use deduplication to get a high enough compression factor to swamp the cost per byte difference between disk and tape. kai illustrated their success by showing a line of full racks each containing an ibm tape library that were replaced by u data domain systems. the key to implementing this vision was to bet on long-term technology trends.
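as a hedged sketch of the deduplication idea (fixed-size chunks keyed by their hashes; data domain's actual system used more sophisticated, content-defined segmenting, and the helper names here are invented for illustration):

```python
import hashlib

def dedupe(data: bytes, chunk_size: int = 4096):
    """store each distinct chunk once; repeats become references.
    fixed-size chunks keep the sketch simple -- real dedup systems use
    content-defined boundaries so an insertion doesn't shift every chunk."""
    store = {}      # fingerprint -> chunk bytes (stored once)
    recipe = []     # sequence of fingerprints that reconstructs the data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha1(chunk).hexdigest()
        store.setdefault(fp, chunk)
        recipe.append(fp)
    return store, recipe

def restore(store, recipe) -> bytes:
    return b"".join(store[fp] for fp in recipe)

# a backup stream that is mostly repeats collapses to a handful of chunks:
data = b"A" * 4096 * 100 + b"B" * 4096
store, recipe = dedupe(data)
assert restore(store, recipe) == data
assert len(store) == 2    # 101 chunks on disk as 2
```

the compression factor comes from how repetitive backup streams are: successive nightly backups share almost all their chunks, so only the changed ones consume new disk.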
the two that kai pointed out were that disk had already replaced tape in personal audio (walkman to ipod) and in tv time-shifting (vhs to tivo), and that moore's law had already shifted from faster cpus to more cores. the two major challenges they faced were:
- they had to sell for no more than a tape system, so their gross margin was directly related to the compression ratio they could achieve.
- the amount of data to be backed up was doubling every months, but there are only hours in a day, so their throughput needed to at least double every months.
the three founders started the company just after / , at a time when no-one was starting companies. we started nvidia in in one of silicon valley's periodic downturns; we were the only semiconductor company to get any funding the quarter of our a round. starting a company when no-one else is - absolutely the best time to do it.

kai laid out a list of key precepts, all of which i agree with despite some caveats:
- build "must have" products. customer driven technology. for nvidia, this was more difficult, since we had customers (pc and board manufacturers) and end-users (game players).
- work with the best vc funds. the difference between the best and the merely good in vcs is at least as big as the difference between the best and the merely good programmers. at nvidia we had two of the very best, sutter hill and sequoia. the result is that, like kai but unlike many entrepreneurs, we think vcs are enormously helpful.
- raise more than we need - give up a lot of equity. one downside of starting a company when the market has gone south is that you have to give up more equity. but you are giving it up to vcs willing to invest when no-one else is, who are the ones you want to work with. and having cash in a downturn gives you the ability to move fast.
- high standard for early team, even if you miss the hiring plan. after the ipo at sun came the bozo invasion, but by then the company was well-enough established to survive it.
- no egos - take the best ideas wherever they come from. this is often hard for the best people to handle, and is a real test of the initial management.

kai's slides plotting revenue and things like lines of code and deduplication throughput on the same graph were fascinating. they matched closely, with throughput increasing -fold in years, and lines of code growing much faster after the ipo than before. posted by david. at : am labels: deduplication, venture capital

comment: david. said... usenix has posted the video and audio of kai's talk. february , at : am
dshr's blog: cost-reducing writing dna data

thursday, march ,

cost-reducing writing dna data

in dna's niche in the storage market, i addressed a hypothetical dna storage company's engineers and posed this challenge: increase the speed of synthesis by a factor of a quarter of a trillion, while reducing the cost by a factor of fifty trillion, in less than years while spending no more than $ m/yr.
now, a company called catalog plans to demo a significant step in the right direction: the goal of the demonstration, says park, is to store gigabytes, ... in hours, on less than cubic centimeter of dna. and to do it for $ , . that would be e bits for $ e . at the theoretical maximum bits/base, it would be $ . e- per base, versus last year's estimate of e- , or around , times better. if the demo succeeds, it marks a major achievement. but below the fold i continue to throw cold water on the medium-term prospects for dna storage.

catalog's technique is different from experiments such as microsoft's, which synthesize dna strands a base at a time. when park and roquet formed catalog in , they shunned the idea of assembling bases one by one to represent the digital “alphabet.” ... catalog opted for prefab: it buys or makes fragments of dna, “in massive quantities,” and then assembles with a custom-made liquid-handling robot. “dna molecules are like lego blocks,” says park. “we can string them together in virtually infinite combinations. we take advantage of that and start with a few hundred molecules to generate in the end, trillions of different molecules.” park likens the approach to movable type. instead of having to write out every letter each time you want to write something, old-style typesetters cast their letters in advance, and then slotted them into position.

but, despite catalog's technical ingenuity, they face enormous obstacles to market success:
- $ , /tb is still an extraordinarily expensive storage medium. tb hard disks retail at around $ , so are nearly times cheaper.
- gb in hours is around . mb/s, compared to the mb/s transfer rate of a single current tb drive.
- since dna storage has both very slow write and read, and is not rewritable, it is restricted to competing in the archival storage market. it has to be much cheaper than tape and optical media, not just hard disk, before it can compete successfully.
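the cost-per-base arithmetic above can be sketched as follows; since the post's exact figures are not reproduced here, the numbers in the example call are illustrative assumptions only:

```python
def dollars_per_base(gigabytes: float, dollars: float,
                     bits_per_base: float = 2.0) -> float:
    """cost per synthesized dna base. dna encodes at most 2 bits per
    base (one of a/c/g/t), so bases needed = total bits / bits-per-base."""
    bits = gigabytes * 1e9 * 8        # data to write, in bits
    bases = bits / bits_per_base      # dna bases required at that density
    return dollars / bases

# hypothetical demo parameters (not catalog's actual figures):
# a 125 gb write costing $100,000 works out to
# 100000 / (125e9 * 8 / 2) dollars per base.
cost = dollars_per_base(125, 100_000)
```

dividing that per-base cost into a prior per-base estimate gives the "times better" factor the post computes.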
catalog's pitch is based on the idea that demand for data storage is insatiable: for a startup, a solution is less important than a solid problem, park told the weinert center’s distinguished entrepreneurs lunch on feb. . and park’s problem – the glut of information sometimes called the “datapocalypse” — is a result of a tsunami of data from pretty much every sphere of human activity. but, as i discussed in where did all those bits go?, the actual shipment data for storage vendors shows this is a fallacy. the demand for storage media, like the demand for any good, depends upon the price. at current prices demand for bytes of hard disk is growing steadily but more slowly than the kryder rate, so unit shipments are falling. as i pointed out a year ago in archival media: not a good business, the total market is probably less than $ b/yr, and new archival media have to compete with legacy media, such as hard disk, whose r&d and manufacturing investments have long been amortized. given the long latency of dna storage, to compete with these fully depreciated and much faster media it has to be vastly cheaper. in the future of storage i discussed the fundamental problems of long-lived media such as dna, including: the research we have been doing in the economics of long-term preservation demonstrates the enormous barrier to adoption that accounting techniques pose for media that have high purchase but low running costs, such as these long-lived media. to sum up, while catalog may be able to demonstrate a significant advance in the technology of dna storage, they will still be many orders of magnitude away from a competitive product in the archival storage market. posted by david. at : am labels: green preservation, long-lived media, storage costs, storage media

comments: david. said... the team from microsoft research and u.w. have a paper and a video describing a fully-automated write-store-read pipeline for dna.
this is, i believe, a first automated end-to-end demonstration. from their abstract: "our device encodes data into a dna sequence, which is then written to a dna oligonucleotide using a custom dna synthesizer, pooled for liquid storage, and read using a nanopore sequencer and a novel, minimal preparation protocol. we demonstrate an automated -byte write, store, and read cycle with a modular design enabling expansion as new technology becomes available." their system is base-at-a-time, so it is still slow: "our system’s write-to-read latency is approximately  h. the majority of this time is taken by synthesis, viz., approximately  s per base, or .  h to synthesize a -mer payload and  h to cleave and deprotect the oligonucleotides at room temperature. after synthesis, preparation takes an additional  min, and nanopore reading and online decoding take  min." again, this is a significant step forward, but a practical product is a long way away. march , at : pm

david. said... tom coughlin reports on iridia's chip-based dna storage technology. may , at : pm
what proof of stake is and why it matters - bitcoin magazine
author: vitalik buterin. publish date: aug ,

if you have been involved in bitcoin for any significant length of time, you have probably at least heard of the idea of “proof of work”. the basic concept behind proof of work is simple: one party (usually called the prover) presents the result of a computation which is known to be hard to compute, but easy to verify, and by verifying the solution anyone else can be sure that the prover performed a certain amount of computational work to generate the result. the first modern application, presented as “hashcash” by adam back in , uses a sha -based proof of work as an anti-spam measure – by requiring all emails to come with a strong proof-of-work attached, the system makes it uneconomical for spammers to send mass emails while still allowing individuals to send messages to each other when they need to. a similar system is used today for the same purpose in bitmessage, and the algorithm has also been repurposed to serve as the core of bitcoin’s security in the form of “mining”.

how does sha proof of work work?

sha is what cryptographers call a “one-way function” – a function for which it is easy to calculate an output given an input, but it is impossible to do the reverse without trying every possible input until one works by random chance. the canonical representation of a sha output is as a series of hexadecimal digits – letters and numbers taken from the set abcdef.
for example, here are the first digits of a few hashes: sha ("hello") = cf dba...sha ("hello") = f db ...sha ("hello.") = d bd d ... the output of sha is designed to be highly chaotic; even the smallest change in the input completely scrambles the output, and this is part of what makes sha a one-way function. finding an input whose sha starts with ‘ ’ on average takes attempts, ’ ’ takes attempts, and so forth. the way hashcash, and bitcoin mining, work, is by requiring provers (ie. mail senders or miners) to find a “nonce” such that sha (message+nonce)starts with a large number of zeroes, and then send the valid nonce along with the message as the proof of work. for example, the hash of block is: cf c d fc d e b a bc d a on average, it would take trillion attempts to find a nonce that, when hashed together with a block, returns a value starting with this many zeroes (technically, trillion since the pow requirement is a bit more complex than “starts with this many zeroes”, but the general principle is the same). the reason this artificial difficulty exists is to prevent attackers from overpowering the bitcoin network and introducing alternative blockchains that reverse previous transactions and block new transactions; any attacker trying to flood the bitcoin network with their own fake blocks would need to make trillion sha computations to produce each one. however, there is a problem: proof of work is highly wasteful. six hundred trillion sha computations are being performed by the bitcoin network every second, and ultimately these computations have no practical or scientific value; their only purpose is to solve proof of work problems that are deliberately made to be hard so that malicious attackers cannot easily pretend to be millions of nodes and overpower the network. 
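the nonce search described above can be sketched in a few lines (a hedged illustration of the hashcash idea, not bitcoin's actual block-header format or compact difficulty encoding):

```python
import hashlib

def mine(message: bytes, difficulty: int) -> int:
    """find a nonce such that sha-256(message + nonce) starts with
    `difficulty` hex zeroes -- on average 16**difficulty attempts."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(message + str(nonce).encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

def verify(message: bytes, nonce: int, difficulty: int) -> bool:
    """verification is a single hash, however long the search took."""
    digest = hashlib.sha256(message + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine(b"hello", 4)   # roughly 16**4 = 65536 attempts on average
assert verify(b"hello", nonce, 4)
```

the asymmetry is the whole point: the prover pays for an exhaustive search, while anyone else checks the result with one hash.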
of course, this waste is not inherently evil; given no alternatives, the wastefulness of proof of work may well be a small price to pay for the reward of a decentralized and semi-anonymous global currency network that allows anyone to instantly send money to anyone else in the world for virtually no fee. and in proof of work was indeed the only option. four years later, however, we have developed a number of alternatives. sunny king’s primecoin is perhaps the most moderate, and yet at the same time potentially the most promising, solution. rather than doing away with proof of work entirely, primecoin seeks to make its proof of work useful. rather than using sha computations, primecoin requires miners to look for long “cunningham chains” of prime numbers – chains of values n- , n- , n- , etc. up to some length such that all of the values in the chain are prime (for the sake of accuracy, n+ , n+ , n+ can also be a valid cunningham chain, and primecoin also accepts “bi-twin chains” of the form n- , n+ , n- , n+ where all terms are prime). it is not immediately obvious how these chains are useful – primecoin advocates have pointed to a few theoretical applications, but these all require only chains of length which are trivial to produce. however, the stronger argument is that in modern bitcoin mining the majority of the production cost of mining hardware is actually researching methods of mining more efficiently (asics, optimized circuits, etc) and not building or running the devices themselves, and in a primecoin world this research would go towards finding more efficient ways of doing arithmetic and number theory computation instead – things which have applications far beyond just mining cryptocurrencies. the reason why primecoin-like “useful pows” are the most promising is that, if the computations are useful enough, the currency’s “waste factor” can actually drop below zero, making the currency a public good. 
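as a hedged illustration of the prime-chain idea, a cunningham chain of the first kind doubles-and-adds-one at each step; the exact chain forms primecoin accepts differ in detail, and the naive primality test here is only adequate for small illustrative numbers:

```python
def is_prime(n: int) -> bool:
    """trial division -- fine for the small numbers in this sketch."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def cunningham_chain(p: int) -> list[int]:
    """cunningham chain of the first kind starting at p:
    p, 2p+1, 2(2p+1)+1, ... for as long as every term is prime."""
    chain = []
    while is_prime(p):
        chain.append(p)
        p = 2 * p + 1
    return chain

# 2 -> 5 -> 11 -> 23 -> 47 are all prime; 95 = 5 * 19 ends the chain.
assert cunningham_chain(2) == [2, 5, 11, 23, 47]
```

a chain is easy to verify (a handful of primality tests) but long chains are rare, which is what makes the search usable as a proof of work.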
for example, suppose that there is a computation which, somehow, has a in chance of getting researchers significantly further along the way to curing cancer. right now, no individual or organization has much of an incentive to attempt it: if they get lucky and succeed, they could either release the secret and earn little personal benefit beyond some short-lived media recognition or they could try to sell it to a few researchers under a non-disclosure agreement, which would rob everyone not under the non-disclosure agreement of the benefits of the discovery and likely not earn too much money in any case. if this magic computation was integrated into a currency, however, the block reward would incentivize many people to perform the computation, and the results of the computations would be visible on the blockchain for everyone to see. the societal reward would be more than worth the electricity cost. however, so far we know of no magical cancer-curing computation; the closest is folding@home, but it lacks mathematical verifiability – a dishonest miner can easily cheat by making fake computations that are indistinguishable from real results to any proof of work checker but have no value to society. as far as mathematically verifiable useful pows go, primecoin is the best we have, and whether its societal benefit fully outweighs its production and electricity cost is hard to tell; many people doubt it. but even then, what primecoin accomplished is very praiseworthy; even partially recovering the costs of mining as a public good is better than nothing.

proof of stake

however, there is one sha alternative that is already here, and that essentially does away with the computational waste of proof of work entirely: proof of stake. rather than requiring the prover to perform a certain amount of computational work, a proof of stake system requires the prover to show ownership of a certain amount of money.
the reason why satoshi could not have done this himself is simple: before bitcoin, there was no kind of digital property which could securely interact with cryptographic protocols. paypal and online credit card payments have been around for over ten years, but those systems are centralized, so creating a proof of stake system around them would allow paypal and credit card providers themselves to cheat it by generating fake transactions. ip addresses and domain names are partially decentralized, but there is no way to construct a proof of ownership of either that could be verified in the future. indeed, the first digital property that could possibly work with an online proof of stake system is bitcoin (and cryptocurrency in general) itself. there have been several proposals on how proof of stake can be implemented; the only one that is currently working in practice, however, is ppcoin, once again created by sunny king. ppcoin's proof of stake algorithm works as follows. when creating a proof-of-stake block, a miner needs to construct a "coinstake" transaction, sending some money in their possession to themselves plus a preset reward (like an interest rate, similar to bitcoin's block reward). a sha256 hash is calculated based only on the transaction input, some additional fixed data, and the current time (as an integer representing the number of seconds since jan 1, 1970). this hash is then checked against a proof of work requirement, much like bitcoin, except the difficulty is inversely proportional to the "coin age" of the transaction input. coin age is defined as the size of the transaction input, in ppcoins, multiplied by the time that the input has existed. because the hash is based only on the time and static data, there is no way to make hashes quickly by doing more work; every second, each ppcoin transaction output has a certain chance of producing a valid block proportional to its age and how many ppcoins it contains, and that is that.
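a minimal sketch of that check in python. the byte layouts, field names and target scaling here are illustrative assumptions, not ppcoin's actual serialization:

```python
import hashlib
import struct


def coin_age(coins: float, seconds_held: int) -> float:
    """Coin age = coins in the output * time the output has existed."""
    return coins * seconds_held


def stake_hash(stake_input: bytes, modifier: bytes, timestamp: int) -> int:
    """SHA-256 over static stake data plus the current second; there is no
    nonce, so the only way to get a new hash is to wait for the next second."""
    preimage = stake_input + modifier + struct.pack("<Q", timestamp)
    return int.from_bytes(hashlib.sha256(preimage).digest(), "big")


def stake_is_valid(stake_input: bytes, modifier: bytes, timestamp: int,
                   base_target: int, coins: float, seconds_held: int) -> bool:
    """The stake wins if its hash falls below a target scaled up by coin age,
    i.e. difficulty is inversely proportional to coin age."""
    threshold = base_target * coin_age(coins, seconds_held)
    return stake_hash(stake_input, modifier, timestamp) < threshold
```

note that `stake_hash` is deterministic for a given output and second, which is the point: holding more coins for longer raises the threshold, but grinding more hashes per second buys nothing.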
essentially, every ppcoin can act as a "simulated mining rig", albeit with the interesting property that its mining power goes up linearly over time but resets to zero every time it finds a valid block. it is not clear if using coin age as ppcoin does, rather than just output size, is strictly necessary; the original intent of doing so was to prevent miners from re-using their coins multiple times, but ppcoin's current design does not actually allow miners to consciously try to generate a block with a specific transaction output. rather, the system does the equivalent of picking a ppcoin at random every second and maybe giving its owner the right to create a block. even without including age as a weighting factor in the randomness, this is roughly equivalent to a bitcoin mining setup but without the waste. however, there is one more sophisticated argument in coin age's favor: because your chance of success goes up the longer you fail to create a block, miners can expect to create blocks more regularly, reducing the incentive to dampen the risk by creating the equivalent of centralized mining pools.
beyond cryptocurrency
but what makes proof of stake truly interesting is the fact that it can be applied to much more than just currency. so far, anti-spam systems have fallen into three categories: proof of work, captchas and identity systems. proof of work, used in systems like hashcash and bitmessage, we have already discussed extensively above. captchas are used very widely on the internet; the idea is to present a problem that a human can easily solve but a computer can't, thereby distinguishing the two (captcha stands for "completely automated public turing test to tell computers and humans apart"). in practice, this usually involves presenting a messy image containing letters and numbers, and requiring the solver to type in what the letters and numbers are.
recent providers have implemented a "public good" component into the system by making part of the captcha a word from a printed book, using the power of the crowd to digitize old printed literature. unfortunately, captchas are not that effective; recent machine-learning efforts have achieved success rates comparable to those of humans themselves. identity systems come in two forms. first, there are systems that require users to register with their physical identity; this is how democracies have so far avoided being overrun by anonymous trolls. second, there are systems that require some fee to get in, and moderators can close accounts without refund if they are found to be trying to abuse the system. these systems work, but at the cost of privacy. proof of stake can be used to provide a fourth category of anti-spam measure. imagine that, instead of filling in a captcha to create a forum account, a user can consume coin age by sending a bitcoin or ppcoin transaction to themselves instead. to make sure each proof of stake computation is done by the user, and not simply randomly pulled from the blockchain, the system might require the user to also send a signed message with the same address, or perhaps send their money back to themselves in a specific way (e.g. one of the outputs must contain exactly 1.xxxxx btc, with the value randomly set each time). note that here coin age is crucial; we want users to be able to create proofs of stake on demand, so something must be consumed to prevent reuse. in a way, a form of proof of stake already exists in the form of sms verification, requiring users to send text messages to prove ownership of a phone to create a google account – although this is hardly pure proof of stake, as phone numbers are also heavily tied to physical identity and the process of buying a phone is itself a kind of captcha. thus, sms verification has some of the advantages and some of the disadvantages of all three systems.
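the "send your money back to yourself in a specific way" idea can be sketched as a challenge-response protocol. this is a hypothetical illustration: all names are invented, and an hmac stands in for a real transaction signature.

```python
import hashlib
import hmac
import secrets


def issue_challenge() -> str:
    """Server picks the random fractional amount the self-payment must use,
    so an old transaction pulled from the blockchain cannot be replayed."""
    return f"1.{secrets.randbelow(100_000):05d}"


def prove_stake(key: bytes, address: str, challenge: str) -> dict:
    """User builds a self-payment with the challenged amount and signs
    address + challenge. HMAC stands in for a real ECDSA signature."""
    tag = hmac.new(key, f"{address}|{challenge}".encode(), hashlib.sha256).hexdigest()
    return {"address": address, "amount_btc": challenge, "sig": tag}


def verify_proof(key: bytes, proof: dict, challenge: str) -> bool:
    """Check that the amount matches this session's challenge and that the
    signature binds the proof to the claimed address."""
    expected = hmac.new(key, f"{proof['address']}|{challenge}".encode(),
                        hashlib.sha256).hexdigest()
    return proof["amount_btc"] == challenge and hmac.compare_digest(proof["sig"], expected)
```

in a real deployment the "signature" would be made with the private key controlling the staked output, and the verifier would check it against the blockchain rather than sharing a secret.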
but proof of stake’s real advantage is in decentralized systems like bitmessage. currently, bitmessage uses proof of work because it has no other choice; there is no “decentralized captcha” solution out there, and there has been little research into figuring out how to make one. however, proof of work is wasteful, and makes bitmessage a somewhat cumbersome and power-consuming system to use – for emails it’s fine, but for instant messaging forget about it. but if bitmessage could be integrated with bitcoin (or primecoin or ppcoin) and use it for proof of stake, much of the difficulty and waste could be alleviated. does proof of stake have a future? many signs suggest that it certainly does. ppcoin founder sunny king argues that bitcoin’s security will become too weak over time as its block reward continues to drop; indeed, this is one of his primary motivations for creating ppcoin and primecoin. since then, ppcoin has come to be the fifth largest cryptocurrency on the market, and an increasing number of new cryptocurrencies are copying its proof-of-stake design. currently, ppcoin is not fully proof-of-stake; because it is a small cryptocurrency with a highly centralized community, the risk of some kind of takeover is higher than with bitcoin, so a centralized checkpointing system does exist, allowing developers to create “checkpoints” that are guaranteed to remain part of the transaction history forever regardless of what any attacker does. eventually, the intent is to make the checkpointing system more decentralized and to reduce its power as ppcoins come to be owned by a larger group of people.
an alternative approach might be to integrate proof of stake as a decentralized checkpointing system into bitcoin itself; for example, one protocol might allow any coalition of people with a sufficient quantity of btc-years to consume their outputs to generate a checkpoint that the community would agree is a valid block, at the cost of sending their coins to themselves and consuming coin age. cryptocurrency emerged as the culmination of a number of unrelated cryptographic primitives: hash functions, merkle trees, proof of work and public key cryptography all play key roles in bitcoin’s construction. now, however, bitcoin and cryptocurrencies are here to stay, and this presents another exciting possibility for the future of cryptography: we can now design protocols that build off of cryptocurrency itself – of which proof of stake is the perfect example. proof of stake can be used to secure a cryptocurrency, it can be used in decentralized anti-spam systems, and probably in dozens of other protocols that we haven’t even thought of yet – just like no one had thought of anything like bitcoin until wei dai’s b-money. the possibilities are endless. by vitalik buterin
dshr's blog: stablecoins
dshr's blog i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation. tuesday, december , stablecoins i have long been skeptical of bitcoin's "price" and, despite its recent massive surge, i'm still skeptical. but it turns out i was wrong two years ago when i wrote in blockchain: what's not to like?: permissionless blockchains require an inflow of speculative funds at an average rate greater than the current rate of mining rewards if the "price" is not to collapse. to maintain bitcoin's price at $ k requires an inflow of $ k/hour. i found it hard to believe that this much actual money would flow in, but since then bitcoin's "price" hasn't dropped below $ k, so i was wrong.
caution — i am only an amateur economist, and what follows below the fold is my attempt to make sense of what is going on. first, why did i write that? the economic argument is that, because there is a low barrier to entry for new competitors, margins for cryptocurrency miners are low. so the bulk of their income in terms of mining rewards has to flow out of the system in "fiat" currency to pay for their expenses such as power and hardware. these cannot be paid in cryptocurrencies. at the time, the bitcoin block reward was . btc/block, or btc/hour. at $ k/btc this was $ k/hour, so on average k usd/hour had to flow in from speculators if the system was not to run out of usd. source what has happened since then? miners' income comes in two parts, transaction fees (currently averaging around btc/day) and mining rewards ( btc/day) for a total around k btc/day. at $ k/btc, that is $ k/hour. the combination of halving of the block reward, increasing transaction fees, and quintupling the "price" has roughly tripled the required inflow. second, let's set the context for what has happened in cryptocurrencies in the last year. source in the last year bitcoin's "market cap" went from around $ b to around $ b ( . x) and its "price" went from about $ k to about $ k. source in the last year ethereum's "market cap" went from around $ b to around $ b ( . x) and its "price" went from around $ to around $ . source the key observation that explains why i write "price" in quotes is shown in this graph. very little of the trading in btc is in terms of usd, most of it is in terms of tether (usdt). the "price" is set by how many usdt people are prepared to pay for btc, not by how many usd. the usd "price" follows because people believe that usdt ≅ usd. source in the past year, tether's "market cap" has gone from about b usdt to about b usdt ( x). tether (usdt) is a "stablecoin", intended to maintain a stable price of usd = usdt.
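the back-of-the-envelope inflow arithmetic above can be written as a simple formula. a sketch in python with invented parameter names; the numbers in the usage note are purely illustrative, not the figures from the post:

```python
def required_inflow_usd_per_hour(block_reward_btc: float,
                                 blocks_per_hour: float,
                                 fees_btc_per_day: float,
                                 price_usd: float) -> float:
    """USD per hour that speculators must supply to absorb miner income,
    on the post's assumption that miners sell essentially all of their
    income to cover fiat-denominated costs (power, hardware)."""
    income_btc_per_hour = block_reward_btc * blocks_per_hour + fees_btc_per_day / 24
    return income_btc_per_hour * price_usd
```

for instance, with an illustrative 6.25 btc/block reward, 6 blocks/hour, 100 btc/day in fees and a $20,000 price, the required inflow works out to roughly $0.83m/hour; halving the reward or quintupling the price moves the figure exactly as the post describes.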
initially, tether claimed that it maintained a stable "price" because every usdt was backed by an actual usd in a bank account. does that mean that investors transferred around sixteen billion us dollars into tether's bank account in the past year? no-one believes that. there has never been an audit of tether to confirm what is backing usdt. tether themselves admitted to the new york attorney general in october that: the $ . billion worth of tethers are only % backed: tether has cash and cash equivalents (short term securities) on hand totaling approximately $ . billion, representing approximately percent of the current outstanding tethers. if usdt isn't backed by usd, what is backing it, and is usdt really worth usd? source just in october, tether minted around b usdt. the graph tracks the "price" of bitcoin against the "market cap" of usdt. does it look like they're correlated? amy castor thinks so. tether transfers newly created usdt to an exchange, where one of two things can happen to it: it can be used to buy usd or an equivalent "fiat" currency. but only a few exchanges allow this. for example, coinbase, the leading regulated exchange, will not provide this "fiat off-ramp": please note that coinbase does not support usdt — do not send it to your bitcoin account on coinbase. because of usdt's history and reputation, exchanges that do offer a "fiat off-ramp" are taking a significant risk, so they will impose a spread; the holder will get less than $ . why would you send $ to tether to get less than $ back? it can be used to buy another cryptocurrency, such as bitcoin (btc) or ethereum (eth), increasing demand for that cryptocurrency and thus increasing its price. since newly created usdt won't be immediately sold for "fiat", they will pump the "price" of cryptocurrencies. for simplicity of explanation, let's imagine a world in which there are only usd, usdt and btc. in this world some proportion of the backing for usdt is usd and some is btc.
someone sends usd to tether. why would they do that? they don't want usdt as a store of value, because they already have usd, which is obviously a better store of value than usdt. they want usdt in order to buy btc. tether adds the usd to the backing for usdt, and issues the corresponding number of usdt, which are used to buy btc. this pushes the "price" of btc up, which increases the "value" of the part of the backing for usdt that is btc. so tether issues the corresponding amount of usdt, which is used to buy btc. this pushes the "price" of btc up, which increases the "value" of the part of the backing for usdt that is btc. ... tether has a magic "money" pump, creating usdt out of thin air. but there is a risk. suppose for some reason the "price" of btc goes down, which reduces the "value" of the backing for usdt. now there are more usdt in circulation than are backed. so tether must buy some usdt back. they don't want to spend usd for this, because they know that usd are a better store of value than usdt created out of thin air. so they need to sell btc to get usdt. this pushes the "price" of btc down, which reduces the "value" of the part of the backing for usdt that is btc. so tether needs to buy more usdt for btc, which pushes the "price" of btc down. ... the magic "money" pump has gone into reverse, destroying the usdt that were created out of thin air. tether obviously wants to prevent this happening, so in our imaginary world what we would expect to see is that whenever the "price" of btc goes down, tether supplies the market with usdt, which are used to buy btc, pushing the price back up. over time, the btc "price" would generally go up, keeping everybody happy. but there is a second-order effect. over time, the proportion of the backing for usdt that is btc would go up too, because each usd that enters the backing creates r > 1 usd worth of "value" of the btc part of the backing.
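the feedback loop described above can be caricatured as a geometric series. a toy model in python, where r stands for the dollars of btc "value" created in the backing per dollar of usdt issued (an assumption of the sketch, not a measured quantity):

```python
def usdt_issued(usd_in: float, r: float, rounds: int) -> float:
    """Toy model of the reflexive loop: each round's newly issued USDT buys
    BTC, marking up the BTC share of the backing by r per dollar; that
    markup "backs" the next round of issuance. r < 1 converges to a finite
    total; r > 1 diverges, which is the magic "money" pump."""
    issued, inflow = 0.0, usd_in
    for _ in range(rounds):
        issued += inflow   # mint against this round's backing "value"
        inflow *= r        # the markup available to back the next round
    return issued
```

with r = 0.5 an initial $100 supports a converging total issuance (175.0 after three rounds, tending to 200); with r = 2.0 the same $100 yields 700.0 after three rounds and grows without bound, and a falling btc price runs the same loop in reverse.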
and, over time, this effect grows because the greater the proportion of btc in the backing, the greater r becomes. source in our imaginary world we would expect to see: the "price" of btc correlated with the number of usdt in circulation. the graph shows this in the real world. both the "price" of btc and the number of usdt in circulation growing exponentially. the graph shows this in the real world. spikes in the number of usdt in circulation following falls in the "price" of btc. is bitcoin really untethered? by john griffin and amin shams shows that: rather than demand from cash investors, these patterns are most consistent with the supply‐based hypothesis of unbacked digital money inflating cryptocurrency prices. their paper was originally published in and updated in and . tether being extremely reluctant to be audited because that would reveal how little money and how much "money" was supporting the btc "price". our imaginary world replicates key features of the real world. of course, since tether has never been audited, we don't know the size or composition of usdt's backing. so we don't know whether tether has implemented a magic "money" pump. but the temptation to get rich quick by doing so clearly exists, and tether's history isn't reassuring about their willingness to skirt the law. because of the feedback loops i described, if they ever dipped a toe in the flow from a magic "money" pump, they would have to keep doubling down. apart from the work of griffin and shams, there is a whole literature pointing out the implausibility of tether's story.
here are a few highlights: jp koning's things about tether stablecoins social capital's series explaining tether and the "stablecoin" scam: pumps, spoofs and boiler rooms tether, part one: the stablecoin dream tether, part two: pokedex tether, part three: crypto island price manipulation in the bitcoin ecosystem by neil gandal et al cryptocurrency pump-and-dump schemes by tao li et al patrick mckenzie's tether: the story so far: a friend of mine, who works in finance, asked me to explain what tether was. short version: tether is the internal accounting system for the largest fraud since madoff. bernie madoff's $ . b ponzi scheme was terminated in but credible suspicions had been raised nine years earlier, not least by the indefatigable harry markopolos. credible suspicions were raised against wirecard shortly after it was incorporated in , but even after the financial times published a richly documented series based on whistleblower accounts it took almost a year before wirecard declared bankruptcy owing € . b. massive frauds suffer from a "wile e. coyote" effect. because they are "too big to fail" there is a long time between the revelation that they are frauds, and the final collapse. it is hard for people to believe that, despite numbers in the billions, there is no there there. both investors and regulators get caught up in the excitement and become invested in keeping the bubble inflated by either attacking or ignoring negative news. for example, we saw this in the wirecard scandal: bafin conducted multiple investigations against journalists and short sellers because of alleged market manipulation, in response to negative media reporting of wirecard. ... critics cite the german regulator, press and investor community's tendency to rally around wirecard against what they perceive as unfair attack. ... after initially defending bafin's actions, its president felix hufeld later admitted the wirecard scandal is a "complete disaster".
similarly, the cryptocurrency world has a long history of both attacking and ignoring realistic critiques. an example of ignoring is the dao: the decentralized autonomous organization (the dao) was released on th april , but on th may dino mark, vlad zamfir, and emin gün sirer posted a call for a temporary moratorium on the dao, pointing out some of its vulnerabilities; it was ignored. three weeks later, when the dao contained about % of all the ether in circulation, a combination of these vulnerabilities was used to steal its contents. source the graph shows how little of the trading in btc is in terms of actual money, usd. on coinmarketcap.com as i write, usdt has a "market cap" of nearly $ b and the next largest "stablecoin" is usdc, at just over $ . b. usdc is audited and[ ] complies with banking regulations, which explains why it is used so much less. the supply of usdc can't expand enough to meet demand. the total "market cap" of all the cryptocurrencies the site tracks is $ b, an increase of more than % in the last day! so just one day is around the same as bernie madoff's ponzi scheme. the top cryptocurrencies (btc, eth, xrp, usdt) account for $ b ( %) of the total "market cap"; the others are pretty insignificant. david gerard points out the obvious in tether is “too big to fail” — the entire crypto industry utterly depends on it: the purpose of the crypto industry, and all its little service sub-industries, is to generate a narrative that will maintain and enhance the flow of actual dollars from suckers, and keep the party going. increasing quantities of tethers are required to make this happen. we just topped twenty billion alleged dollars’ worth of tethers, sixteen billion of those just since march . if you think this is sustainable, you’re a fool. 
gerard links to bryce weiner's hopes, expectations, black holes, and revelations — or how i learned to stop worrying and love tether which starts from the incident in april of when bitfinex, the cryptocurrency exchange behind tether, encountered a serious problem: the wildcat bank backing tether was raided by interpol for laundering of criminally obtained assets to the tune of about $ , , . the percentage of that sum which was actually bitfinex is a matter of some debate but there’s no sufficient reason not to think it was all theirs. ... the nature of the problem also presented a solution: instead of backing tether in actual dollars, stuff a bunch of cryptocurrency in a basket to the valuation of the cash that got seized and viola! a black hole is successfully filled with a black hole, creating a stable asset. at the time, usdt's "market cap" was around $ . b, so assuming tether was actually backed by usd at that point, it lost % of its backing. this was a significant problem, more than enough to motivate shenanigans. weiner goes on to provide a detailed explanation, and argue that tether is impossible to shut down. he may be right, but it may be possible to effectively eliminate the "fiat off-ramp", thus completely detaching usdt and usd. this would make it clear that "prices" expressed in usdt are imaginary, not the same as prices expressed in usd. source postscript: david gerard recounts the pump that pushed btc over $ k: we saw about million tethers being lined up on binance and huobi in the week previously. these were then deployed en masse. you can see the pump starting at : utc on december. btc was $ , . on coinbase at : utc. notice the very long candles, as bots set to sell at $ , sell directly into the pump. see cryptocurrency pump-and-dump schemes by tao li, donghwa shin and baolian wang. source ki joung yu watched the pump in real time: lots of people deposited stablecoins to exchanges mins before breaking $ k. price is all about consensus. 
i guess the sentiment turned around to buy $btc at that time. ... eth block interval is - seconds. this chart means exchange users worldwide were trying to deposit #stablecoins in a single block — seconds. note that " mins" is about one bitcoin block time, and by "exchange users" he means "addresses" — it could have been a pre-programmed "smart contract". [ ] david gerard points out that: usdc loudly touts claims that it’s well-regulated, and implies that it’s audited. but usdc is not audited — accountants grant thornton sign a monthly attestation that centre have told them particular things, and that the paperwork shows the right numbers. posted by david. at : am labels: bitcoin comments: david. said... xrp, the third-largest unstablecoin by "market cap", has lost almost % of its value over the last days. this might have something to do with the sec suing ripple labs, who control the centralized cryptocurrency, claiming that xrp is an unregistered security. the sec's argument, bolstered with copious statements by the founders, is at heart that the founders pre-mined and kept vast amounts of xrp, which they then pump and sell: "defendants continue to hold substantial amounts of xrp and — with no registration statement in effect — can continue to monetize their xrp while using the information asymmetry they created in the market for their own gain, creating substantial risk to investors." david gerard points out that ripple labs knew they should have registered: "ripple received legal advice in february and october that xrp could constitute an “investment contract,” thus making it a security under the howey test — particularly if ripple promoted xrp as an investment. the lawyers advised ripple to contact the sec for clarification before proceeding. ripple went ahead anyway, without sec advice — and raised “at least $ .
billion” selling xrp from to the present day, promoting it as an investment all the while" izabella kaminska notes that other cryptocurrencies may have similar legal issues: "this may concern other cryptocurrencies such as ethereum and eos, which unlike bitcoin were pre-sold to the public in a similar fashion" december , at : am david. said... izabella kaminska has the highlights of the sec filing against ripple labs. they are really damning. december , at : am david. said... david gerard points to this transaction and asks: "don’t you hate it when you send $ . in btc with a fee of $ , ? i guess they can call bitcoin customer service and get it sorted out! it’s not clear if this transaction ever showed up in the mempool — or if it was the miner putting it directly into the block, and doing some bitcoin-laundering." december , at : pm david. said... the magic "money" pump is working overtime to make santa gifts for the children: "tether has issued million tethers in just the past few days, million of those just today. the market pumpers seem to have been blindsided by the sec suit against ripple, and are trying frantically to avert a christmas crash. i’m sure there’s a ton of institutional investors going all-out on christmas eve." december , at : pm david. said... amy castor collected predictions for from cryptocurrency skeptics in nocoiner predictions: will be a year of comedy gold. they're worth reading. for example: "since , the new york attorney general has been investigating tether and its sister company, crypto exchange bitfinex, for fraud. over the summer, the supreme court ruled that the companies need to hand over their financial records to show once and for all just how much money really is underlying the tethers they keep printing. the nyag said bitfinex/tether have agreed to do so by jan. ." 
david gerard expanded on his predictions in in crypto and blockchain: your % reliable guide to the future, including: "we’re currently in the throes of a completely fake bitcoin bubble. this is fueled by billions of tethers, backed by loans, or maybe bitcoins, or maybe hot air. large holders are spending corporate money on bitcoins, fundamentally to promote the value of their own holdings. retail hasn’t shown up — there’s a lack of actual dollars in the exchange system. one btc sale last night ( january) dropped the price $ , . if btc crashes the price, then almost nobody will be able to get out without massive losses. the dollars don’t appear to exist when tested." january , at : pm david. said... in parasitic stablecoins time swanson focuses in exhaustive detail on the dependence of stablecoins on the banking system: "this post will go through some of the background for what commercial bank-backed stablecoins are, the loopholes that the issuers try to reside in, how reliant the greater cryptocurrency world is dependent on u.s. and e.u. commercial banks, and how the principles for financial market structures, otherwise known as pfmis, are being ignored" january , at : pm david. said... cas piancy's brief history of tether entitled a tl; dr for tether and imf researcher john kiff's kiffmeister's #fintech daily digest ( / / ) are both worth reading for views on tether. january , at : pm david. said... further regulation of cryptocurrency on- and off-ramps is announced by fincen in the financial crimes enforcement network proposes rule aimed at closing anti-money laundering regulatory gaps for certain convertible virtual currency and digital asset transactions: "the proposed rule complements existing bsa requirements applicable to banks and msbs by proposing to add reporting requirements for cvc and ltda transactions exceeding $ , in value. 
pursuant to the proposed rule, banks and msbs will have days from the date on which a reportable transaction occurs to file a report with fincen. further, this proposed rule would require banks and msbs to keep records of a customer’s cvc or ltda transactions and counterparties, including verifying the identity of their customers, if a counterparty uses an unhosted or otherwise covered wallet and the transaction is greater than $ , ." january , at : pm david. said... amy castor has transcribed an interview with paolo ardoino and stuart hoegner of tether. hoegner is their general counsel, and he said: "we were very clear last summer in court that part of it is in bitcoin. and if nothing else, there are transaction fees that need to be paid on the omni layer. so bitcoin was and is needed to pay for those transactions, so that shouldn’t come as a surprise to anyone. and we don’t presently comment on our asset makeup overall as a general manner, but we are contemplating starting a process of providing updates on that on the website in this year, in ." so my speculation in this post is confirmed. they do have a magic "money" machine. january , at : pm david. said... there's nothing new under the sun. david gerard's stablecoins through history — michigan bank commissioners report, starts: "a “stablecoin” is a token that a company issues, claiming that the token is backed by currency or assets held in a reserve. the token is usually redeemable in theory — and sometimes in practice. stablecoins are a venerable and well-respected part of the history of us banking! previously, the issuers were called “wildcat banks,” and the tokens were pieces of paper. the wildcat banking era, more politely called the “free banking era,” ran from to . banks at this time were free of federal regulation — they could launch just under state regulation. under the gold standard in operation at the time, these state banks could issue notes, backed by specie — gold or silver — held in reserve.
the quality of these reserves could be a matter of some dispute. the wildcat banks didn’t work out so well. the national bank act was passed in , establishing the united states national banking system and the office of the comptroller of the currency — and taking away the power of state banks to issue paper notes." go read the whole post - the parallels with cryptocurrencies are striking. january , at : pm david. said... in tether publishes … two pie charts of its reserves, david gerard analyses the uninformative "information" tether published about its reserves: "i’m analysing tether’s numbers on the basis that they aren’t just made up, and mean something in any conventional sense. it’s reasonable to doubt this — tether’s been caught directly lying before — but previous tether numbers have tended to have some sort of justification, if only a laughably flimsy one that meets no accepted accounting standards." and amy castor piles on in tether’s first breakdown of reserves consists of two silly pie charts including this gem: "specifically, this is a breakdown of the composition of tether’s reserves on march , , when tether had roughly . billion tethers in circulation. (as of this writing, tether now has nearly billion tethers in circulation.)" so tether is pumping the money supply at nearly $ b a week! may , at : am david. said... jemima kelley is also all over tether's "transparency" in tether says its reserves are  backed by cash to the tune of . . .  . %: "it’s almost like tether thinks it is some kind of bank, isn’t it? well, kind of. in the affadavit, hoegner pointed out that commercial banks operate under a similar “fractional reserve” system, and that this was “hardly a novel concept”. but . per cent is really quite the fraction isn’t it? and the difference here, of course, is that commercial banks are subject to stringent regulations and thorough independent audits, neither of which apply to tether." may , at : am david. said... 
Frances Coppola makes an important point in Tether's smoke and mirrors. Tether's terms of service place them under no obligation to redeem tethers for fiat, or indeed for anything at all:

"Tether reserves the right to delay the redemption or withdrawal of Tether tokens if such delay is necessitated by the illiquidity or unavailability or loss of any reserves held by Tether to back the Tether tokens, and Tether reserves the right to redeem Tether tokens by in-kind redemptions of securities and other assets held in the reserves."

Coppola points out that:

"If Tether is simply going to refuse redemption requests or offer people tokens it has just invented instead of fiat currency, it wouldn't matter if the entire asset base was junk, since it will never have any significant need for cash. So whether Tether's "reserves" are cash equivalents doesn't matter. But what does matter is capital. For banks, funds and other financial institutions, capital is the difference between assets and liabilities. It is the cushion that can absorb losses from asset price falls, whether because of fire sales to raise cash for redemption requests or simply from adverse market movements or creditor defaults. The accountant's attestations reveal that Tether has very little capital. The gap between assets and liabilities is paper-thin: on st March (pdf), for example, it was . % of total consolidated assets, on a balance sheet of more than $ bn in size. Stablecoin holders are thus seriously exposed to the risk that asset values will fall sufficiently for the par peg to USD to break – what money market funds call "breaking the buck"."

Go read the whole post.

May , at : PM

David. said...

Simon Sharwood reports on an actual use case for Tether in Hong Kong busts $ m crypto money-laundering ring:

"Hong Kong's Customs and Excise Department yesterday arrested four men over alleged money-laundering using cryptocurrency.
The department says it detected multiple transactions in a coin named "Tether", with value bouncing between a crypto exchange, local banks, another crypto exchange, and banks in Singapore. HK$ . bn (US$ m) is alleged to have been laundered by the four suspects, in what authorities said was the first case of crypto-laundering detected in the Special Administrative Region (SAR). The launderers were busy: multiple daily transactions of HK$ m were sometimes detected as they went about their scheme, which ran from early to May ."

July , at : AM

David. said...

Fais Kahn has two posts, Crypto and the infinite ladder: what if Tether is fake? and Bitcoin's end: Tether, Binance and the white swans that could bring it all down, about the Binance/Tether nexus that are well worth reading. He concludes:

"Everything around Binance and Tether is murky, even as these two entities dominate the crypto world. Tether redemptions are accelerating, and Binance is in trouble, but why some of these things are happening is guesswork. And what happens if something happens to one of those two? We're entering some uncharted territory. But if things get weird, don't say no one saw it coming."

July , at : PM

David. said...

Taming Wildcat Stablecoins by Gary Gorton and Jeffery Zhang analyzes stablecoins with a historical perspective, starting with the th century "free banking" era in the US. Zhang is a lawyer at the Fed. Izabella Kaminska summarizes their argument in Gorton turns his attention to stablecoins, and points out that:

"Gary Gorton has gained a reputation for being something of an experts' expert on financial systems. Despite being an academic, this is in large part due to what might be described as his practitioner's take on many key issues. The Yale School of Management professor is, for example, best known for a highly respected (albeit still relatively obscure) theory about the role played in bank runs by information-sensitive assets."

July , at : AM

David. said...
Tom Schoenberg, Matt Robinson, and Zeke Faux report that Tether executives said to face criminal probe into bank fraud:

"A U.S. probe into Tether is homing in on whether executives behind the digital token committed bank fraud, a potential criminal case that would have broad implications for the cryptocurrency market. ... Specifically, federal prosecutors are scrutinizing whether Tether concealed from banks that transactions were linked to crypto, said three people with direct knowledge of the matter who asked not to be named because the probe is confidential. ... Federal prosecutors have been circling Tether since at least . In recent months, they sent letters to individuals alerting them that they're targets of the investigation, one of the people said."

July , at : PM
DSHR's Blog

I'm David Rosenthal, and this is a place to discuss the work I'm doing in digital preservation.

Tuesday, December ,

Securing the Software Supply Chain

This is the second part of a series about trust in digital content that might be called: is this the real life? Is this just fantasy? The first part was Certificate Transparency, about how we know we are getting content from the web site we intended to. This part is about how we know we're running the software we intended to.
This question, how to defend against software supply chain attacks, has been in the news recently:

A hacker or hackers sneaked a backdoor into a widely used open source code library with the aim of surreptitiously stealing funds stored in Bitcoin wallets, software developers said Monday. The malicious code was inserted in two stages into event-stream, a code library with million downloads that's used by Fortune companies and small startups alike. Stage one, version . . , published on September , included a benign module known as flatmap-stream. Stage two was implemented on October when flatmap-stream was updated to include malicious code that attempted to steal Bitcoin wallets and transfer their balances to a server located in Kuala Lumpur.

See also here and here. The good news is that this was a highly specific attack against a particular kind of cryptocurrency wallet software; things could have been much worse. The bad news is that, however effective they may be against some supply chain attacks, none of the techniques I discuss below the fold would defend against this particular attack.

In an important paper entitled Software Distribution Transparency and Auditability, Benjamin Hof and Georg Carle from TU Munich use Debian's Advanced Package Tool (APT) as an example of a state-of-the-art software supply chain, and:

- Describe how APT works to maintain up-to-date software on clients by distributing signed packages.
- Review previous efforts to improve the security of this process.
- Propose to enhance APT's security by layering a system similar to Certificate Transparency (CT) on top.
- Detail the operation of their system's logs, auditors and monitors, which are similar to CT's in principle but different in detail.
- Describe and measure the performance of an implementation of their layer on top of APT using the Trillian software underlying some CT implementations.
There are two important "missing pieces" in their system, and all the predecessors, which are the subjects of separate efforts:

- Reproducible builds.
- Bootstrappable compilers.

How APT works

A system running Debian or another APT-based Linux distribution runs software it received in "packages" that contain the software files, and metadata that includes dependencies. Their hashes can be verified against those in a release file, signed by the distribution publisher.

Packages come in two forms, source and compiled. The source of a package is signed by the official package maintainer and submitted to the distribution publisher. The publisher verifies the signature and builds the source to form the compiled package, whose hash is then included in the release file. The signature on the source package verifies that the package maintainer approves this combination of files for the distributor to build. The signature on the release file verifies that the distributor built the corresponding set of packages from approved sources and that the combination is approved for users to install.

Previous work

It is, of course, possible for the private keys on which the maintainer's and distributor's signatures depend to be compromised:

Samuel et al. consider compromise of signing keys in the design of The Update Framework (TUF), a secure application updater. To guard against key compromise, TUF introduces a number of different roles in the update release process, each of which operates cryptographic signing keys. The following three properties are protected by TUF. The content of updates is secured, meaning its integrity is preserved. Securing the availability of updates protects against freeze attacks, where an outdated version with known vulnerabilities is served in place of a security update. The goal of maintaining the correct combination of updates implies the security of meta data.
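The two-signature chain described above (package hash, recorded in a manifest, covered by the publisher's signature) can be sketched as a toy model. Everything here is a simplification for illustration: the function names are hypothetical, an HMAC stands in for a real GPG signature, and a plain dict stands in for APT's actual Release/Packages file formats.

```python
import hashlib
import hmac

# Toy model of APT-style package verification. HMAC with a shared key is a
# stand-in for real public-key signatures; in APT the client holds only the
# publisher's *public* key and verifies a GPG signature.

DISTRIBUTOR_KEY = b"distributor-signing-key"  # hypothetical stand-in key

def sign(data: bytes, key: bytes) -> str:
    # Stand-in for producing a digital signature over `data`.
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def make_release_file(packages: dict[str, bytes]) -> tuple[dict[str, str], str]:
    """Publisher side: record each package's hash, then sign the manifest."""
    manifest = {name: hashlib.sha256(blob).hexdigest()
                for name, blob in packages.items()}
    signature = sign(repr(sorted(manifest.items())).encode(), DISTRIBUTOR_KEY)
    return manifest, signature

def client_verify(name: str, blob: bytes,
                  manifest: dict[str, str], signature: str) -> bool:
    """Client side: check the manifest signature, then the package hash."""
    expected = sign(repr(sorted(manifest.items())).encode(), DISTRIBUTOR_KEY)
    if not hmac.compare_digest(signature, expected):
        return False  # release file tampered with
    return manifest.get(name) == hashlib.sha256(blob).hexdigest()

packages = {"event-stream.deb": b"original contents"}
manifest, sig = make_release_file(packages)
assert client_verify("event-stream.deb", b"original contents", manifest, sig)
assert not client_verify("event-stream.deb", b"backdoored contents", manifest, sig)
```

The point of the sketch is what it cannot do: it verifies that the bits installed match the bits the publisher signed, but says nothing about whether those bits should have been signed in the first place.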
The goal of introducing multiple roles, each with its own key, is to limit the damage a single compromised key can do. An orthogonal approach is to implement multiple keys for each role, with users requiring a quorum of verified signatures before accepting a package:

Nikitin et al. develop CHAINIAC, a system for software update transparency. Software developers create a Merkle tree over a software package and the corresponding binaries. This tree is then signed by the developer, constituting release approval. The signed trees are submitted to co-signing witness servers. The witnesses require a threshold of valid developer signatures to accept a package for release. Additionally, the mapping between source and binary is verified by some of the witnesses. If these two checks succeed, the release is accepted and collectively signed by the witnesses. The system allows rotation of developer keys and witness keys, while the root of trust is an offline key. It also functions as a timestamping service, allowing for verification of update timeliness.

CT-like layer

Hof and Carle's proposal is to use verifiable logs, similar to those in CT, to ensure that malfeasance is detectable. They write:

Compromise of components and collusion of participants must not result in a violation of the following security goals remaining undetected. A goal of our system is to make it infeasible for the attacker to deliver targeted backdoors. For every binary, the system can produce the corresponding source code and the authorizing maintainer. Defined irregularities, such as a failure to correctly increment version numbers, also can be detected by the system.

As I understand it, this is accurate but somewhat misleading. Their system adds a transparency layer on top of APT. The APT release file identifies, by cryptographic hash, the packages, sources, and meta data which includes dependencies.
This release file, meta data, and source packages are submitted to a log server operating an append-only Merkle tree, as shown in Figure . The log adds a new leaf for each file. We assume maintainers may only upload signed source packages to the archive, not binary packages. The archive submits source packages to one or more log servers. We further assume that the buildinfo files capturing the build environment are signed and are made public, e.g. by them being covered by the release file, together with other meta data.

In order to make the maintainers uploading a package accountable, a source package containing all maintainer keys is created and submitted into the archive. This constitutes the declaration by the archive that these keys were authorized to upload for this release. The key ring is required to be append-only, where keys are marked with an expiry date instead of being removed. This allows verification of source packages submitted long ago, using the keys valid at the respective point in time.

Just as with CT, the log replies to each valid submission with a signed commitment, guaranteeing that it will shortly produce the signed root of a Merkle tree that includes the submission:

At release time, meta data and release file are submitted into the log as well. The log server produces a commitment for each submission, which constitutes its promise to include the submitted item into a future version of the tree. The log only accepts authenticated submissions from the archive. The commitment includes a timestamp, hash of the release file, log identifier and the log's signature over these. The archive should then verify that the log has produced a signed tree root that resolves the commitment. To complete the release, the archive publishes the commitments together with the updates. Clients can then proceed with the verification of the release file. The log regularly produces signed Merkle tree roots after receiving a valid inclusion request.
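The core machinery here, an append-only Merkle tree whose root commits to every logged file, and audit paths that let a client check one leaf against that root, can be sketched in a few dozen lines. This is a minimal illustration in the spirit of RFC 6962 (the CT log format), not Hof and Carle's implementation: signatures, commitments and networking are omitted, and the proof carries explicit left/right markers to keep the verification logic simple.

```python
import hashlib

# Minimal append-only Merkle tree sketch: a log commits to submitted files
# via the tree root, and can later prove any leaf's inclusion.

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(leaf: bytes) -> bytes:
    return _h(b"\x00" + leaf)           # domain separation for leaves

def node_hash(left: bytes, right: bytes) -> bytes:
    return _h(b"\x01" + left + right)   # domain separation for interior nodes

def _split(n: int) -> int:
    # Largest power of two strictly less than n (RFC 6962 tree shape).
    k = 1
    while k * 2 < n:
        k *= 2
    return k

def tree_root(leaves: list[bytes]) -> bytes:
    if len(leaves) == 1:
        return leaf_hash(leaves[0])
    k = _split(len(leaves))
    return node_hash(tree_root(leaves[:k]), tree_root(leaves[k:]))

def inclusion_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Audit path for leaves[index] as (sibling_hash, sibling_is_left) pairs."""
    if len(leaves) == 1:
        return []
    k = _split(len(leaves))
    if index < k:
        return inclusion_proof(leaves[:k], index) + [(tree_root(leaves[k:]), False)]
    return inclusion_proof(leaves[k:], index - k) + [(tree_root(leaves[:k]), True)]

def verify_inclusion(leaf: bytes, proof: list[tuple[bytes, bool]],
                     root: bytes) -> bool:
    """Client-side check: recompute the root from the leaf and audit path."""
    node = leaf_hash(leaf)
    for sibling, sibling_is_left in proof:
        node = node_hash(sibling, node) if sibling_is_left else node_hash(node, sibling)
    return node == root

entries = [b"release-1", b"release-2", b"release-3", b"release-4", b"release-5"]
root = tree_root(entries)
assert verify_inclusion(b"release-3", inclusion_proof(entries, 2), root)
assert not verify_inclusion(b"release-X", inclusion_proof(entries, 2), root)
```

In the real system the log signs each root, and consistency proofs between two signed roots (omitted here) let clients check that the log only ever appended entries.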
The signed tree root produced by the log includes the Merkle tree hash, tree size, timestamp, log identifier, and the log's signature. The client now obtains from the distribution mirror not just the release file, but also one or more inclusion commitments showing that the release file has been submitted to one or more of the logs trusted both by the distributor and the client:

Given the release file and inclusion commitment, the client can verify by hashing that the commitment belongs to this release file, and also verify the signature. The client can now query the log, demanding a current tree root and an inclusion proof for this release file. Per standard Merkle tree proofs, the inclusion proof consists of a list of hashes to recompute the received root hash. For the received tree root, a consistency proof is demanded to a previously known tree root. The consistency proof is again a list of hashes. For the two given tree roots, it shows that the log only added items between them. Clients store the signed tree root for the largest tree they have seen, to be used in any later consistency proofs.

Setting aside split-view attacks, which will be discussed later, clients verifying the log inclusion of the release file will detect targeted modifications of the release. Like CT, in addition to logs their system includes auditors, typically integrated with clients, and independent monitors regularly checking the logs for anomalies. For details, you need to read the paper, but some idea can be gained from their description of how the system detects two kinds of attack:

- The hidden version attack
- The split view attack

The hidden version attack

Hof and Carle describe this attack thus:

The hidden version attack attempts to hide a targeted backdoor by following correct signing and log submission procedures. It may require collusion by the archive and an authorized maintainer. The attacker prepares a targeted malicious update to a package, say version v . . , and a clean update v . . .
The archive presents the malicious package only to the victim when it wishes to update. The clean version v . . will be presented to everybody immediately afterwards. A non-targeted user is unlikely to ever observe the backdoored version, thereby drawing a minimal amount of attention to it. The attack however leaves an audit trail in the log, so the update itself can be detected by auditing. A package maintainer monitoring uploads for their packages using the log would notice an additional version being published. A malicious package maintainer would however not alert the public when this happens.

This could be construed as a targeted backdoor in violation of the stated security goals. It is true that the backdoored package would be in the logs, but that in and of itself does not indicate that it is malign:

To mitigate this problem a minimum time between package updates can be introduced. This can be achieved by fixing the issuance of release files and their log submission to a static frequency, or by alerting on quick subsequent updates to one package. There may be good reasons for releasing a new update shortly after its predecessor; for example a vulnerability might be discovered in the predecessor shortly after release.

In the hidden version attack, the attacker increases a version number in order to get the victim to update a package. The victim will install this backdoored update. The monitor detects the hidden version attack due to the irregular release file publication. There are now two cases to be considered: the backdoor may be in the binary package, or it may be in the source package. The first case will be detected by monitors verifying the reproducible builds property. A monitor can rebuild all changed source packages on every update and check whether the resulting binary matches. If not, the blame falls clearly on the archive, because the source does not correspond to the binary, which can be demonstrated by exploiting reproducible builds.
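The monitor's rebuild-and-compare check is conceptually tiny once builds are reproducible. A minimal sketch, in which a pure function stands in for a deterministic build in a pinned environment (all names hypothetical):

```python
import hashlib

# Sketch of a monitor verifying the reproducible-builds property: rebuild
# the logged source and compare the result, bit for bit, against the binary
# the archive published. `build` is a stand-in for a real deterministic
# build; here it is just a pure function of the source bytes.

def build(source: bytes) -> bytes:
    return b"binary-for:" + hashlib.sha256(source).digest()

def monitor_check(logged_source: bytes, published_binary: bytes) -> str:
    rebuilt = build(logged_source)
    if rebuilt == published_binary:
        return "ok"             # binary corresponds to the logged source
    return "blame-archive"      # binary does not match: archive misbehaved

src = b"int main(){return 0;}"
assert monitor_check(src, build(src)) == "ok"
assert monitor_check(src, b"backdoored binary") == "blame-archive"
```

The value of reproducibility is visible in the return values: a mismatch unambiguously implicates the archive, while a match pushes any remaining suspicion onto the logged source, which humans can actually read.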
The second case requires investigation of the packages modified by the update. The source code modifications can be investigated for the changed packages, because all source code is logged. The fact that source code can be analyzed, and no analysis of binaries is required, makes the investigation of the hidden version alert simpler. The blame for this case falls on the maintainer, who can be identified by their signature on the source package. If the upload was signed by a key not in the allowed set, the blame falls on the archive for failing to authorize correctly.

If the package version numbers in the meta data are inconsistent, this constitutes a misbehavior by the submitting archive. It can easily be detected by a monitor. Using the release file, the monitor can also easily ensure, by demanding inclusion proofs, that all required files have been logged.

Note that although their system's monitors detect this attack, and can correctly attribute it, they do so asynchronously. They do not prevent the victim installing the backdoored update.

The split view attack

The logs cannot be assumed to be above suspicion. Hof and Carle describe a log-based attack:

The most significant attack by the log, or with the collusion of the log, is equivocation. In a split-view or equivocation attack, a malicious log presents different versions of the Merkle tree to the victim and to everybody else. Each tree version is kept consistent in itself. The tree presented to the victim will include a leaf that is malicious in some way, such as an update with a backdoor. It might also omit a leaf in order to hide an update. This is a powerful attack within the threat model that violates the security goals, and must therefore be defended against. A defense against this attack requires the client to learn whether they are served from the same tree as the others.

Their defense requires that there be multiple logs under independent administration, perhaps run by different Linux distributions.
Each time a "committing" log generated a new tree root containing new package submissions, it would be required to submit a signed copy of the root to one or more "witness" logs under independent administration. The "committing" log will obtain commitments from the "witness" logs, and supply them to clients. Clients can then verify that the root they obtain from the "committing" log matches that obtained directly from the "witness" logs:

When the client now verifies a log entry with the committing log, it also has to verify that a tree root covering this entry was submitted into the witnessing log. Additionally, the client verifies the append-only property of the witnessing log. The witnessing log introduces additional monitoring requirements. Next to the usual monitoring of the append-only operation, we need to check that no equivocating tree roots are included. To this end, a monitor follows all new log entries of the witnessing log that are tree roots of the committing log. The monitor verifies that they are all valid extensions of the committing log's tree history.

Reproducible builds

One weakness in Hof and Carle's actual implementation is in the connection between the signed package of source and the hashes of the result of compiling it. It is in general impossible to verify that the binaries are the result of compiling the source. In many cases, even if the source is re-compiled in the same environment, the resulting binaries will not be bit-for-bit identical, and thus their hashes will differ. The differences have many causes, including timestamps, randomized file names, and so on. Of course, changes in the build environment can also introduce differences.

To enable binaries to be securely connected to their source, a reproducible builds effort has been under way for more than years. Debian project lead Chris Lamb's -minute talk Think You're Not A Target? A Tale Of Developers ...
provides an overview of the problem and the work to solve it, using three example compromises:

- Alice, a package developer who is blackmailed to distribute binaries that don't match the public source.
- Bob, a build farm sysadmin whose personal computer has been compromised, leading to a compromised build toolchain in the build farm that inserts backdoors into the binaries.
- Carol, a free software enthusiast who distributes binaries to friends. An evil maid attack has compromised her laptop.

As Lamb describes, eliminating all sources of irreproducibility from a package is a painstaking process because there are so many possibilities. They include non-deterministic behaviors such as iterating over hashmaps, parallel builds, timestamps, build paths, file system directory name order, and so on. The work started in with % of Debian packages building reproducibly. Currently, over % of the Debian packages are reproducible. That is good, but % coverage is really necessary to provide security.

Bootstrappable compilers

One of the most famous of the ACM's annual Turing Award lectures was Ken Thompson's Reflections On Trusting Trust (also here). In , Bruce Schneier summarized its message thus:

Way back in , Paul Karger and Roger Schell discovered a devastating attack against computer systems. Ken Thompson described it in his classic speech, "Reflections on Trusting Trust." Basically, an attacker changes a compiler binary to produce malicious versions of some programs, including itself. Once this is done, the attack perpetuates, essentially undetectably. Thompson demonstrated the attack in a devastating way: he subverted a compiler of an experimental victim, allowing Thompson to log in as root without using a password. The victim never noticed the attack, even when they disassembled the binaries -- the compiler rigged the disassembler, too.

Schneier was discussing David A. Wheeler's Countering Trusting Trust through Diverse Double-Compiling.
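The idea behind Wheeler's check is that a Thompson-style backdoor must propagate itself when the compiler compiles its own source, so comparing the suspect compiler's self-compilation against a cross-compilation through an independent trusted compiler exposes it. The toy model below illustrates only that comparison logic: "binaries" are Python closures, "source" is a string, a `fingerprint` attribute stands in for a bit-for-bit comparison (which in reality assumes reproducible builds), and every name is hypothetical.

```python
# Toy model of Wheeler's diverse double-compiling (DDC); purely
# illustrative, not real compilation.

COMPILER_SRC = "compiler-source-v1"

def run_compiler(binary, source):
    """Running a compiler binary on some source yields a new binary."""
    return binary(source)

def make_clean_binary():
    def compiler(source):
        if source == COMPILER_SRC:
            return make_clean_binary()    # faithfully compiles itself
        return ("program", source)        # ordinary program binary
    compiler.fingerprint = "clean"
    return compiler

def make_infected_binary():
    def compiler(source):
        if source == COMPILER_SRC:
            return make_infected_binary() # Thompson-style self-propagation
        return ("program", source)
    compiler.fingerprint = "infected"
    return compiler

def ddc(suspect, trusted) -> bool:
    """DDC: self-compile the suspect twice, cross-compile via the trusted
    compiler twice, and compare the results."""
    self_hosted = run_compiler(run_compiler(suspect, COMPILER_SRC), COMPILER_SRC)
    cross = run_compiler(run_compiler(trusted, COMPILER_SRC), COMPILER_SRC)
    return self_hosted.fingerprint == cross.fingerprint

trusted = make_clean_binary()
assert ddc(make_clean_binary(), trusted) is True      # clean compiler passes
assert ddc(make_infected_binary(), trusted) is False  # backdoor is exposed
```

Note the two assumptions the model makes explicit: the builds must be reproducible for the comparison to mean anything, and the trusted compiler must itself be trustworthy, which is exactly where bootstrappable compilers come in.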
Wheeler's subsequent work led to his Ph.D. thesis. To oversimplify, his technique involves the suspect compiler compiling its source twice, and comparing the output to that from a "trusted" compiler compiling the same source twice. He writes:

DDC uses a second "trusted" compiler cT, which is trusted in the sense that we have a justified confidence that cT does not have triggers or payloads

There are two issues here. The first is an assumption that the suspect compiler's build is reproducible. The second is the issue of where the "justified confidence" comes from. This is the motivation for the bootstrappable builds project, whose goal is to create a process for building a complete toolchain starting from a "seed" binary that is simple enough to be certified "by inspection". One sub-project is Stage :

Stage starts with just a byte hex monitor and builds up the infrastructure required to start some serious software development. With zero external dependencies, with the most painful work already done and real languages such as assembly, Forth and garbage collected Lisp already implemented

The current . . release of Stage :

marks the first C compiler hand written in assembly with structs, unions, inline assembly and the ability to self-host its C version, which is also self-hosting

There is clearly a long way still to go to a bootstrapped full toolchain.

A more secure software supply chain

A software supply chain based on APT enhanced with Hof and Carle's transparency layer, distributing packages reproducibly built with bootstrapped compilers, would be much more difficult to attack than current technology. Users of the software could have much higher confidence that the binaries they installed had been built from the corresponding source, and that no attacker had introduced functionality not evident in the source. These checks would take place during software installation or update.
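Install-time checks say nothing about what happens to the files afterwards. A tripwire-style integrity check is conceptually simple: record a baseline of file hashes, then re-scan and report differences. A minimal sketch, with files modeled as an in-memory dict of path to bytes to keep it self-contained (a real tool would walk the filesystem and would need to protect the baseline manifest itself, e.g. by signing it):

```python
import hashlib

# Minimal tripwire-style sketch: snapshot a hash baseline of installed
# files, re-scan later, and classify every difference.

def snapshot(files: dict[str, bytes]) -> dict[str, str]:
    """Hash every file into a baseline manifest."""
    return {path: hashlib.sha256(data).hexdigest()
            for path, data in files.items()}

def compare(baseline: dict[str, str],
            current: dict[str, str]) -> dict[str, list[str]]:
    """Report files that changed, disappeared, or appeared since baseline."""
    return {
        "modified": sorted(p for p in baseline
                           if p in current and baseline[p] != current[p]),
        "removed":  sorted(p for p in baseline if p not in current),
        "added":    sorted(p for p in current if p not in baseline),
    }

installed = {"/usr/bin/tool": b"v1", "/etc/tool.conf": b"debug=0"}
baseline = snapshot(installed)

installed["/usr/bin/tool"] = b"v1+backdoor"       # simulated tampering
installed["/usr/bin/extra"] = b"dropped-in file"  # simulated implant
report = compare(baseline, snapshot(installed))
assert report["modified"] == ["/usr/bin/tool"]
assert report["added"] == ["/usr/bin/extra"]
assert report["removed"] == []
```

The weak point of any such mechanism is the provenance of the baseline hashes; a transparency-backed supply chain is precisely what could give it a trustworthy source for them.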
Users would still need to verify that the software had not been modified after installation, perhaps using a tripwire-like mechanism, but this mechanism would have a trustworthy source of the hashes it needs to do its job.

Remaining software problems

Despite all these enhancements, the event-stream attack would still have succeeded. The attackers targeted a widely-used, fairly old package that was still being maintained by the original author, a volunteer. They offered to take over what had become a burdensome task, and the offer was accepted. Now, despite the fact that the attacker was just an e-mail address, they were the official maintainer of the package and could authorize changes. Their changes, being authorized by the official package maintainer, would pass unimpeded through even the enhanced supply chain.

First, it is important to observe that the goal of Hof and Carle's system is to detect targeted attacks, those delivered to a (typically small) subset of user systems. The event-stream attack was not targeted; it was delivered to all systems updating the package, irrespective of whether they contained the wallet to be compromised. That their system is designed only to detect targeted attacks seems to me to be a significant weakness. It is very easy to design an attack, like the event-stream one, that is broadcast to all systems but is harmless on all but the targets.

Second, Hof and Carle's system operates asynchronously, so it is intended to detect rather than prevent victim compromise. Of course, once the attack was detected it could be unambiguously attributed. But the attack would already have succeeded in purloining cryptocurrency from the target wallets. This seems to me to be a second weakness; in many cases the malign package would only need to be resident on the victim for a short time to exfiltrate critical data, or install further malware providing persistence.

Strictly speaking, the attribution would be to a private key.
More realistically, it would be to a key and an e-mail address. In the case of an attack, linking these to a human malefactor would likely be difficult, leaving the perpetrators free to mount further attacks. Even if the maintainer had not, as in the event-stream attack, been replaced via social engineering, it is possible that their e-mail and private key could have been compromised.

The event-stream attack can be thought of as the organization-level analog of a Sybil attack on a peer-to-peer system. Creating an e-mail identity is almost free. The defense against Sybil attacks is to make maintaining and using an identity in the system expensive. As with proof-of-work in Bitcoin, the idea is that the white hats will spend more (compute more useless hashes) than the black hats. Even this has limits. Eric Budish's analysis shows that, if the potential gain from an attack on a blockchain is to be outweighed by its cost, the value of transactions in a block must be less than the block reward.

Would a similar defense against "Sybil" attacks on the software supply chain be possible? There are a number of issues:

- The potential gains from such attacks are large, both because they can compromise very large numbers of systems quickly (event-stream had m downloads), and because the banking credentials, cryptocurrency wallets, and other data these systems contain can quickly be converted into large amounts of cash. Thus the penalty for mounting an attack would have to be an even larger amount of cash.
- Package maintainers would need to be bonded or insured for large sums, which implies that distributions and package libraries would need organizational structures capable of enforcing these requirements.
- Bonding and insurance would be expensive for package maintainers, who are mostly unpaid volunteers. There would have to be a way of paying them for their efforts, at least enough to cover the costs of bonding and insurance.
- thus users of the packages would need to pay for their use, which means the packages could be neither free nor open source.

the foss (free open source software) movement will need to find other ways to combat sybil attacks, which will be hard if the reward for a successful attack greatly exceeds the cost of mounting it. how to adequately reward maintainers for their essential but under-appreciated efforts is a fundamental problem for foss.

hof and carle's system shares one more difficulty with ct. both systems are layered on top of an existing infrastructure, respectively apt and tls with certificate authorities. in both cases there is a bootstrap problem: an assumption that, as the system starts up, there is not an attack already underway. in ct's case the communications between the cas, web sites, logs, auditors and monitors all use the very tls infrastructure that is being secured (see here and here). this is also the case for hof and carle, plus they have to assume the lack of malware in the initial state of the packages.

hardware supply chain problems

all this effort to secure the software supply chain will be for naught if the hardware it runs on is compromised. much of what we think of as "hardware" contains software to which what we think of as "software" has no access or visibility. examples include intel's management engine, the baseband processor in mobile devices, and complex i/o devices such as nics and gpus. even if this "firmware" is visible to the system cpu, it is likely supplied as a "binary blob" whose source code is inaccessible.

attacks on the hardware supply chain have been in the news recently, with the firestorm of publicity sparked by bloomberg's probably erroneous reports of a chinese attack on supermicro motherboards that added "rice-grain" sized malign chips. the details will have to wait for a future post.

posted by david. at : am labels: security

comments:

bryan newbold said...
nit: in the last bullet point, i think you mean "bloomberg", not "motherboard". december , at : pm

david. said... thanks for correcting my fused neurons, bryan! december , at : pm

david. said... i really should have pointed out that this whole post is about software that is installed on your device. these days, much of the software that runs on your device is not installed; it is delivered via ad networks and runs inside your browser. as blissex wrote in this comment, we are living: "in an age in which every browser gifts a free-to-use, unlimited-usage, fast vm to every visited web site, and these vms can boot and run quite responsive d games or linux distributions". ad blockers, essential equipment in this age, merely reduce the incidence of malware delivered via ad networks. brannon dorsey's fascinating experiments in malvertising are described by cory doctorow thus: "anyone can make an account, create an ad with god-knows-what javascript in it, then pay to have the network serve that ad up to thousands of browsers. ... within about three hours, his code (experimental, not malicious, apart from surreptitiously chewing up processing resources) was running on , web browsers, on , unique ip addresses. adtech, it turns out, is a superb vector for injecting malware around the planet. some other fun details: dorsey found that when people loaded his ad, they left the tab open an average of minutes. that gave him huge amounts of compute time -- full days, in fact, for about $ in ad purchase." december , at : pm

david. said... i regret not citing john leyden's open-source software supply chain vulns have doubled in months to illustrate the scope of the problem: "miscreants have even started to inject (or mainline) vulnerabilities directly into open source projects, according to sonatype, which cited recent examples of this type of malfeasance in its study. el reg has reported on several such incidents including a code hack on open-source utility eslint-scope back in july."
and: "organisations are still downloading vulnerable versions of the apache struts framework at much the same rate as before the equifax data breach, at around , downloads per month. downloads of buggy versions of another popular web application framework called spring were also little changed since a september vulnerability, sonatype added. the , average in september has declined only per cent to , over the last months." december , at : am

david. said... catalin cimpanu's users report losing bitcoin in clever hack of electrum wallets describes a software supply chain attack that started around st december and netted around $ k "worth" of btc. december , at : am

david. said... popular wordpress plugin hacked by angry former employee is like the event-stream hack in that no amount of transparency would have prevented it. the disgruntled perpetrator apparently had valid credentials for the official source of the software: "the plugin in question is wpml (or wp multilingual), the most popular wordpress plugin for translating and serving wordpress sites in multiple languages. according to its website, wpml has over , paying customers and is one of the very few wordpress plugins that is so reputable that it doesn't need to advertise itself with a free version on the official wordpress.org plugins repository." january , at : am

david. said... the fourth annual report for the national security adviser from the huawei cyber security evaluation centre oversight board in the uk is interesting. the centre has access to the source code for huawei products, and is working with huawei to make the builds reproducible: "hcsec have worked with huawei r&d to try to correct the deficiencies in the underlying build and compilation process for these four products. this has taken significant effort from all sides and has resulted in a single product that can be built repeatedly from source to the general availability (ga) version as distributed.
this particular build has yet to be deployed by any uk operator, but we expect deployment by uk operators in the future, as part of their normal network release cycle. the remaining three products from the pilot are expected to be made commercially available in h , with each having reproducible binaries." january , at : am

david. said... huawei says fixing "the deficiencies in the underlying build and compilation process" in its carrier products will take five years. february , at : pm

david. said... in cyber-mercenary groups shouldn't be trusted in your browser or anywhere else, the eff's cooper quintin describes the latest example showing why certificate authorities can't be trusted: "darkmatter, the notorious cyber-mercenary firm based in the united arab emirates, is seeking to become approved as a top-level certificate authority in mozilla's root certificate program. giving such a trusted position to this company would be a very bad idea. darkmatter has a business interest in subverting encryption, and would be able to potentially decrypt any https traffic they intercepted. one of the things https is good at is protecting your private communications from snooping governments—and when governments want to snoop, they regularly hire darkmatter to do their dirty work. ... darkmatter was already given an "intermediate" certificate by another company, called quovadis, now owned by digicert. that's bad enough, but the "intermediate" authority at least comes with ostensible oversight by digicert." hat tip to cory doctorow. february , at : pm

david. said... gareth corfield's just android things: m phones, gadgets installed 'adware-ridden' mobe simulator games reports on a very successful software supply chain attack: "android adware found its way into as many as million devices – after it was stashed inside a large number of those bizarre viral mundane job simulation games, we're told. ...
although researchers believed that the titles were legitimate, they said they thought the devs were "scammed" into using a "malicious sdk, unaware of its content, leading to the fact that this campaign was not targeting a specific country or developed by the same developer."" march , at : am

david. said... kim zetter's hackers hijacked asus software updates to install backdoors on thousands of computers is an excellent example of a software supply chain attack: "researchers at cybersecurity firm kaspersky lab say that asus, one of the world's largest computer makers, was used to unwittingly install a malicious backdoor on thousands of its customers' computers last year after attackers compromised a server for the company's live software update tool. the malicious file was signed with legitimate asus digital certificates to make it appear to be an authentic software update from the company, kaspersky lab says." march , at : am

david. said... sean gallagher's uk cyber security officials report huawei's security practices are a mess reports on the latest report from the hcsec oversight board. they still can't do reproducible builds: "hcsec reported that the software build process used by huawei results in inconsistencies between software images. in other words, products ship with software with widely varying fingerprints, so it's impossible to determine whether the code is the same based on checksums." which isn't a surprise; huawei already said it'd take another years. but i'd be more concerned that: "one major problem cited by the report is that a large portion of huawei's network gear still relies on version . of wind river's vxworks real-time operating system (rtos), which has reached its "end of life" and will soon no longer be supported. huawei has bought a premium long-term support license from vxworks, but that support runs out in ." and huawei is rolling its own rtos based on linux. what could possibly go wrong? march , at : pm

david. said...
the latest software supply chain attack victim is bootstrap-sass via rubygems, with about m downloads. april , at : am

david. said... it turns out that shadowhammer targets multiple companies, asus just one of them: "asus was not the only company targeted by supply-chain attacks during the shadowhammer hacking operation as discovered by kaspersky, with at least six other organizations having been infiltrated by the attackers. as further found out by kaspersky's security researchers, asus' supply chain was successfully compromised by trojanizing one of the company's notebook software updaters, named asus live updater, which eventually was downloaded and installed on the computers of tens of thousands of customers according to experts' estimations." april , at : pm

david. said... who owns huawei? by christopher balding and donald c. clarke concludes that: "huawei calls itself "employee-owned," but this claim is questionable, and the corporate structure described on its website is misleading." april , at : pm

david. said... david a. wheeler reports on another not-very-successful software supply chain attack: "a malicious backdoor has been found in the popular open source software library bootstrap-sass. this was done by someone who created an unauthorized updated version of the software on the rubygems software hosting site. the good news is that it was quickly detected (within the day) and updated, and that limited the impact of this subversion. the backdoored version ( . . . ) was only downloaded , times. for comparison, as of april the previous version in that branch ( . . . ) was downloaded . million times, and the following version . . . (which duplicated . . . ) was downloaded , times (that's more than the subverted version!). so it is likely that almost all subverted systems have already been fixed." wheeler draws three lessons from this: 1. maintainers need 2fa. 2. don't update your dependencies the same day they're released. 3. reproducible builds!
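wheeler's third lesson, like the tripwire-like mechanism discussed at the top of this post, comes down to comparing what is actually on disk against known-good hashes obtained from a trustworthy source. a minimal sketch in python of that comparison step; the manifest, file name and hash here are purely illustrative, not any real package's:

```python
import hashlib
from pathlib import Path

# hypothetical manifest of known-good sha-256 hashes; in a real deployment
# this would come from a trustworthy source such as a signed release
# manifest, a reproducible build, or a transparency log -- not be hard-coded
MANIFEST = {
    "package/index.js": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: Path) -> str:
    """return the hex sha-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(root: Path, manifest: dict) -> list:
    """return the names of files whose current hash differs from the manifest."""
    return [name for name, expected in manifest.items()
            if sha256_of(root / name) != expected]
```

a real tripwire-style deployment would fetch and signature-check the manifest rather than hard-code it, and would run the comparison on a schedule; the hard part, as the post argues, is not the hashing but making the source of the hashes trustworthy.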
may , at : am

david. said... andy greenberg's a mysterious hacker gang is on a supply-chain hacking spree ties various software supply chain attacks together and attributes them: "over the past three years, supply-chain attacks that exploited the software distribution channels of at least six different companies have now all been tied to a single group of likely chinese-speaking hackers. the group is known as barium, or sometimes shadowhammer, shadowpad, or wicked panda, depending on which security firm you ask. more than perhaps any other known hacker team, barium appears to use supply-chain attacks as its core tool. its attacks all follow a similar pattern: seed out infections to a massive collection of victims, then sort through them to find espionage targets." may , at : am

david. said... someone is spamming and breaking a core component of pgp's ecosystem by lorenzo franceschi-bicchierai reports on an attack on two of the core pgp developers, robert j. hansen and daniel kahn gillmor: "last week, contributors to the pgp protocol gnupg noticed that someone was "poisoning" or "flooding" their certificates. in this case, poisoning refers to an attack where someone spams a certificate with a large number of signatures or certifications. this makes it impossible for the pgp software that people use to verify its authenticity, which can make the software unusable or break. in practice, according to one of the gnupg developers targeted by this attack, the hackers could make it impossible for people using linux to download updates, which are verified via pgp." the problem lies in the sks keyserver: "the sks software was written in an obscure language by a phd student for his thesis. and because of that, according to hansen, "there is literally no one in the keyserver community who feels qualified to do a serious overhaul on the codebase." in other words, these attacks are here to stay." july , at : pm

david. said...
dan goodin's the year-long rash of supply chain attacks against open source is getting worse is a useful overview of the recent incidents pointing to the need for verifiable logs and reproducible builds. and, of course, for requiring developers to use multi-factor authentication. august , at : pm

david. said... catalin cimpanu's hacking high-profile dev accounts could compromise half of the npm ecosystem is based on small world with high risks: a study of security threats in the npm ecosystem by marcus zimmerman et al: "their goal was to get an idea of how hacking one or more npm maintainer accounts, or how vulnerabilities in one or more packages, reverberated across the npm ecosystem; along with the critical mass needed to cause security incidents inside tens of thousands of npm projects at a time. ... the normal npm javascript package has an abnormally large number of dependencies -- with a package loading third-party packages from different maintainers, on average. this number is lower for popular packages, which only rely on code from other maintainers, on average, but the research team found that some popular npm packages ( ) relied on code written by more than maintainers. ... "highly influential maintainers affect more than , packages, making them prime targets for attacks," the research team said. "if an attacker manages to compromise the account of any of the most influential maintainers, the community will experience a serious security incident." furthermore, in a worst-case scenario where multiple maintainers collude, or a hacker gains access to a large number of accounts, the darmstadt team said that it only takes access to popular npm maintainer accounts to deploy malicious code impacting more than half of the npm ecosystem." october , at : am

david. said...
five years after the equation group hdd hacks, firmware security still sucks by catalin cimpanu illustrates how far disk drive firmware security is ahead of the rest of the device firmware world: "in , security researchers from kaspersky discovered a novel type of malware that nobody else had seen before until then. the malware, known as nls_ .dll, had the ability to rewrite hdd firmware for a dozen of hdd brands to plant persistent backdoors. kaspersky said the malware was used in attacks against systems all over the world. kaspersky researchers claimed the malware was developed by a hacker group known as the equation group, a codename that was later associated with the us national security agency (nsa). knowing that the nsa was spying on their customers led many hdd and ssd vendors to improve the security of their firmware, eclypsium said. however, five years since the equation group's hdd implants were found in the wild and introduced the hardware industry to the power of firmware hacking, eclypsium says vendors have only partially addressed this problem. "after the disclosure of the equation group's drive implants, many hdd and ssd vendors made changes to ensure their components would only accept valid firmware. however, many of the other peripheral components have yet to follow suit," researchers said." february , at : pm

david. said... marc ohm et al analyze supply chain attacks via open source packages in three repositories in backstabber's knife collection: a review of open source software supply chain attacks: "this paper presents a dataset of malicious software packages that were used in real-world attacks on open source software supply chains, and which were distributed via the popular package repositories npm, pypi, and rubygems. those packages, dating from november to november , were manually collected and analyzed.
the paper also presents two general attack trees to provide a structured overview of techniques to inject malicious code into the dependency tree of downstream users, and to execute such code at different times and under different conditions." may , at : am

david. said... bruce schneier's survey of supply chain attacks starts: "the atlantic council has released a report that looks at the history of computer supply chain attacks." the atlantic council also has a summary of the report entitled breaking trust: shades of crisis across an insecure software supply chain: "software supply chain security remains an under-appreciated domain of national security policymaking. working to improve the security of software supporting private sector enterprise as well as sensitive defense and intelligence organizations requires a more coherent policy response together with industry and open source communities. this report profiles attacks and disclosures against the software supply chain from the past decade to highlight the need for action and presents recommendations to both raise the cost of these attacks and limit their harm." july , at : pm

david. said... via my friend jim gettys, we learn of a major milestone in the development of a truly reproducible build environment. last june jan nieuwenhuizen posted guix further reduces bootstrap seed to %. the tl;dr is: "gnu mes is closely related to the bootstrappable builds project. mes aims to create an entirely source-based bootstrapping path for the guix system and other interested gnu/linux distributions. the goal is to start from a minimal, easily inspectable binary (which should be readable as source) and bootstrap into something close to r rs scheme. currently, mes consists of a mutual self-hosting scheme interpreter and c compiler. it also implements a c library. mes, the scheme interpreter, is written in about , lines of code of simple c. mescc, the c compiler, is written in scheme.
together, mes and mescc can compile a lightly patched tinycc that is self-hosting. using this tinycc and the mes c library, it is possible to bootstrap the entire guix system for i -linux and x _ -linux." the binary they plan to start from is: "our next target will be a third reduction by ~ %; the full-source bootstrap will replace the mescc-tools and gnu mes binaries by stage and m -planet. the stage project by jeremiah orians starts everything from ~ bytes; virtually nothing. have a look at this incredible project if you haven't already done so." in mid november nieuwenhuizen tweeted: "we just compiled the first working program using a reduced binary seed bootstrap'ped tinycc for arm" and on december he tweeted: "the reduced binary seed bootstrap is coming to arm: tiny c builds on @guixhpc wip-arm-bootstrap branch". starting from a working tinycc, you can build the current compiler chain. december , at : pm
make opinionated software | getting real
getting real, chapter: make opinionated software

your app should take sides

some people argue software should be agnostic. they say it's arrogant for developers to limit features or ignore feature requests. they say software should always be as flexible as possible. we think that's bullshit. the best software has a vision. the best software takes sides. when someone uses software, they're not just looking for features, they're looking for an approach. they're looking for a vision. decide what your vision is and run with it. and remember, if they don't like your vision there are plenty of other visions out there for people. don't go chasing people you'll never make happy.

a great example is the original wiki design. ward cunningham and friends deliberately stripped the wiki of many features that were considered integral to document collaboration in the past. instead of attributing each change of the document to a certain person, they removed much of the visual representation of ownership. they made the content ego-less and time-less. they decided it wasn't important who wrote the content or when it was written. and that has made all the difference. this decision fostered a shared sense of community and was a key ingredient in the success of wikipedia.

our apps have followed a similar path. they don't try to be all things to all people. they have an attitude. they seek out customers who are actually partners. they speak to people who share our vision. you're either on the bus or off the bus.

next: half, not half-assed
getting real: the smarter, faster, easier way to build a successful web application, by basecamp

jpmorgan prime money market fund | j.p. morgan asset management
DSHR's Blog: Unstoppable Code?

I'm David Rosenthal, and this is a place to discuss the work I'm doing in digital preservation.

Tuesday, June: Unstoppable Code?

This is the website of the "decentralized finance" system DeFi100, running on the Binance Smart Chain, after the promoters pulled a multi-million-dollar exit scam. Their message sums up the ethos of cryptocurrencies:

The business model of crypto is to provide a platform for crooks to scam muppets without running the risk of jail time. Few understand this.
https://t.co/vfeyosrkpe
— Trolly🧻 McTrollface 🌷🥀💩 (@Tr0llyTr0llFace), May 2021

Governments around the world have started to wake up to the fact that this message isn't just for the "muppets"; it is also the message of cryptocurrencies for governments and civil society. Below the fold I look into how governments might respond.

The externalities of cryptocurrencies include:
- Massive carbon emissions.
- Funding "rogue states" such as North Korea and Iran.
- Tax evasion.
- Laundering the proceeds of crime, including the drug trade, theft and fraud, and armed robbery.
- An epidemic of ransomware.
- A wave of securities fraud targeting the greedy and vulnerable.
- Shortages of products including graphics cards, hard disks, and chips in general as limited fab capacity is diverted to mining ASICs.
- Abuse of free tiers of web services.
- Noise pollution.

It seems that recent ransomware attacks, including the May 7th one on Colonial Pipeline and the less publicized May 1st one on Scripps Health La Jolla, have reached a tipping point. Kat Jerick's Scripps Health slowly coming back online, weeks after attack reports:

"It's likely that it's taking a long time because of negotiations going on with the perpetrators, and the prevailing narrative is that they have the contents of the electronic health records system that are being used for 'double extortion,'" said Michael Hamilton, former chief information security officer for the city of Seattle and CISO of healthcare cybersecurity firm CI Security, in an email to Healthcare IT News.

If that's true, Scripps certainly wouldn't be alone: the healthcare industry saw a number of high-profile ransomware incidents in the last year, including a cyberattack on Universal Health Services that led to a lengthy network shutdown and a $67 million loss. More recently, customers of the electronic health record vendor Aprima also reported weeks of security-related outages.

In response governments are trying to regulate (US) or ban (China, India) cryptocurrencies.
The libertarians who designed the technology believed they had made governments irrelevant. For example, the Decentralized Autonomous Organization (DAO)'s home page said:

The DAO's Mission: To blaze a new path in business organization for the betterment of its members, existing simultaneously nowhere and everywhere and operating solely with the steadfast iron will of unstoppable code.

This was before a combination of vulnerabilities in the underlying code was used to steal its entire contents, a significant fraction of all the Ether in circulation. If cryptocurrencies are based on the "iron will of unstoppable code", how would regulation or bans work?

Nicholas Weaver explains how his group stopped the plague of Viagra spam in The Ransomware Problem Is a Bitcoin Problem:

Although they drop-shipped products from international locations, they still needed to process credit card payments, and at the time almost all the gangs used just three banks. This revelation, which was highlighted in a New York Times story, resulted in the closure of the gangs' bank accounts within days of the story. This was the beginning of the end for the spam Viagra industry. ... Subsequently, any spammer who dared use the "Viagra" trademark would quickly find their ability to accept credit cards irrevocably compromised, as someone would perform a test purchase to find the receiving bank and then Pfizer would send the receiving bank a nastygram.

Weaver draws the analogy with cryptocurrencies and "big-game" ransomware:

These operations target companies instead of individuals, in an attempt to extort millions rather than hundreds of dollars at a time. The revenues are large enough that some gangs can even specialize and develop zero-day vulnerabilities for specialized software. Even the cryptocurrency community has noted that ransomware is a Bitcoin problem. Multimillion-dollar ransoms, paid in Bitcoin, now seem to be commonplace.
This strongly suggests that the best way to deal with this new era of big-game ransomware will involve not just securing computer systems (after all, you can't patch against a zero-day vulnerability) or prosecuting (since Russia clearly doesn't care to either extradite or prosecute these criminals). It will also require disrupting the one payment channel capable of moving millions at a time outside of money laundering laws: Bitcoin and other cryptocurrencies. ... There are only three existing mechanisms capable of transferring a multimillion-dollar ransom — a bank-to-bank transfer, cash or cryptocurrencies. No other mechanisms currently exist that can meet the requirements of transferring millions of dollars at a time.

The ransomware gangs can't use normal banking. Even the most blatantly corrupt bank would consider processing ransomware payments as an existential risk. My group and I noticed this with the Viagra spammers: the spammers' banks had a choice to either unbank the bad guys or be cut off from the financial system. The same would apply if ransomware tried to use wire transfers.

Cash is similarly a nonstarter. A multimillion-dollar ransom in $100 bills weighs on the order of a hundred pounds — two full-weight suitcases. Arranging such a transfer, to an extortionist operating outside the U.S., is clearly infeasible just from a physical standpoint. The ransomware purveyors need transfers that don't require physical presence and a hundred pounds of stuff.

This means that cryptocurrencies are the only tool left for ransomware purveyors. So, if governments take meaningful action against Bitcoin and other cryptocurrencies, they should be able to disrupt this new ransomware plague and then eradicate it, as was seen with the spam Viagra industry. For in the end, we don't have a ransomware problem, we have a Bitcoin problem.

I agree with Weaver that disrupting the ransomware payment channel is an essential part of a solution to the ransomware problem.
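Weaver's point about the physical infeasibility of cash is simple arithmetic. A minimal sketch (the $5M ransom figure and the one-gram weight of a US banknote are illustrative assumptions, since the post's exact figures are elided here):

```python
# Back-of-the-envelope check of the "two full-weight suitcases" claim.
# A US banknote weighs about one gram regardless of denomination.

def cash_weight_kg(ransom_usd: float, note_value: int = 100,
                   note_grams: float = 1.0) -> float:
    """Weight of a ransom paid in banknotes, in kilograms."""
    notes = ransom_usd / note_value
    return notes * note_grams / 1000.0

weight = cash_weight_kg(5_000_000)  # a hypothetical $5M ransom
print(f"{weight:.0f} kg (~{weight * 2.2:.0f} lb)")  # → 50 kg (~110 lb)
```

At roughly 110 lb for a $5M ransom, the "hundred pounds of stuff" description checks out for ransoms in this range.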
It would require denying cryptocurrency exchanges access to the banking system, and global agreement to do this would be hard. Given the involvement of major financial institutions and politicians, it would be hard even in the US. So what else could be done?

Nearly a year ago Joe Kelly wrote a two-part post explaining how governments could take action against Bitcoin (and by extension any proof-of-work blockchain). In the first part, How To Kill Bitcoin (Part 1): Is Bitcoin 'Unstoppable Code'?, he summarized the crypto-bros' argument:

They say Bitcoin can't be stopped. Just like there's no way you can stop two people sending encrypted messages to each other, so — they say — there's no way you can stop the Bitcoin network. There's no CEO to put on trial, no central server to seize, and no organisation to put pressure on. The Bitcoin network is, fundamentally, just people sending messages to each other, peer to peer, and if you knock out one node on the network, or even thousands of nodes, the honey badger don't give a shit: the other nodes keep going like nothing happened, and more nodes can come online at any time, anywhere in the world.

So there you have it: it's thousands of people running nodes — running code — and it's unstoppable… Therefore Bitcoin is unstoppable code; Q.E.D.; case closed; no further questions your honour. This money is above the law, and governments cannot possibly hope to control it, right?

The problem with this, as with most of the crypto-bros' arguments, is that it applies to the Platonic ideal of the decentralized blockchain. In the real world economies of scale mean things aren't quite like the ideal, as Kelly explains:

It's not just a network, it's money. The whole system is held together by a core structure of economic incentives which critically depends on Bitcoin's value and its ability to function for people as money. You can attack this.

It's not just code, it's physical.
Proof-of-work mining is a real-world process and, thanks to free-market forces and economies of scale, it results in large, easy-to-find operations with significant energy footprints and no defence. You can attack these.

If you can exploit the practical reality of the system and find a way to reduce it to a state of total economic dysfunction, then it doesn't matter how resilient the underlying peer-to-peer network is, the end result is the same — you have killed Bitcoin.

Kelly explains why the idea of regulating cryptocurrencies is doomed to failure:

The entire point of Bitcoin is to neutralise government controls on money, which includes AML and taxes. Notice that there's no great technological difficulty in allowing for the completely unrestricted anonymous sending of a fixed-supply money — the barrier is legal and societal, because of the practical consequences of doing that. So the cooperation of the crypto world with the law is a temporary arrangement, and it's not an honest handshake. The right hand (truthfully) expresses "we will do everything we can to comply" while the left hand is hard at work on the technology which makes that compliance impossible.

Sure, Bitcoin is pretty traceable now, and sometimes it even helps with finding criminals who don't have the technical savvy to cover their tracks, but you'll be fighting a losing battle over time as stronger, more convenient privacy tooling gets added to the Bitcoin protocol and the wider ecosystem around it. ... So yeah: half measures like AML and censorship aren't going to cut it. If you want to kill Bitcoin, that means taking it out wholesale; it means forcing the system into disequilibrium and inducing economic collapse.

In How To Kill Bitcoin (Part 2): No Can Spend, Kelly explains how a group of major governments could seize control of the majority of the mining power and mount a specific kind of 51% attack.
The basic idea is that governments ban businesses, including exchanges, from transacting in Bitcoin, and seize a majority of the hash rate to mine empty blocks:

As it stands, after seizing a majority of the active hash rate, you can generate proof-of-work hashes faster than the remaining miners around the world combined. For every valid block that rebel miners, collectively, can produce on the Bitcoin blockchain, you can produce more. You use your limitless advantage to execute the following strategy:

1. Mine an empty block — i.e. a block which is perfectly valid but contains no transactions.
2. Keep unannounced blocks to yourself — i.e. mine 'extra' empty blocks ahead of where the chain tip is now, but don't actually share any of these blocks with the network.
3. Whenever a rebel miner announces a valid block, orphan it (override it) by announcing a longer chain with more cumulative proof-of-work — i.e. announce some of your withheld blocks.
4. Repeat (go back to step 1).

The result of this is that Bitcoin transactions are no longer being processed, and you've created a black hole of expenditure for rebel miners. Every time a rebel miner spends money to mine a block, it's money down the drain: they don't earn any block rewards for it. All transactions just sit in the mempool, being (unstoppably) messaged back and forth between nodes, waiting to be included in a block, but they never make it in. In other words, no-one can spend their Bitcoin, no matter who they are or where they are in the world.

Empty blocks wouldn't be hard to detect and ignore, but it would be easy for the government miners to fill their blocks with valid transactions between addresses that they control.

Things have changed since Kelly wrote, in ways that complicate his solution. When he wrote, it was estimated that the majority of the Bitcoin mining power was located in China; China could have implemented Kelly's attack unilaterally.
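Kelly's orphaning strategy can be illustrated with a toy Monte Carlo simulation. This is a sketch under simplifying assumptions (each block is won independently with probability equal to hash-rate share, network latency is ignored, and the attacker publishes just enough withheld blocks to orphan each rebel block); the hash-rate shares used below are arbitrary illustrations, not figures from the post:

```python
import random

def simulate_attack(attacker_share: float, n_blocks: int = 100_000,
                    seed: int = 42) -> float:
    """Fraction of blocks mined by rebels that survive on-chain.

    The attacker mines empty blocks privately and, whenever its private
    lead is positive, orphans any rebel block by publishing a longer chain.
    """
    rng = random.Random(seed)
    lead = 0            # attacker's private lead over the public tip
    rebel_survived = 0  # rebel blocks the attacker failed to orphan
    for _ in range(n_blocks):
        if rng.random() < attacker_share:
            lead += 1                # attacker extends its private chain
        elif lead > 0:
            lead -= 1                # attacker announces, orphaning the rebel block
        else:
            rebel_survived += 1      # no private lead: the rebel block sticks
    return rebel_survived / n_blocks

print(f"majority attacker: {simulate_attack(0.75):.4%} of rebel blocks survive")
print(f"minority attacker: {simulate_attack(0.10):.4%} of rebel blocks survive")
```

With a majority of the hash rate the attacker orphans essentially every rebel block, which is the "no can spend" outcome; a minority attacker can only degrade throughput.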
But since then China has been gradually increasing the pressure on cryptocurrency miners. This has motivated new mining capacity to set up elsewhere. The graph shows a recent estimate, with a much smaller share in China. As David Gerard reports:

Bitcoin mining in China is confirmed to be shutting down — miners are trying to move containers full of mining rigs out of the country as quickly as possible. It's still not clear where they can quickly put a medium-sized country's worth of electricity usage. [Reuters] ... Here's a Twitter thread about the miners getting out of China before the hammer falls. You'll be pleased to hear that this is actually good news for Bitcoin. [Twitter]

If the recent estimate is correct, Kelly's assumption that a group of governments could seize the majority of the mining power looks implausible. The best that could be done would be an unlikely agreement between the US and China. So for now let's assume that the Chinese government is doing this alone with only a minority of the mining power. Because mining is a random process, they would thus only be able on average to mine slightly fewer blocks than the "rebel" miners. Because Bitcoin users would know the blockchain was under attack, they would need to wait several blocks (the advice is six) before regarding a transaction as final. The rebels would have to win six times in a row, a low-probability event, for a transaction to go through. The Bitcoin network can normally sustain only a limited transaction rate; waiting many block times for finality would reduce the effective rate to a small fraction of that, so the attack would greatly reduce the supply of transactions and greatly increase their price. The recent cryptocurrency price crash caused average transaction fees to spike sharply. In the event of an attack like this hodl-ers would be desperate to sell their Bitcoin, so bidding for the very limited supply of transactions would be intense.
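The six-confirmation argument is straightforward to quantify. A sketch with the hash-rate split as an explicit parameter (the shares below are illustrative, since the post's exact figures are elided above):

```python
def prob_rebels_confirm(rebel_share: float, confirmations: int = 6) -> float:
    """Probability that rebel miners win `confirmations` blocks in a row,
    which is what a transaction would need to be regarded as final while
    the attacker orphans every rebel block it can."""
    return rebel_share ** confirmations

# Illustrative hash-rate splits (not figures from the post):
for share in (0.54, 1 / 3):
    p = prob_rebels_confirm(share)
    print(f"rebel share {share:.0%}: six-in-a-row probability {p:.2%}")
```

Even a rebel majority of 54% gets a transaction through only about 2.5% of the time, which is why the attack strangles throughput rather than merely slowing it.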
Anyone on the buy-side of these transactions would be making a huge bet that the attack would fail, so the "price" of Bitcoin in fiat currencies would collapse. Thus the economics for the rebel miners would look bleak. Their chances of winning a block reward would be greatly reduced, and the value of any reward they did win would be greatly reduced. There would be little incentive for the rebels to continue spending power to mine doomed blocks, so the cost of the attack for the government would drop rapidly once in place. The cost of the attack is roughly a share of the block rewards: 6.25 BTC/block times 144 blocks/day. Even making the implausible assumption that the price didn't collapse from its current level, the cost is of the order of millions of dollars a day: a drop in the bucket for a major government. Thus it appears that, until the concentration of mining power in China decreases further, the Chinese government could kill Bitcoin using Kelly's attack. For an analysis of an alternate attack on the Bitcoin blockchain see The Economic Limits of Bitcoin and the Blockchain by Eric Budish. Kelly addresses the government attacker:

Normally what keeps the core structure of incentives in balance in the Bitcoin system, and the reason why miners famously can't dictate changes to the protocol, or collude to double-spend their coins at will, is the fact that for-profit miners have a stake in Bitcoin's future, so they have a very strong disincentive towards using their power to attack the network. In other words, for-profit miners are heavily invested in and very much care about the future value of Bitcoin, because their revenue and the value of their mining equipment critically depends on it. If they attack the network and undermine the integrity of Bitcoin and its fundamental value proposition to end users, they're shooting themselves in the foot.

You don't have this problem. In fact this critical variable is flipped on its head: you have a stake in the destruction of Bitcoin's future.
You are trying to get the price of BTC to $0, and the value of all future block rewards along with it. Attacking the network to undermine the integrity of Bitcoin and its value proposition to end users is precisely your goal. This fundamentally breaks the game theory and the balance of power in the system, and the result is disequilibrium.

In short, Bitcoin is based on a Mexican-standoff security model which only works as a piece of economic design if you start from the assumption that every actor is rational and has a stake in the system continuing to function. That is not a safe assumption.

There are two further problems. First, Bitcoin is only one, albeit the most important, of thousands of cryptocurrencies. Second, some of these other cryptocurrencies don't use proof-of-work. Ethereum, the second most important cryptocurrency, after nearly seven years' work, is promising shortly to transition from proof-of-work to proof-of-stake. The resources needed to perform a 51% attack on a proof-of-stake blockchain are not physical, and thus are not subject to seizure in the way Kelly assumes. I have written before on possible attacks in Economic Limits Of Proof Of Stake Blockchains, but only in the context of double-spending attacks. I plan to do a follow-up post discussing sabotage attacks on proof-of-stake blockchains once I've caught up with the literature.

Update: The Economist's Graphic Detail convincingly demonstrates crypto-miners are probably to blame for the graphics-chip shortage. The subhead sums up the graph:

Secondhand graphics-card prices move nearly in lockstep with those of Ethereum

The report compares the effect of the ETH "price" on GPUs and CPUs:

Asking prices for six GPUs tracked by Keepa have moved in lockstep with Ethereum's value. The currency's first big rally coincided with a surge in listed GPU prices. Once the crypto bubble burst, GPU costs fell back to earth. Another boom began last year.
as ethereum’s price rose from $ in march to $ , last month, the value of mining hardware once again followed suit. in six months, the six gpus’ listed prices climbed by %. those of cpus barely budged. the gpu shortage has hurt data scientists and computer-aided-design users as well as gamers. some relief may be on the way. ethereum’s price is now % below its record high. gpu prices have yet to fall, but if history is any guide, they probably will soon. posted by david. at : am labels: bitcoin comments: david. said... dan goodin's shortages loom as ransomware hamstrings the world’s biggest meat producer reveals the latest cryptocurrency externality: "a ransomware attack has struck the world’s biggest meat producer, causing it to halt some operations in the us, canada, and australia while threatening shortages throughout the world, including up to a fifth of the american supply. brazil-based jbs sa said on monday that it was the target of an organized cyberattack that had affected servers supporting north american and australian it operations. a white house spokeswoman later said the meat producer had been hit by a ransomware attack “from a criminal organization likely based in russia” and that the fbi was investigating." june , at : pm david. said... today's ransomware attacks: - live streams go down across cox radio and tv stations in apparent ransomware attack. - fujifilm becomes the latest victim of a network-crippling ransomware attack. and, christopher bing reports u.s. to give ransomware hacks similar priority as terrorism, official says: "the u.s. department of justice is elevating investigations of ransomware attacks to a similar priority as terrorism in the wake of the colonial pipeline hack and mounting damage caused by cyber criminals, a senior department official told reuters. internal guidance sent on thursday to u.s. 
attorney’s offices across the country said information about ransomware investigations in the field should be centrally coordinated with a recently created task force in washington. “it’s a specialized process to ensure we track all ransomware cases regardless of where it may be referred in this country, so you can make the connections between actors and work your way up to disrupt the whole chain,” said john carlin, acting deputy attorney general at the justice department." sure, that'll fix the problem. june , at : pm david. said... i missed one of yesterday's ransomware attacks. lawrence abrams reports that uf health florida hospitals back to pen and paper after cyberattack. that is yet another major hospital chain crippled. june , at : am david. said... heather kelly manages to write an entire article entitled ransomware attacks are closing schools, delaying chemotherapy and derailing everyday life without pointing out that ransomware is enabled by cryptocurrencies. june , at : am david. said... william turton and kartikay mehrotra report that hackers breached colonial pipeline using compromised password: "hackers gained entry into the networks of colonial pipeline co. on april through a virtual private network account, ... the account was no longer in use at the time of the attack but could still be used to access colonial’s network, ... the account’s password has since been discovered inside a batch of leaked passwords on the dark web. that means a colonial employee may have used the same password on another account that was previously hacked, ... the vpn account, which has since been deactivated, didn’t use multifactor authentication" three strikes and you're out; unrevoked obsolete account, reused password, no fa. june , at : pm fazal majid said... i was going to suggest ransomware authors might use precious metals as an alternative, but it turns out $ m in palladium is xpd (troy oz) at g each, or kg, basically the same as the suitcase full of $ notes. 
June , at : PM

David. said...
The Feds understand the importance of disrupting the ransomware payment channel. Dan Goodin reports that US seizes $2.3 million Colonial Pipeline paid to ransomware attackers:

"On Monday, the US Justice Department said it had traced 63.7 of the roughly 75 bitcoins Colonial Pipeline paid to DarkSide, which the Biden administration says is likely located in Russia. ... FBI deputy director Paul M. Abbate said at a press conference, 'For financially motivated cyber criminals, especially those presumably located overseas, cutting off access to revenue is one of the most impactful consequences we can impose.' ... The law enforcement success intensifies speculation that Colonial Pipeline paid the ransom not to gain access to a decryptor it knew was buggy but rather to help the FBI track DarkSide and its mechanism for obtaining and laundering ransoms. The speculation is reinforced by the fact that Colonial Pipeline paid in Bitcoin, despite that option requiring an additional percentage added to the ransom. Bitcoin is pseudo-anonymous, meaning that while names aren't attached to digital wallets, the wallets and the coins they store can still be tracked."

Criming on an immutable public ledger has risks. This is good news for Monero!
June , at : AM

David. said...
Today's ransomware news includes:
- Ransomware hits Capitol Hill contractor by Catalin Cimpanu: "A company that provides a user engagement platform for US politicians has suffered a ransomware attack, leaving many lawmakers unable to email their constituents for days."
- Ransomware struck another pipeline firm — and 70GB of data leaked by Andy Greenberg: "A group identifying itself as Xing Team last month posted to its dark web site a collection of files stolen from LineStar Integrity Services, a Houston-based company that sells auditing, compliance, maintenance, and technology services to pipeline customers. The data, ...
includes thousands of emails, accounting files, contracts, and other business documents, gigabytes of software code and data, and gigabytes of human resources files that include scans of employee driver's licenses and Social Security cards."
June , at : PM

David. said...
There is a fairly reasonable discussion of this post on Hacker News.
June , at : PM

David. said...
The New York Times reports that JBS, the meat processor, paid $11 million in ransom to hackers.
June , at : PM

David. said...
Reuters reports that More Chinese provinces issue bans on cryptomining:

"Authorities in China's northwestern province of Qinghai and a district in neighbouring Xinjiang ordered cryptocurrency mining projects to close this week, as local governments put into practice Beijing's call to crack down on the industry. ... The Qinghai office of China's Ministry of Industry and Information Technology on Wednesday ordered a ban on new cryptomining projects in the province, and told existing ones to shut down, according to a notice seen by Reuters and confirmed by local officials. Cryptominers who set up projects claiming to be running big data and super-computing centres will be punished, and companies are barred from providing sites or power supplies to mining activities. The Development & Reform Commission of Xinjiang's Changji Hui Prefecture also sent out a notice on Wednesday, seen by Reuters and confirmed with officials, ordering a cleanup of the sector."
June , at : AM

David. said...
Wolfie Zhao's Here's what Yunnan is actually doing with Bitcoin mining reports that Yunnan, where mining is hydro-powered, isn't banning mining explicitly, but it is requiring miners to pay the grid price for power, which could make it uneconomic:

"The media report said the Yunnan energy bureau is requiring subordinate departments to inspect and then either shut down or rectify Bitcoin mining farms that are using unauthorized hydroelectricity.
This includes power stations that are directly supplying energy to Bitcoin mining farms without paying a profit cut to the government."
June , at : PM

David. said...
It turns out that I have time to work on a post about attacking proof-of-stake blockchains. Kai Morris reports that Buterin explains why Ethereum 2.0 upgrade won't arrive until late next year:

"To the disappointment of many, however, the shipping of shard chains is not expected until late next year, according to Ethereum's latest roadmap. While a transition from PoW to PoS is expected to take place sooner, the inclusion of shard chains is seen by many as the official completion of the Ethereum 2.0 upgrade. While many have believed the delay was due to the technically burdensome transition, the actual issue is apparently something different."

Buterin is blaming his co-workers for the delay in shipping the project they've worked on now for seven years.
June , at : PM

David. said...
VentureBeat reports that Cybereason: 80% of orgs that paid the ransom were hit again:

"Cybereason's study found that the majority of organizations that chose to pay ransom demands in the past were not immune to subsequent ransomware attacks, often by the same threat actors. In fact, 80% of organizations that paid the ransom were hit by a second attack, and almost half were hit by the same threat group."
June , at : AM

David. said...
Danny Palmer asks Have we reached peak ransomware? Betteridge's Law of Headlines supplies the answer: no!
June , at : AM

David. said...
Hannah Murphy reports that Monero emerges as crypto of choice for cybercriminals:

"For cybercriminals looking to launder illicit gains, Bitcoin has long been the payment method of choice. But another cryptocurrency is coming to the fore, promising to help make dirty money disappear without a trace.
While Bitcoin leaves a visible trail of transactions on its underlying blockchain, the niche 'privacy coin' Monero was designed to obscure the sender and receiver, as well as the amount exchanged. As a result, it has become an increasingly sought-after tool for criminals such as ransomware gangs, posing new problems for law enforcement."
June , at : AM

David. said...
Strong evidence for the #1 business case for cryptocurrencies in Roxanne Henderson and Loni Prinsloo's South African brothers vanish, and so does $3.6 billion in Bitcoin:

"The first signs of trouble came in April, as Bitcoin was rocketing to a record. Africrypt chief operating officer Ameer Cajee, the elder brother, informed clients that the company was the victim of a hack. He asked them not to report the incident to lawyers and authorities, as it would slow down the recovery process of the missing funds. Some skeptical investors roped in the law firm, Hanekom Attorneys, and a separate group started liquidation proceedings against Africrypt. ... The firm's investigation found Africrypt's pooled funds were transferred from its South African accounts and client wallets, and the coins went through tumblers and mixers — or to other large pools of Bitcoin — to make them essentially untraceable."

Exit scams: they're what Bitcoin was made for.
June , at : AM

David. said...
The UK government's glacial approach to regulating cryptocurrency creeps forward, as Reuters reports in UK financial watchdog cracks down on cryptocurrency exchange Binance:

"Britain's financial regulator has ordered Binance, one of the world's largest cryptocurrency exchanges, to stop all regulated activity and issued a warning to consumers about the platform, which is coming under growing scrutiny globally. ... Since January, the FCA has required all firms offering cryptocurrency-related services to register and show they comply with anti-money laundering rules.
However, this month it said that just five firms had registered, and that the majority were not yet compliant."
June , at : PM
dshr's blog: the bitcoin "price"

i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation.

thursday, january , : the bitcoin "price"

jemima kelly writes no, bitcoin is not “the ninth-most-valuable asset in the world”, and it's a must-read. below the fold, some commentary.

the "price" of btc in usd has quadrupled in the last three months, and its resulting "market cap" has sparked claims that it is the th most valuable asset in the world. kelly explains the math:

just like you would calculate a company’s market capitalisation by multiplying its stock price by the number of shares outstanding, with bitcoin you just multiply its price by its total “supply” of coins (ie, the number of coins that have been mined since the first one was in january ). simples!
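Kelly's price-times-supply arithmetic can be sketched in a few lines. The inputs below are illustrative placeholders (the digits in the quoted passage were lost in this copy), not her exact figures:

```python
# A sketch of the naive "market cap" calculation Kelly describes:
# spot price multiplied by every coin ever mined. The inputs are
# illustrative assumptions, not the article's figures.
def naive_market_cap(price_usd: float, total_supply: float) -> float:
    """Company-style 'market cap' applied to a cryptocurrency."""
    return price_usd * total_supply

all_time_high = 40_000.0   # assumed BTC price in USD (illustrative)
btc_supply = 18.6e6        # assumed coins mined to date (illustrative)

cap = naive_market_cap(all_time_high, btc_supply)
print(f"naive market cap: ${cap / 1e9:.0f}bn")
```

As the rest of the post argues, the number this produces is not a price anyone could realize, because almost none of that supply actually trades.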
if you do that sum, you’ll see that you get to a very large number — if you take the all-time-high of $ , and multiply that by the bitcoin supply (roughly . m) you get to just over $ bn. and, if that were accurate and representative and if you could calculate bitcoin’s value in this way, that would place it just below tesla and alibaba in terms of its “market value”. (on wednesday!)

then kelly starts her critique, which is quite different from mine in stablecoins:

in the context of companies, the “market cap” can be thought of as loosely representing what someone would have to pay to buy out all the shareholders in order to own the company outright (though in practice the shares have often been over- or undervalued by the market, so shareholders are often offered a premium or a discount). companies, of course, have real-world assets with economic value. and there are ways to analyse them to work out whether they are over- or undervalued, such as price-to-earnings ratios, net profit margins, etc. with bitcoin, the whole value proposition rests on the idea of the network. if you took away the coinholders there would be literally nothing there, and so bitcoin’s value would fall to nil. trying to value it by talking about a “market cap” therefore makes no sense at all.

secondly, she takes aim at the circulating btc supply:

another problem is that although . m bitcoins have indeed been mined, far fewer can actually be said to be “in circulation” in any meaningful way. for a start, it is estimated that about per cent of bitcoins have been lost in various ways, never to be recovered. then there are the so-called “whales” that hold most of the bitcoin, whose dominance of the market has risen in recent months. the top . per cent of bitcoin addresses now control per cent of the supply (including many that haven’t moved any bitcoin for the past half-decade), and more than per cent of the bitcoin supply hasn’t been moved for the past year, according to recent estimates.
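The supply-side points in the quote above (lost coins, long-dormant whale holdings) amount to a free-float adjustment to the headline supply. A minimal sketch, with every number an illustrative assumption since the quoted percentages were lost in this copy:

```python
# Free-float sketch: subtract coins that are lost or long-dormant
# from the headline supply. All figures are illustrative assumptions,
# not the article's estimates.
total_supply = 18.6e6       # assumed coins mined (illustrative)
lost_fraction = 0.20        # assumed share lost forever (illustrative)
dormant_fraction = 0.60     # assumed share unmoved for a year+ (illustrative)

free_float = total_supply * (1.0 - lost_fraction - dormant_fraction)
print(f"effective float: {free_float / 1e6:.1f}m "
      f"of {total_supply / 1e6:.1f}m coins")
```

Under these assumptions only a fifth of the headline supply could even notionally trade, which is the basis of the liquidity argument that follows.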
the small circulating supply means that btc liquidity is an illusion:

the idea that you can get out of your bitcoin position at any time and the market will stay intact is frankly a nonsense. and that’s why the bitcoin religion’s “hodl” mantra is so important to be upheld, of course. because if people start to sell, bad things might happen! and they sometimes do. the excellent crypto critic trolly mctrollface (not his real name, if you’re curious) pointed out on twitter that on saturday a sale of just bitcoin resulted in a per cent drop in the price.

and there are a lot of "whales" hodl-ing. if one decides to cash out, everyone will get trampled in the rush for the exits:

more than , wallets contain over , bitcoin in them. what would happen to the price if just one of those tried to unload their coins on to the market at once? it wouldn’t be pretty, we would wager. what we call the “bitcoin price” is in fact only the price of the very small number of bitcoins that wash around the retail market, and doesn’t represent the price that . m bitcoins would actually be worth, even if they were all actually available.

note that kelly's critique implicitly assumes that btc is priced in usd, not in the mysteriously inflatable usdt. the graph shows that the vast majority of the "very small number of bitcoins that wash around the retail market" are traded for, and thus priced in, usdt. so the actual number of bitcoins being traded for real money is a small fraction of a very small number.

bitfinex & tether have agreed to comply with the new york supreme court and turn over their financial records to the new york attorney general by th january. if they actually do, and the details of what is actually backing the current stock of nearly billion usdt become known, things could get rather dynamic. as tim swanson explains in parasitic stablecoins, the b usd are notionally in a bank account, and the solvency of that account is not guaranteed by any government deposit insurance.
so even if there were a bank account containing b usd, if there is a rush for the exits the bank holding that account could well go bankrupt. to give a sense of scale, the btc sale that crashed the "price" by % represents ( / . ) / = hours of mining reward. if miners were cashing out their rewards, they would be selling btc or $ m/day. in the long term, the lack of barriers to entry means that the margins on mining are small. but in the short term, mining capacity can't respond quickly to large changes in the "price". it certainly can't increase four times in three months.

let's assume that three months ago, when btc ≈ , usdt, the btc ecosystem was in equilibrium, with the mining rewards plus fees slightly more than the cost of mining. while the btc "price" has quadrupled, the hash rate and thus the cost of mining has oscillated between m and m terahash/s. it hasn't increased significantly, so miners now only need to sell about btc or $ m/day to cover their costs. with the price soaring, they have an incentive to hodl their rewards.

posted by david. at : am labels: bitcoin

comments:

david. said... alex pickard was an early buyer of btc, and became a miner in . but the scales have fallen from his eyes; in bitcoin: magic internet money he explains that btc is useless for anything except speculation: "essentially overnight it became “digital gold” with no use other than for people to buy and hodl ... and hope more people would buy and hodl, and increase the price of btc until everyone on earth sells their fiat currency for btc, and then…? well, what exactly happens then, when btc can only handle about , transactions per day and . billion people need to buy goods and services?" and he is skeptical that tether will survive: "if tether continues as a going concern, and if the rising price of btc is linked to usdt issuance, then btc will likely continue to mechanically build a castle to the sky. i have shown how btc price increases usually follow usdt issuance.
in late , when roughly billion usdt were redeemed, the price of btc subsequently fell by over %. now, imagine what would happen if tether received a cease-and-desist order, and its bank accounts were seized. today’s digital gold would definitely lose its luster." january , at : am david. said... the saga of someone trying to turn "crypto" into "fiat". january , at : pm david. said... an anonymous bitcoin hodl-er finally figured out the tether scam and realized his winnings. his must-read account is the bit short: inside crypto’s doomsday machine: "the legitimate crypto exchanges, like coinbase and bitstamp, clearly know to stay far away from tether: neither supports tether on their platforms. and the feeling is mutual! because if tether ltd. were ever to allow a large, liquid market between tethers and usd to develop, the fraud would instantly become obvious to everyone as the market-clearing price of tether crashed far below $ . kraken is the biggest usd-banked crypto exchange on which tether and us dollars trade freely against each other. the market in that trading pair on kraken is fairly modest — about $ m worth of daily volume — and tether ltd. surely needs to keep a very close eye on its movements. in fact, whenever someone sells tether for usd on kraken, tether ltd. has no choice but to buy it — to do otherwise would risk letting the peg slip, and unmask the whole charade. my guess is that maintaining the tether peg on kraken represents the single biggest ongoing capital expense of this entire fraud. if the crooks can’t scrape together enough usd to prop up the tether peg on kraken, then it’s game over, and the whole shambles collapses. and that makes it the fraud’s weak point." january , at : am david. said... tether's bank is deltec, in the bahamas. the anonymous bitcoin hodl-er points out that: "bahamas discloses how much foreign currency its domestic banks hold each month." as of the end of september , all bahamian banks in total held about $ . 
b usd worth of foreign currency. at that time there were about . b usdt in circulation. even if we assume that deltec held all of it, usdt was only % backed by actual money. january , at : am david. said... david gerard's tether printer go brrrrr — cryptocurrency’s substitute dollar problem collects a lot of nuggets about tether, but also this: "usdc loudly touts claims that it’s well-regulated, and implies that it’s audited. but usdc is not audited — accountants grant thornton sign a monthly attestation that centre have told them particular things, and that the paperwork shows the right numbers. an audit would show for sure whether usdc’s reserve was real money, deposited by known actors — and not just a barrel of nails with a thin layer of gold and silver on top supplied by dubious entities. but, y’know, it’s probably fine and you shouldn’t worry." february , at : pm david. said... in addresses are responsible for % of all cryptocurrency money laundering, catalin cimpanu discusses a report from chainalysis: " , addresses received % of all criminally-linked cryptocurrency funds in , a sum estimated at around $ . billion. ... the company believes that the cryptocurrency-related money laundering field is now in a vulnerable position where a few well-orchestrated law enforcement actions against a few cryptocurrency operators could cripple the movement of illicit funds of many criminal groups at the same time. furthermore, additional analysis also revealed that many of the services that play a crucial role in money laundering operations are also second-tier services hosted at larger legitimate operators. in this case, a law enforcement action wouldn't even be necessary, as convincing a larger company to enforce its anti-money-laundering policies would lead to the shutdown of many of today's cryptocurrency money laundering hotspots." february , at : pm david. said... 
in bitcoin is now worth $ , — and it's ruining the planet faster than ever, eric holthaus points out the inevitable result of the recent spike in btc: "the most recent data, current as of february from the university of cambridge shows that bitcoin is drawing about . gigawatts of electricity, an annualized consumption of terawatt-hours – about a half-percent of the entire world’s total – or about as much as the entire country of pakistan. since most electricity used to mine bitcoin comes from fossil fuels, bitcoin produces a whopping million tons of carbon dioxide annually, about the same amount as switzerland does by simply existing." february , at : pm david. said... in elon musk wants clean power, but tesla's dealing in environmentally dirty bitcoin, reuters notes that: "tesla boss elon musk is a poster child of low-carbon technology. yet the electric carmaker's backing of bitcoin this week could turbocharge global use of a currency that's estimated to cause more pollution than a small country every year. tesla revealed on monday it had bought $ . billion of bitcoin and would soon accept it as payment for cars, sending the price of the cryptocurrency through the roof. ... the digital currency is created via high-powered computers, an energy-intensive process that currently often relies on fossil fuels, particularly coal, the dirtiest of them all." but reuters fails to ask where the $ . b that spiked btc's "price" came from. it wasn't musk's money, it was the tesla shareholders' money. and how did they get it? by selling carbon offsets. so musk is taking subsidies intended to reduce carbon emissions and using them to generate carbon emissions. february , at : pm david. said... one flaw in eric holthaus' bitcoin is now worth $ , — and it's ruining the planet faster than ever is that while he writes: "there are decent alternatives to bitcoin for people still convinced by the potential social benefits of cryptocurrencies.
ethereum, the world’s number two cryptocurrency, is currently in the process of converting its algorithm from one that’s fundamentally competitive (proof-of-work, like bitcoin uses) to one that’s collaborative (proof-of-stake), a move that will conserve more than % of its electricity use." he fails to point out that (a) ethereum has been trying to move to proof-of-stake for many years without success, and (b) there are a huge number of other proof-of-work cryptocurrencies that, in aggregate, also generate vast carbon emissions. february , at : pm david. said... four posts worth reading inspired by elon musk's pump-and-hodl of bitcoin. first, jamie powell's tesla and bitcoin: the accounting explains how $ . b of btc will further obscure the underlying business model of tesla. of course, if investors actually understood tesla's business model they might not be willing to support a pe of, currently, , . , so the obscurity may be the reason for the hodl. second, izabella kaminska's what does institutional bitcoin mean? looks at the investment strategies hedge funds like blackrock will use as they "dabble in bitcoin". it involves the btc futures market being in contango and is too complex to extract but well worth reading. third, david gerard's number go up with tether — musk and bitcoin set the world on fire points out that musk's $ . b only covers hours of usdt printing: "tether has given up caring about plausible appearances, and is now printing a billion tethers at a time. as i write this, tether states its reserve as $ , , , . of book value. that’s $ . billion — every single dollar of which is backed by … pinky-swears, maybe? tether still won’t reveal what they’re claiming to constitute backing reserves." fourth, in bitcoin's 'elon musk pump' rally to $ k was exclusively driven by whales, joseph young writes: "in recent months, so-called “mega whales” sold large amounts of bitcoin between $ , and $ , .
orders ranging from $ million to $ million rose significantly across major cryptocurrency exchanges, including binance. but as the price of bitcoin began to consolidate above $ , after the correction from $ , , the buyer demand from whales surged once again. analysts at “material scientist” said that whales have been showing unusually large volume, around $ million in hours. this metric shows that whales are consistently accumulating bitcoin in the aftermath of the news that tesla bought $ . billion worth of btc." february , at : pm david. said... ethereum consumes about . twh/yr - much less than bitcoin's twh/yr, but still significant. it will continue to waste power until the switch to proof-of-stake, underway for the past years, finally concludes. don't hold your breath. february , at : am david. said... the title of jemima kelly's hey citi, your bitcoin report is embarrassingly bad says all that needs to be said, but her whole post is a fun read. march , at : am david. said... jemima kelly takes citi's embarrassing "bitcoin report" to the woodshed again in the many chart crimes of *that* citi bitcoin report: "not only was this “report” actually just a massive bitcoin-shilling exercise, it also contained some really quite embarrassing errors from what is meant to be one of the top banks in the world (and their “premier thought leadership” division at that). the error that was probably most shocking was the apparent failure of the six citi analysts who authored the report to grasp the difference between basis points and percentage points." march , at : am david. said... adam tooze's talking (and reading) about bitcoin is an economist's view of bitcoin: "to paraphrase gramsci, crypto is the morbid symptom of an interregnum, an interregnum in which the gold standard is dead but a fully political money that dares to speak its name has not yet been born. crypto is the libertarian spawn of neoliberalism’s ultimately doomed effort to depoliticize money."
tooze quotes izabella kaminska contrasting the backing of "fiat" by the requirement to pay tax with bitcoin: "private “hackers” routinely raise revenue from stealing private information and then demanding cryptocurrency in return. the process is known as a ransom attack. it might not be legal. it might even be classified as extortion or theft. but to the mindset of those who oppose “big government” or claim that “tax is theft”, it doesn’t appear all that different. a more important consideration is which of these entities — the hacker or a government — is more effective at enforcing their form of “tax collection” upon the system. the government, naturally, has force, imprisonment and the law on its side. and yet, in recent decades, that hasn’t been quite enough to guarantee effective tax collection from many types of individuals or corporations. hackers, at a minimum, seem at least comparably effective at extracting funds from rich individuals or multinational organisations. in many cases, they also appear less willing to negotiate or to cut deals." march , at : am david. said... ibm blockchain is a shell of its former self after revenue misses, job cuts: sources by ian allison is the semi-official death-knell for ibm's hyperledger: "ibm has cut its blockchain team down to almost nothing, according to four people familiar with the situation. job losses at ibm (nyse: ibm) escalated as the company failed to meet its revenue targets for the once-fêted technology by % this year, according to one of the sources." david gerard comments: "hyperledger was a perfect ibm project — a potemkin village open source project, where all the work was done in an ibm office somewhere." march , at : pm david. said... 
ketan joshi's bitcoin is a mouth hungry for fossil fuels is a righteous rant about cryptocurrencies' energy usage: "i think the story of bitcoin isn’t a sideshow to climate; it’s actually a very significant and central force that will play a major role in dragging down the accelerating pace of positive change. this is because it has an energy consumption problem, it has a fossil fuel industry problem, and it has a deep cultural / ideological problem. all three, in symbiotic concert, position bitcoin to stamp out the hard-fought wins of the past two decades, in climate. years of blood, sweat and tears – in activism, in technological development, in policy and regulation – extinguished by a bunch of bros with laser-eye profile pictures." march , at : am david. said... the externalities of cryptocurrencies, and bitcoin in particular, don't just include ruining the climate, but also ruining the lives of vulnerable elderly who have nothing to do with "crypto". mark rober's fascinating video glitterbomb trap catches phone scammer (who gets arrested) reveals that indian phone scammers transfer their ill-gotten gains from stealing the life savings of elderly victims from the us to india using bitcoin. march , at : pm david. said... the subhead of noah smith's bitcoin miners are on a path to self-destruction is: "producing the cryptocurrency is a massive drain on global power and computer chip supplies. another way is needed before countries balk." march , at : am david. said... in before bitfinex and tether, bennett tomlin pulls together the "interesting" backgrounds of the "trustworthy" people behind bitfinex & tether. march , at : pm david. said... david gerard reports that: "coinbase has had to pay a $ . million fine to the cftc for allowing an unnamed employee to wash-trade litecoin on the platform. on some days, the employee’s wash-trading was % of the litecoin/bitcoin trading pair’s volume. 
coinbase also operated two trading bots, “hedger and replicator,” which often matched each other's orders, and reported these matches to the market." as he says: "if coinbase — one of the more regulated exchanges — did this, just think what the unregulated exchanges get up to." especially with the "trustworthy" characters running the unregulated exchanges. march , at : pm david. said... martin c. w. walker and winnie mosioma's regulated cryptocurrency exchanges: sign of a maturing market or oxymoron? examines the (mostly lack of) regulation of exchanges and concludes: "in general, cryptocurrencies lack anyone that is genuinely accountable for core processes such as transfers of ownership, trade validation and creation of cryptocurrencies. a concern that can ultimately only be dealt with by acceptance of the situation or outright bans. however, the almost complete lack of regulation of the highly centralised cryptocurrency exchanges should be an easier-to-fill gap. regulated entities relying on prices from “exchanges” for accounting or calculation of the value of futures contracts are clearly putting themselves at significant risk." coinbase just filed for a $ b direct listing despite having just been fined $ . m for wash-trading litecoin. april , at : pm david. said... izabella kaminska outlines the risks underlying coinbase's ipo in why coinbase’s stellar earnings are not what they seem. the sub-head is: "it’s easy to be profitable if your real unique selling point is being a beneficiary of regulatory arbitrage." and she concludes: "coinbase may be a hugely profitable business, but it may also be a uniquely risky one relative to regulated trading venues such as the cme or ice, neither of which are allowed to take principal positions to facilitate liquidity on their platforms. instead, they rely on third party liquidity providers.
coinbase, however, is not only known to match client transactions on an internalised “offchain” basis (that is, not via the primary blockchain) but also to square-off residual unmatched positions via bilateral relationships in crypto over-the-counter markets, where it happens to have established itself as a prominent market maker. it’s an ironic state of affairs because the netting processes that are at the heart of this system expose coinbase to the very same risks that real-time gross settlement systems (such as bitcoin) were meant to vanquish." april , at : pm david. said... nathan j. robinson hits the nail on the head with why cryptocurrency is a giant fraud: "you may have ignored bitcoin because the evangelists for it are some of the most insufferable people on the planet—and you may also have kicked yourself because if you had listened to the first guy you met who told you about bitcoin way back, you’d be a millionaire today. but now it’s time to understand: is this, as its proponents say, the future of money?" and: "but as is generally the case when someone is trying to sell you something, the whole thing should seem extremely fishy. in fact, much of the cryptocurrency pitch is worse than fishy. it’s downright fraudulent, promising people benefits that they will not get and trying to trick them into believing in and spreading something that will not do them any good. when you examine the actual arguments made for using cryptocurrencies as currency, rather than just being wowed by the complex underlying system and words like “autonomy,” “global,” and “seamless,” the case for their use by most people collapses utterly. many believe in it because they have swallowed libertarian dogmas that do not reflect how the world actually works." robinson carefully dismantles the idea that cryptocurrencies offer "security", "privacy", "convenience", and many of the other arguments for them. the whole article is well worth reading.
rob beschizza reports on the effects of elon musk's cryptocurrency dump: "after elon musk turned on bitcoin, so goes the market. bitcoin lost about % of its value in a few hours before recovering to rest about % down, reports cnn business. julia horowitz writes that it's bad news for crypto in general, with similar falls for ethereum, dogecoin and the rest" may , at : am

blog rules: posts and comments are copyright of their respective authors who, by posting or commenting, license their work under a creative commons attribution-share alike . united states license. off-topic or unsuitable comments will be deleted.
– facilitated by carmen mitchell- code lib jan , relevance ranking in the scholarly domain - tamar sadeh, phd jan , kill the search button ii - the handheld devices are coming - jørn thøgersen, michael poltorak nielsen jan , stack view: a library browsing tool - annie cain jan , search engine relevancy tuning - a static rank framework for solr/lucene - mike schultz jan , practical agile: what's working for stanford, blacklight, and hydra - naomi dushay jan , nosql bibliographic records: implementing a native frbr datastore with redis - jeremy nelson jan , lies, damned lies, and lines of code per day - james stuart jan , indexing big data with tika, solr & map-reduce - scott fisher, erik hetzner jan , in-browser data storage and me - jason casden jan , how people search the library from a single search box - cory lown jan , discovering digital library user behavior with google analytics - kirk hess jan , building research applications with mendeley - william gunn jan , your ui can make or break the application (to the user, anyway) - robin schaaf jan , your catalog in linked data - tom johnson jan , the golden road (to unlimited devotion): building a socially constructed archive of grateful dead artifacts - robin chandler jan , quick and dirty clean usability: rapid prototyping with bootstrap - shaun ellis jan , “linked-data-ready” software for libraries - jennifer bowen jan , html microdata and schema.org - jason ronallo jan , hathitrust large scale search: scalability meets usability - tom burton-west jan , design for developers - lisa kurt jan , beyond code: versioning data with git and mercurial - charlie collett, martin haye jan , all teh metadatas! 
or how we use rdf to keep all of the digital object metadata formats thrown at us - declan fleming dec , discussion for elsevier app challenge during code lib dec , so you want to start a kindle lending program dec , code lib call for host proposals nov , code lib scholarship (deadline: december , ) oct , code lib sponsor listing oct , code lib schedule jul , code lib feb , code lib sponsorship jan , vufind beyond marc: discovering everything else - demian katz jan , one week | one tool: ultra-rapid open source development among strangers - scott hanrath jan , letting in the light: using solr as an external search component - jay luker and benoit thiell jan , kuali ole: architecture for diverse and linked data - tim mcgeary and brad skiles jan , keynote address - diane hillmann jan , hey, dilbert. where's my data?! - thomas barker jan , enhancing the mobile experience: mobile library services at illinois - josh bishoff - josh bishoff jan , drupal as rapid application development tool - cary gordon jan , code lib in seattle jan , lightning talks jan , breakout sessions jan , (yet another) home-grown digital library system, built upon open source xml technologies and metadata standards - david lacy jan , why (code ) libraries exist - eric hellman jan , visualizing library data - karen coombs jan , sharing between data repositories - kevin s. 
clarke jan , practical relevancy testing - naomi dushay jan , opinionated metadata (om): bringing a bit of sanity to the world of xml metadata - matt zumwalt jan , mendeley's api and university libraries: three examples to create value - ian mulvany jan , let's get small: a microservices approach to library websites - sean hannan jan , gis on the cheap - mike graves jan , fiwalk with me: building emergent pre-ingest workflows for digital archival records using open source forensic software - mark m jan , enhancing the performance and extensibility of the xc’s metadataservicestoolkit - ben anderson jan , chicago underground library’s community-based cataloging system - margaret heller and nell taylor jan , building an open source staff-facing tablet app for library assessment - jason casden and joyce chapman jan , beyond sacrilege: a couchapp catalog - gabriel farrell jan , ask anything! – facilitated by dan chudnov jan , a community-based approach to developing a digital exhibit at notre dame using the hydra framework - rick johnson and dan brubak dec , code lib schedule dec , code lib call for host proposals nov , scholarships to attend the code lib conference (deadline dec. , ) sep , code lib sponsorship jun , issue of the code lib journal mar , location of code lib mar , code lib : get ready for the best code lib conference yet! mar , issue of the code lib journal mar , vote on code lib hosting proposals feb , you either surf or you fight: integrating library services with google wave - sean hannan - code lib feb , vampires vs. werewolves: ending the war between developers and sysadmins with puppet - bess sadler - code lib feb , the linked library data cloud: stop talking and start doing - ross singer - code lib feb , taking control of library metadata and websites using the extensible catalog - jennifer bowen - code lib feb , public datasets in the cloud - rosalyn metz and michael b. 
klein - code lib feb , mobile web app design: getting started - michael doran - code lib feb , metadata editing – a truly extensible solution - david kennedy and david chandek-stark - code lib feb , media, blacklight, and viewers like you (pdf, . mb) - chris beer - code lib feb , matching dirty data – yet another wheel - anjanette young and jeff sherwood - code lib feb , library/mobile: developing a mobile catalog - kim griggs - code lib feb , keynote # : catfish, cthulhu, code, clouds and levenshtein distance - paul jones - code lib feb , keynote # : cathy marshall - code lib feb , iterative development done simply - emily lynema - code lib feb , i am not your mother: write your test code - naomi dushay, willy mene, and jessie keck - code lib feb , how to implement a virtual bookshelf with solr - naomi dushay and jessie keck - code lib feb , hive: a new tool for working with vocabularies - ryan scherle and jose aguera - code lib feb , enhancing discoverability with virtual shelf browse - andreas orphanides, cory lown, and emily lynema - code lib feb , drupal : a more powerful platform for building library applications - cary gordon - code lib feb , do it yourself cloud computing with apache and r - harrison dekker - code lib feb , cloud lib - jeremy frumkin and terry reese - code lib feb , becoming truly innovative: migrating from millennium to koha - ian walls - code lib feb , ask anything! 
– facilitated by dan chudnov - code lib feb , a better advanced search - naomi dushay and jessie keck - code lib feb , ways to enhance library interfaces with oclc web services - karen coombs - code lib feb , code lib lightning talks feb , code lib breakout sessions feb , code lib participant release form feb , code lib hosting proposals solicited jan , code lib scholarship recipients jan , code lib north dec , scholarships to attend the code lib conference dec , code lib registration dec , conference info dec , code lib schedule dec , code lib sponsorship nov , code lib conference prepared talks voting now open! oct , code lib call for prepared talk proposals sep , vote for code lib keynotes! jul , code lib jun , code lib journal: new issue now available may , visualizing media archives: a case study may , the open platform strategy: what it means for library developers may , if you love something...set it free may , what we talk about when we talk about frbr may , the rising sun: making the most of solr power may , great facets, like your relevance, but can i have links to amazon and google book search? may , freecite - an open source free-text citation parser may , freebasing for fun and enhancement may , extending biblios, the open source web based metadata editor may , complete faceting may , a new platform for open data - introducing ‡biblios.net web services may , sebastian hammer, keynote address may , blacklight as a unified discovery platform may , a new frontier - the open library environment (ole) may , the dashboard initiative may , restafarian-ism at the nla may , open up your repository with a sword! may , lusql: (quickly and easily) getting your data from your dbms into lucene may , like a can opener for your data silo: simple access through atompub and jangle may , libx . 
may , how i failed to present on using dvcs for managing archival metadata may , djatoka for djummies may , a bookless future for libraries: a comedy in acts may , why libraries should embrace linked data mar , code lib journal: new issue now available feb , see you next year in asheville feb , code lib lightning talks feb , code lib venue voting feb , oclc grid services boot camp ( preconference) feb , code lib hosting proposals jan , code lib logo jan , code lib logo debuts jan , code lib breakout sessions jan , call for code lib hosting proposals jan , code lib scholarship recipients jan , code lib t-shirt design contest dec , code lib registration open! dec , code lib journal issue published dec , code lib gender diversity and minority scholarships dec , calling all code libers attending midwinter dec , logo design process launched dec , code lib schedule dec , pre-conferences nov , voting on presentations for code lib open until december nov , drupal lib unconference ( / / darien, ct) oct , call for proposals, code lib conference oct , ne.code lib.org sep , code lib keynote voting sep , logo? you decide sep , solrpy google code project sep , code lib sep , code lib sponsorship aug , code libnyc aug , update from linkedin jul , linkedin group growing fast jul , code lib group on linkedin apr , elpub open scholarship: authority, community and sustainability in the age of web . mar , code libcon lightning talks mar , brown university to host code lib feb , desktop presenter software feb , presentations from libraryfind pre-conference feb , vote for code lib host! feb , karen coyle keynote - r&d: can resource description become rigorous data? feb , code libcon breakout sessions feb , call for code lib hosting proposals jan , code lib conference t-shirt design jan , code lib registration now open! 
dec , zotero and you, or bibliography on the semantic web dec , xforms for metadata creation dec , working with the worldcat api dec , using a css framework dec , the wayback machine dec , the making of the code lib journal dec , the code lib future dec , show your stuff, using omeka dec , second life web interoperability - moodle and merlot.org dec , rdf and rda: declaring and modeling library metadata dec , ÖpënÜrl dec , oss web-based cataloging tool dec , marcthing dec , losing sleep over rest? dec , from idea to open source dec , finding relationships in marc data dec , dlf ils discovery interface task force api recommendation dec , delivering library services in the web . environment: osu libraries publishing system for and by librarians dec , couchdb is sacrilege... mmm, delicious sacrilege dec , building the open library dec , building mountains out of molehills dec , a metadata registry dec , code lib gender diversity and minority scholarships dec , conference schedule nov , code lib keynote survey oct , code lib call for proposals oct , code lib schedule jul , code lib conference jul , random #code lib quotes jun , request for proposals: innovative uses of crossref metadata may , library camp nyc, august , apr , code lib - video, audio and podcast available mar , code lib - day video available mar , erik hatcher keynote mar , my adventures in getting data into the archiviststoolkit mar , karen schneider keynote "hurry up please it's time" mar , code lib conference feedback available mar , code lib video trickling in mar , code lib.org restored feb , code lib will be in portland, or feb , code lib blog anthology feb , the intellectual property disclosure process: releasing open source software in academia feb , polling for interest in a european code lib feb , call for proposals to host code lib feb , code lib scholarship recipients feb , delicious! 
flare + simile exhibit jan , open access self-archiving mandate jan , evergreen keynote jan , code lib t-shirt contest jan , stone soup jan , #code lib logging jan , two scholarships to attend the code lib conference dec , conference schedule now available dec , code lib pre-conference workshop: lucene, solr, and your data dec , traversing the last mile dec , the xquery exposé: practical experiences from a digital library dec , the bibapp dec , smart subjects - application independent subject recommendations dec , open-source endeca in lines or less dec , on the herding of cats dec , obstacles to agility dec , myresearch portal: an xml based catalog-independent opac dec , libraryfind dec , library-in-a-box dec , library data apis abound! dec , get groovy at your public library dec , fun with zeroconfmetaopensearch dec , free the data: creating a web services interface to the online catalog dec , forget the lipstick. this pig just needs social skills. dec , atom publishing protocol primer nov , barton data nov , mit catalog data oct , code lib downtime oct , call for proposals aug , code lib audio aug , book club jul , code libcon site proposals jul , improving code libcon * jun , code lib conference hosting jun , learning to scratch our own itches jun , code lib conference jun , code lib conference schedule jun , code lib conference lightning talks jun , code lib conference breakouts mar , results of the journal name vote mar , #dspace mar , #code lib logging mar , regulars on the #code lib irc channel mar , code lib journal name vote mar , code lib journal: mission, format, guidelines mar , #code lib irc channel faq feb , cufts aim/aol/icq bot feb , code lib journal: draft purpose, format, and guidelines feb , code lib breakout sessions feb , unapi revision feb , code lib presentations will be available feb , planet update feb , weather in corvallis for code lib feb , holiday inn express feb , conference wiki jan , portland hostel jan , lightning talks jan , code 
Code4lib: we are developers and technologists for libraries, museums, and archives who are dedicated to being a diverse and inclusive community, seeking to share ideas and build collaboration.

Washboard Blues (from Wikipedia, the free encyclopedia)

Single: "Washboard Blues", recorded by Paul Whiteman and His Concert Orchestra
A-side: "Among My Souvenirs"
Recorded: November (Chicago)
Genre: blues
Label: Victor
Songwriter(s): Hoagy Carmichael, Fred B. Callahan, Irving Mills

"Washboard Blues" is a popular song written by Hoagy Carmichael, Fred B. Callahan and Irving Mills. On November , it was recorded in Chicago by Paul Whiteman and His Concert Orchestra, featuring piano and lead vocals by Carmichael, and was released as Victor -B (the B-side of "Among My Souvenirs").[1] The song is an evocative washerwoman's lament. Though the verse, chorus, and bridge pattern is present, the effect of the song is of one long, cohesive melodic line with a dramatic shifting of tempo. The cohesiveness of the long melody perfectly matches the lyrical description of the crushing fatigue resulting from the repetitious work of washing clothes under primitive conditions.[2]

Credits: a copy of the lyrics from the Indiana University archives of the Hoagy Carmichael Collection credits F. B. Callahan with the words to "Washboard Blues".[3]

References:
[1] Greenwald, Matthew. "Washboard Blues". AllMusic.
[2] Wilder, Alec. American Popular Song: The Great Innovators. New York & Oxford: Oxford University Press.
[3] "Hoagy Carmichael Collection". webapp .dlib.indiana.edu.

External links: "Washboard Blues", Paul Whiteman ( )—Internet Archive. "Paul Whiteman and His Concert Orchestra - Among My Souvenirs / Washboard Blues (Shellac)". Discogs.com.
DSHR's Blog: Why Decentralize?

I'm David Rosenthal, and this is a place to discuss the work I'm doing in digital preservation.

Thursday, December

Why Decentralize?

In "Blockchain: Hype or Hope?" (paywalled until June ' ) Radia Perlman asks what exactly you get in return for the decentralization provided by the enormous resource cost of blockchain technologies. Her answer is:

a ledger agreed upon by consensus of thousands of anonymous entities, none of which can be held responsible or be shut down by some malevolent government ... [but] most applications would not require or even want this property.
Two important essays published last February by pioneers in the field provide different answers to Perlman's question:

Vitalik Buterin's answer in "The Meaning of Decentralization" is that what you get depends on what exactly you mean by "decentralization".

Nick Szabo's answer in "Money, Blockchains, and Social Scalability" is "social scalability".

Below the fold I try to apply our experience with the decentralized LOCKSS technology to ask whether their arguments hold up. I'm working on a follow-up post based on Chelsea Barabas, Neha Narula and Ethan Zuckerman's "Defending Internet Freedom through Decentralization" from last August, which asks the question specifically about the decentralized Web and thus the idea of decentralized storage.

Buterin

"The Meaning of Decentralization" is the shorter and more accessible of the two essays. Vitalik Buterin is a co-founder of Ethereum, as one can tell from the links in his essay, which starts by discussing what decentralization means:

When people talk about software decentralization, there are actually three separate axes of centralization/decentralization that they may be talking about. While in some cases it is difficult to see how you can have one without the other, in general they are quite independent of each other. The axes are as follows:

- Architectural (de)centralization — how many physical computers is a system made up of? How many of those computers can it tolerate breaking down at any single time?
- Political (de)centralization — how many individuals or organizations ultimately control the computers that the system is made up of?
- Logical (de)centralization — does the interface and data structures that the system presents and maintains look more like a single monolithic object, or an amorphous swarm? One simple heuristic is: if you cut the system in half, including both providers and users, will both halves continue to fully operate as independent units?
He notes that:

blockchains are politically decentralized (no one controls them) and architecturally decentralized (no infrastructural central point of failure) but they are logically centralized (there is one commonly agreed state and the system behaves like a single computer)

The Global LOCKSS Network (GLN) is decentralized on all three axes. Individual libraries control their own network node; nodes cooperate but do not trust each other; no network operation involves more than a small proportion of the nodes. The CLOCKSS network, built from the same technology, is decentralized on the architectural and logical axes, but is centralized on the political axis, since all the nodes are owned by the CLOCKSS Archive.

Buterin then asks:

Why is decentralization useful in the first place? There are generally several arguments raised:

- Fault tolerance — decentralized systems are less likely to fail accidentally because they rely on many separate components that are not likely.
- Attack resistance — decentralized systems are more expensive to attack and destroy or manipulate because they lack sensitive central points that can be attacked at much lower cost than the economic size of the surrounding system.
- Collusion resistance — it is much harder for participants in decentralized systems to collude to act in ways that benefit them at the expense of other participants, whereas the leaderships of corporations and governments collude in ways that benefit themselves but harm less well-coordinated citizens, customers, employees and the general public all the time.

As regards fault tolerance, I think what Buterin meant by "that are not likely" is "that are not likely to suffer common-mode failures", because he goes on to ask:

Do blockchains as they are today manage to protect against common mode failure? Not necessarily. Consider the following scenarios:

- All nodes in a blockchain run the same client software, and this client software turns out to have a bug.
- All nodes in a blockchain run the same client software, and the development team of this software turns out to be socially corrupted.
- The research team that is proposing protocol upgrades turns out to be socially corrupted.
- In a proof of work blockchain, % of miners are in the same country, and the government of this country decides to seize all mining farms for national security purposes.
- The majority of mining hardware is built by the same company, and this company gets bribed or coerced into implementing a backdoor that allows this hardware to be shut down at will.
- In a proof of stake blockchain, % of the coins at stake are held at one exchange.

His recommendations for improving fault tolerance are fairly obvious:

- It is crucially important to have multiple competing implementations.
- The knowledge of the technical considerations behind protocol upgrades must be democratized, so that more people can feel comfortable participating in research discussions and criticizing protocol changes that are clearly bad.
- Core developers and researchers should be employed by multiple companies or organizations (or, alternatively, many of them can be volunteers).
- Mining algorithms should be designed in a way that minimizes the risk of centralization.
- Ideally we use proof of stake to move away from hardware centralization risk entirely (though we should also be cautious of new risks that pop up due to proof of stake).

They may be "fairly obvious" but some of them are very hard to achieve in the real world. For example, what matters isn't that there are multiple competing implementations, but rather what fraction of the network's resources use the most common implementation. Pointing, as Buterin does, to a list of implementations of the Ethereum protocol (some of which are apparently abandoned) is interesting, but if the majority of mining power runs one of them the network is vulnerable.
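The point about monocultures can be made concrete with a toy reliability model (my sketch, not anything from Buterin's essay; the function and the numbers are purely illustrative):

```python
def p_data_loss(n: int, f: float, c: float) -> float:
    """Chance that a system of n replicas fails entirely, where each
    replica fails independently with probability f, and with probability
    c a single shared-implementation bug (a common-mode failure) takes
    out every replica at once."""
    independent = f ** n           # all n replicas fail on their own
    return c + (1 - c) * independent

# Ten independent replicas make accidental loss vanishingly rare...
print(p_data_loss(10, 0.01, 0.0))     # ~1e-20
# ...but even a 1% chance of a shared bug dominates everything:
print(p_data_loss(10, 0.01, 0.01))    # ~0.01
```

However many replicas you add, the common-mode term c sets the floor on failure probability, which is why the fraction of the network running one implementation matters more than the sheer number of implementations.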
Since one of them is likely to be more efficient than the others, a monoculture is likely to arise. Similarly, the employment or volunteer status of the "core developers and researchers" isn't very important if they are vulnerable to the kind of group-think that we see in the Bitcoin community. While it is true that the Ethereum mining algorithm is designed to enable smaller miners to function, it doesn't address the centralizing force I described in "Economies of Scale in Peer-to-Peer Networks". If smaller mining nodes are more cost-effective, economies of scale will drive the network to be dominated by large collections of smaller mining nodes under unified control. If they aren't more cost-effective, the network will be dominated by collections of larger mining nodes under unified control. Either way, you get political centralization and lose attack resistance.

The result is, as Buterin points out, that the vaunted attack resistance of proof-of-work blockchains like Bitcoin's is less than credible:

In the case of blockchain protocols, the mathematical and economic reasoning behind the safety of the consensus often relies crucially on the uncoordinated choice model, or the assumption that the game consists of many small actors that make decisions independently. If any one actor gets more than / of the mining power in a proof of work system, they can gain outsized profits by selfish-mining. However, can we really say that the uncoordinated choice model is realistic when % of the Bitcoin network's mining power is well-coordinated enough to show up together at the same conference?

But it turns out that coordination is a double-edged sword:

Many communities, including Ethereum's, are often praised for having a strong community spirit and being able to coordinate quickly on implementing, releasing and activating a hard fork to fix denial-of-service issues in the protocol within six days.
But how can we foster and improve this good kind of coordination, but at the same time prevent "bad coordination" that consists of miners trying to screw everyone else over by repeatedly coordinating % attacks?

And this makes resisting collusion hard:

Collusion is difficult to define; perhaps the only truly valid way to put it is to simply say that collusion is "coordination that we don't like". There are many situations in real life where even though having perfect coordination between everyone would be ideal, one sub-group being able to coordinate while the others cannot is dangerous.

As Tonto said to the Lone Ranger, "What do you mean we, white man?" Our SOSP paper showed how, given a large surplus of replicas, the LOCKSS polling protocol made it hard for even a very large collusion among the peers to modify the consensus of the non-colluding peers without detection. The large surplus of replicas allowed each peer to involve a random sample of other peers in each operation. Absent an attacker, the result of each operation would be landslide agreement or landslide disagreement. The random sample of peers made it hard for an attacker to ensure that all operations resulted in landslides. Alas, this technique has proven difficult to apply in other contexts, which in any case (except for cryptocurrencies) find it difficult to provide a sufficient surplus of replicas.

Szabo

Nick Szabo was a pioneer of digital currency, sometimes even suspected of being Satoshi Nakamoto. His essay "Money, Blockchains, and Social Scalability" starts by agreeing with Perlman that blockchains are extremely wasteful of resources:

Blockchains are all the rage. The oldest and biggest blockchain of them all is Bitcoin, ... running non-stop for eight years, with almost no financial loss on the chain itself, it is now in important ways the most reliable and secure financial network in the world.
the secret to bitcoin’s success is certainly not its computational efficiency or its scalability in the consumption of resources. ... bitcoin’s puzzle-solving hardware probably consumes in total over megawatts of electricity. ... rather than reduce its protocol messages to be as few as possible, each bitcoin-running computer sprays the internet with a redundantly large number of “inventory vector” packets to make very sure that all messages get accurately through to as many other bitcoin computers as possible. as a result, the bitcoin blockchain cannot process as many transactions per second as a traditional payment network such as paypal or visa. szabo then provides a different answer than perlman's to the question "what does bitcoin get in return for this profligate expenditure of resources?" the secret to bitcoin’s success is that its prolific resource consumption and poor computational scalability is buying something even more valuable: social scalability. ... social scalability is about the ways and extents to which participants can think about and respond to institutions and fellow participants as the variety and numbers of participants in those institutions or relationships grow. it's about human limitations, not about technological limitations or physical resource constraints. he measures social scalability thus: one way to estimate the social scalability of an institutional technology is by the number of people who can beneficially participate in the institution. ... blockchains, and in particular public blockchains that implement cryptocurrencies, increase social scalability, even at a dreadful reduction in computational efficiency and scalability. people participate in cryptocurrencies in three ways, by mining, transacting, and hodl-ing. in practice most miners simply passively contribute resources to a few large mining pools in return for a consistent cash flow. chinese day-traders generate the vast majority of bitcoin transactions. 
% of bitcoin are hodl-ed by a small number of early adopters. none of these are really great social scalability. bitcoin is a scheme to transfer money from many later to a few earlier adopters: bitcoin was substantially mined early on - early adopters have most of the coins. the design was such that early users would get vastly better rewards than later users for the same effort. cashing in these early coins involves pumping up the price, then selling to later adopters, particularly in the bubbles. thus bitcoin was not a ponzi or pyramid scheme, but a pump-and-dump. anyone who bought in after the earliest days is functionally the sucker in the relationship. szabo goes on to discuss the desirability of trust minimization and the impossibility of eliminating the need for trust: trust minimization is reducing the vulnerability of participants to each other’s and to outsiders’ and intermediaries’ potential for harmful behavior. ... in most cases an often trusted and sufficiently trustworthy institution (such as a market) depends on its participants trusting, usually implicitly, another sufficiently trustworthy institution (such as contract law). ... an innovation can only partially take away some kinds of vulnerability, i.e. reduce the need for or risk of trust in other people. there is no such thing as a fully trustless institution or technology. ... the historically recent breakthroughs of computer science can reduce vulnerabilities, often dramatically so, but they are far from eliminating all kinds of vulnerabilities to the harmful behavior of any potential attacker. szabo plausibly argues that the difference between conventional internet services and blockchains is that between matchmaking and trust-minimization: matchmaking is facilitating the mutual discovery of mutually beneficial participants. matchmaking is probably the kind of social scalability at which the internet has most excelled. ... 
whereas the main social scalability benefit of the internet has been matchmaking, the predominant direct social scalability benefit of blockchains is trust minimization. ... trust in the secret and arbitrarily mutable activities of a private computation can be replaced by verifiable confidence in the behavior of a generally immutable public computation. this essay will focus on such vulnerability reduction and its benefit in facilitating a standard performance beneficial to a wide variety of potential counterparties, namely trust-minimized money. szabo then describes his vision of "trust-minimized money" and its advantages thus: a new centralized financial entity, a trusted third party without a “human blockchain” of the kind employed by traditional finance, is at high risk of becoming the next mt. gox; it is not going to become a trustworthy financial intermediary without that bureaucracy. computers and networks are cheap. scaling computational resources requires cheap additional resources. scaling human traditional institutions in a reliable and secure manner requires increasing amounts of accountants, lawyers, regulators, and police, along with the increase in bureaucracy, risk, and stress that such institutions entail. lawyers are costly. regulation is to the moon. computer science secures money far better than accountants, police, and lawyers. given the routine heists from exchanges it is clear that the current bitcoin ecosystem is much less secure than traditional financial institutions. and imagine if the huge resources devoted to running the bitcoin blockchain were instead devoted to additional security for the financial institutions! szabo is correct that: in computer science there are fundamental security versus performance tradeoffs. bitcoin's automated integrity comes at high costs in its performance and resource usage.
nobody has discovered any way to greatly increase the computational scalability of the bitcoin blockchain, for example its transaction throughput, and demonstrated that this improvement does not compromise bitcoin’s security. the lockss technology's automated security also comes from using a lot of computational resources, although by doing so it avoids expensive and time-consuming copyright negotiations. but then szabo argues that because of the resource cost and the limited transaction throughput, the best that can be delivered is a reduced level of security for most transactions: instead, a less trust-minimized peripheral payment network (possibly lightning ) will be needed to bear a larger number of lower-value bitcoin-denominated transactions than bitcoin blockchain is capable of, using the bitcoin blockchain to periodically settle with one high-value transaction batches of peripheral network transactions. despite the need for peripheral payment networks, szabo argues: anybody with a decent internet connection and a smart phone who can pay $ . -$ transaction fees – substantially lower than current remittance fees -- can access bitcoin anywhere on the globe.

[chart: bitcoin transaction fees]

that was then. current transaction fees are in the region of $ , with a median transaction size of about $ k, so the social scalability of bitcoin transactions no longer extends to "anybody with a decent internet connection and a smart phone". as i wrote: to oversimplify, the argument for bitcoin and its analogs is the argument for gold, that because the supply is limited the price will go up. the history of the block size increase shows that the supply of bitcoin transactions is limited to something around per second. so by the same argument that leads to hodl-ing, the cost of getting out when you decide you can't hodl any more will always go up. and, in particular, it will go up the most when you need it the most, when the bubble bursts.
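to make the arithmetic behind that argument concrete, here is a back-of-the-envelope sketch. all the numbers are made up for illustration (an assumed throughput and an assumed number of sellers); the point is the shape of the argument: a fixed transaction capacity caps how fast holders can exit, so a rush for the exits bids fees up instead of clearing faster.

```python
# hypothetical figures only: the point is that exit time scales linearly
# with the number of sellers once on-chain capacity is saturated.
capacity_tps = 5              # assumed on-chain throughput, transactions/second
exiting_holders = 10_000_000  # assumed number of holders trying to sell at once

seconds_to_clear = exiting_holders / capacity_tps
days_to_clear = seconds_to_clear / 86_400   # 86,400 seconds per day
print(f"minimum time to clear the exit queue: {days_to_clear:.1f} days")
```

while the queue drains, a first-price fee auction means the marginal fee is set by the most desperate sellers, which is the "cost of getting out goes up when you need it most" effect.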
szabo's outdated optimism continues: when it comes to small-b bitcoin, the currency, there is nothing impossible about paying retail with bitcoin the way you’d pay with a fiat currency. ... gold can have value anywhere in the world and is immune from hyperinflation because its value doesn’t depend on a central authority. bitcoin excels at both these factors and runs online, enabling somebody in albania to use bitcoin to pay somebody in zimbabwe with minimal trust in or payment of quasi-monopoly profits to intermediaries, and with minimum vulnerability to third parties.

[chart: mining pool shares]

they'd better be paying many tens of thousands of dollars to make the transaction fees to the quasi-monopoly mining pools (top pools = . % of the mining power) worth the candle. bitcoin just lost % of its "value" in a day, which would count as hyperinflation if it hadn't recently gained % in a day. in practice, they need to trust exchanges. and, as david gerard recounts in chapter , retailers who have tried accepting bitcoin have found the volatility, the uncertainty of transactions succeeding and the delays impossible to live with. szabo's discussion of blockchains has worn better than his discussion of cryptocurrencies. it starts with a useful definition: it is a blockchain if it has blocks and it has chains. the “chains” should be merkle trees or other cryptographic structures with ... post-unforgeable integrity. also the transactions and any other data whose integrity is protected by a blockchain should be replicated in a way objectively tolerant to worst-case malicious problems and actors to as high a degree as possible (typically the system can behave as previously specified up to a fraction of / to / of the servers maliciously trying to subvert it to behave differently). and defines the benefit blockchains provide thus: to say that data is post-unforgeable or immutable means that it can’t be undetectably altered after being committed to the blockchain.
contrary to some hype this doesn’t guarantee anything about a datum’s provenance, or its truth or falsity, before it was committed to the blockchain. but this doesn't eliminate the need for (trusted) governance because % or less attacks are possible: and because of the (hopefully very rare) need to update software in a manner that renders prior blocks invalid – an even riskier situation called a hard fork -- blockchains also need a human governance layer that is vulnerable to fork politics. the possibility of % attacks means that it is important to identify who is behind the powerful miners. szabo's earlier "bit gold" was based on his "secure property titles": also like today’s private blockchains, secure property titles assumed and required securely distinguishable and countable nodes. given the objective % hashrate attack limit to some important security goals of public blockchains like bitcoin and ethereum, we actually do care about the distinguishable identity of the most powerful miners to answer the question “can somebody convince and coordinate the %?” or the % of the top three pools. identification of the nodes is the basic difference between public and private blockchains: so i think some of the “private blockchains” qualify as bona fide blockchains; others should go under the broader rubric of “distributed ledger” or “shared database” or similar. they are all very different from and not nearly as socially scalable as public and permissionless blockchains like bitcoin and ethereum. all of the following are very similar in requiring a securely identified (distinguishable and countable) group of servers rather than the arbitrary anonymous membership of miners in public blockchains.
in other words, they require some other, usually far less socially scalable, solution to the sybil (sockpuppet) attack problem:

- private blockchains
- the “federated” model of sidechains (alas, nobody has figured out how to do sidechains with any lesser degree of required trust, despite previous hopes or claims). sidechains can also be private chains, and it’s a nice fit because their architectures and external dependencies (e.g. on a pki) are similar.
- multisig-based schemes, even when done with blockchain-based smart contracts
- threshold-based “oracle” architectures for moving off-blockchain data onto blockchains

like blockchains, the lockss technology can be used in public (as in the global lockss network) or private (as in the clockss archive) networks. the clockss network identifies its nodes using a public key infrastructure (pki): the dominant, but usually not very socially scalable, way to identify a group of servers is with a pki based on trusted certificate authorities (cas). to avoid the problem that merely trusted third parties are security holes, reliable cas themselves must be expensive, labor-intensive bureaucracies that often do extensive background checks themselves or rely on others (e.g. dun and bradstreet for businesses) to do so. public certificate authorities have proven not trustworthy, but private cas are within the trust border of the sponsoring organization. szabo is right that: we need more socially scalable ways to securely count nodes, or to put it another way, to, with as much robustness against corruption as possible, assess contributions to securing the integrity of a blockchain. but in practice, the ideal picture of blockchains hasn't worked out for bitcoin: that is what proof-of-work and broadcast-replication are about: greatly sacrificing computational scalability in order to improve social scalability. that is satoshi’s brilliant tradeoff.
it is brilliant because humans are far more expensive than computers and that gap widens further each year. and it is brilliant because it allows one to seamlessly and securely work across human trust boundaries (e.g. national borders), in contrast to “call-the-cop” architectures like paypal and visa that continually depend on expensive, error-prone, and sometimes corruptible bureaucracies to function with a reasonable amount of integrity.

[chart: total daily transaction fees]

with the overhead cost of transactions currently running at well over $ m/day it's not clear that "humans are far more expensive than computers". with almost daily reports of thefts over $ m, bitcoin lacks "a reasonable amount of integrity" at the level most people interact with it. it is possible that other public blockchain applications might not suffer these problems. but mining blocks needs to be costly for the chain to deter sybil attacks, and these costs need to be covered. so, as i argued in economies of scale in peer-to-peer networks, there has to be an exchange rate between the chain's "coins" and the fiat currencies that equipment and power vendors accept. economies of scale will apply, and drive centralization of the network. if the "coins" become, as bitcoins did, channels for flight capital and speculation, the network will also become a target for crime. private blockchains escape these problems, but they lack social scalability and have single points of failure; their advantages over more conventional and efficient systems are unclear.

posted by david. at : am
labels: bitcoin, distributed web, security, techno-hype

comments:

anonymous said...
«szabo's discussion of blockchains has worn better than his discussion of cryptocurrencies. it starts with a useful definition:» i dunno why techies pay so much attention to "blockchain" coins, the issues within etc. have been thoroughly discussed for decades. the only big deal is that a lot of "greater fools" have rushed into pump-and-dump schemes.
as to "blockchains" techies are routinely familiar with the linux kernel 'git' crypto blockchain ledger, which was designed precisely to ensure that source code deposits and withdrawals into contributors' accounts were cryptographically secured in a peer-to-peer way to ensure malicious servers could not subvert the kernel source.

january , at : am

david. said...
one-stop counterfeit certificate shops for all your malware-signing needs by dan goodin is an example of why treating certificate authorities as "trusted third parties" is problematic: "a report published by threat intelligence provider recorded future ... identified four such sellers of counterfeit certificates since . two of them remain in business today. the sellers offered a variety of options. in , one provider calling himself c@t advertised certificates that used a microsoft technology known as authenticode for signing executable files and programming scripts that can install software. c@t offered code-signing certificates for macos apps as well. his fee: upwards of $ , per certificate." note that these certificates are not counterfeit, they are real certificates "registered under legitimate corporations and issued by comodo, thawte, and symantec—the largest and most respected issuers". they are the result of corporate identity theft and failures of the verification processes of the issuers.

february , at : am

david. said...
"over , users will have their ssl certificates revoked by tomorrow morning, march , in an incident between two companies —trustico and digicert— that is likely to have a huge impact on the ca (certificate authority) industry as a whole in the coming months." is the start of a catalin cimpanu post. it is a complicated story of certificate authorities behaving badly (who could have imagined?). cimpanu has a useful timeline. the gist is that trustico used to resell certificates from digicert but was switching to resell certificates from comodo.
during this spat with digicert, it became obvious that: a) trustico's on-line certificate generation process captured and stored the user's private keys, which is a complete no-no. dan goodin writes: "private keys for tls certificates should never be archived by resellers, and, even in the rare cases where such storage is permissible, they should be tightly safeguarded. a ceo being able to attach the keys for , certificates to an email raises troubling concerns that those types of best practices weren't followed. (there's no indication the email was encrypted, either, although neither trustico nor digicert provided that detail when responding to questions.)" b) trustico's approach to website security was inadequate. they had to take their website down: "shortly after a website security expert disclosed a critical vulnerability on twitter that appeared to make it possible for outsiders to run malicious code on trustico servers. the vulnerability, in a trustico.com website feature that allowed customers to confirm certificates were properly installed on their sites, appeared to run as root. by inserting commands into the validation form, attackers could call code of their choice and get it to run on trustico servers with unfettered "root" privileges, the tweet indicated."

march , at : am
andromeda yelton

i haven’t failed, i’ve tried an ml approach that *might* work!

when last we met i was turning a perfectly innocent neural net into a terribly ineffective one, in an attempt to get it to be better at face recognition in archival photos. i was also (what cultural heritage technology experience would be complete without this?) being foiled by metadata. so, uh, i stopped using metadata. 🤦‍♀️ with twinges of guilt. and full knowledge that i was tossing out a practically difficult but conceptually straightforward supervised learning problem for…what? well. i realized that the work that initially inspired me to try my hand at face recognition in archival photos was not, in fact, a recognition problem but a similarity problem: could the charles teenie harris collection find multiple instances of the same person? this doesn’t require me to identify people, per se; it just requires me to know if they are the same or different. and you know what?
i can do a pretty good job of getting different people by randomly selecting two photos from my data set — they’re not guaranteed to be different, but i’ll settle for pretty good. and i can do an actually awesome job of guaranteeing that i have two photos of the same person with the ✨magic✨ of data augmentation. keras (which, by the way, is about a trillionty times better than hand-coding stuff in octave, for all i appreciate that coursera made me understand the fundamentals by doing that) — keras has an ImageDataGenerator class which makes it straightforward to alter images in a variety of ways, like horizontal flips, rotations, or brightness changes — all of which are completely plausible ways that archival photos of the same person might differ inter alia! so i can get two photos of the same person by taking one photo, and messing with it. and at this point i have a siamese network with triplet loss, another concept that coursera set me up with (via the deeplearning.ai sequence). and now we are getting somewhere! well. we’re getting somewhere once you realize that, when you make a siamese network architecture, you no longer have layers with the names of your base network; you have one giant layer which is just named vggface or whatever, instead of having all of its constituent layers, and so when you try to set layer.trainable = True whenever the layer name is in a list of names of vggface layers…uh…well…it turns out you just never encounter any layers by that name, and therefore don’t set layers to be trainable, and it turns out that if you train a neural net which doesn’t have any trainable parameters it doesn’t learn much, who knew. but. anyway. once you, after embarrassingly long, get past that, and set layers in the base network to be trainable before you build the siamese network from it… this turns out to work much better! i now have a network which does, in fact, have decreased loss and increased accuracy as it trains.
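for readers who haven’t met it, the triplet loss mentioned above can be sketched in a few lines of numpy. this is just the concept (squared-euclidean distances, an arbitrary margin value), not the keras implementation being trained here:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """hinge on the gap between the anchor-positive and anchor-negative
    squared distances: zero once the positive is closer than the
    negative by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return float(max(0.0, d_pos - d_neg + margin))

# toy 2-d "embeddings": a photo, an augmented copy, and a different person
a = np.array([0.0, 0.0])   # anchor
p = np.array([0.1, 0.0])   # positive: same person (augmented photo)
n = np.array([1.0, 1.0])   # negative: different person
print(triplet_loss(a, p, n))  # well-separated triplet: loss is 0.0
print(triplet_loss(a, n, p))  # confusable triplet: positive loss
```

during training the anchor and positive are the original and augmented photo of the same person, the negative is a randomly selected other photo, and minimizing this loss pushes same-person embeddings together and different-person embeddings apart.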
i’m in a space where i can actually play with hyperparameters to figure out how to do this best. yay! …ok, so, does it get me anywhere in practice? well, to test that i think i’m actually going to need a corpus of labeled photos so that i can tell if given, say, one of w.e.b. du bois, it thinks the most similar photos in the collection are also those of w.e.b. du bois, which is to say… alas, metadata.

andromeda uncategorized leave a comment may ,

i haven’t failed, i’ve just tried a lot of ml approaches that don’t work

“let’s blog every friday,” i thought. “it’ll be great. people can see what i’m doing with ml, and it will be a useful practice for me!” and then i went through weeks on end of feeling like i had nothing to report because i was trying approach after approach to this one problem that simply didn’t work, hence not blogging. and finally realized: oh, the process is the thing to talk about… hi. i’m andromeda! i am trying to make a neural net better at recognizing people in archival photos. after running a series of experiments — enough for me to have written , words of notes — i now have a neural net that is ten times worse at its task. 🎉 and now i have , words of notes to turn into a blog post (a situation which gets harder every week). so let me catch you up on the outline of the problem:

- download a whole bunch of archival photos and their metadata (thanks, dpla!)
- use a face detection ml library to locate faces, crop them out, and save them in a standardized way
- benchmark an off-the-shelf face recognition system to see how good it is at identifying these faces
- retrain it
- benchmark my new system
- step : profit, right?

well. let me also catch you up on some problems along the way:

alas, metadata

archival photos are great because they have metadata, and metadata is like labels, and labels mean you can do supervised learning, right? well…. is he “du bois, w. e. b. (william edward burghardt), - ” or “du bois, w. e. b. (william edward burghardt) - ” or “du bois, w.
e. b. (william edward burghardt)” or “w.e.b. du bois”? i mean, these are all options. people have used a lot of different metadata practices at different institutions and in different times. but i’m going to confuse the poor computer if i imply to it that all these photos of the same person are photos of different people. (i have gone through several attempts to resolve this computationally without needing to do everything by hand, with only modest success.)

what about “photographs”? that appears in the list of subject labels for lots of things in my data set. “photographs” is a person, right? i ended up pulling in an entire other ml component here — spacy, to do some natural language processing to at least guess which lines are probably names, so i can clear the rest of them out of my way. but spacy only has ~ % accuracy on personal names anyway and, guess what, because everything is terrible, in predictable ways, it has no idea “kweisi mfume” is a person.

is a person who appears in the photo guaranteed to be a person who appears in the metadata? nope. is a person who appears in the metadata guaranteed to be a person who appears in the photo? also nope! often they’re a photographer or other creator. sometimes they are the subject of the depicted event, but not themselves in the photo. (spacy will happily tell you that there’s personal name content in something like “martin luther king day”, but mlk is unlikely to appear in a photo of an mlk day event.)

oh dear, linear algebra

ok but let’s imagine for the sake of argument that we live in a perfect world where the metadata is exactly what we need — no more, no less — and its formatting is perfectly consistent. 🦄 here you are, in this perfect world, confronted with a photo that contains two people and has two names. how do you like them apples? i spent more time than i care to admit trying to figure this out.
can i bootstrap from photos that have one person and one name — identify those, subtract them out of photos of two people, go from there? (not reliably — there’s a lot of data i never reach that way — and it’s horribly inefficient.) can i do something extremely clever with matrix multiplication? like…once i generate vector space embeddings of all the photos, can i do some sort of like dot-product thing across all of my photos, or big batches of them, and correlate the closest-match photos with overlaps in metadata? not only is this a process which begs the question — i’d have to do that with the ml system i have not yet optimized for archival photo recognition, thus possibly just baking bad data in — but have i mentioned i have taken exactly one linear algebra class, which i didn’t really grasp, in ? what if i train yet another ml system to do some kind of k-means clustering on the embeddings? this is both a promising approach and some really first-rate yak-shaving, combining all the question-begging concerns of the previous paragraph with all the crystalline clarity of black box ml. possibly at this point it would have been faster to tag them all by hand, but that would be admitting defeat. also i don’t have a research assistant, which, let’s be honest, is the person who would usually be doing this actual work. i do have a -year-old and i am strongly considering paying her to do it for me, but to facilitate that i’d have to actually build a web interface and probably learn more about aws, and the prospect of reading aws documentation has a bracing way of reminding me of all of the more delightful and engaging elements of my todo list, like calling some people on the actual telephone to sort out however they’ve screwed up some health insurance billing.

nowhere to go but up

despite all of that, i did actually get all the way through the steps above. i have a truly, spectacularly terrible neural net. go me!
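for what it’s worth, the “dot-product thing across all of my photos” idea is respectable: with row-normalized embeddings, one matrix product is exactly all pairwise cosine similarities. a minimal numpy sketch (the toy vectors are made up; a real run would use the face embeddings):

```python
import numpy as np

def most_similar(embeddings):
    """for each row of an (n_photos, dim) embedding array, return the
    index of its closest *other* row by cosine similarity."""
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = unit @ unit.T              # one matrix product = all pairs
    np.fill_diagonal(sims, -np.inf)   # a photo is not its own best match
    return sims.argmax(axis=1)

# made-up 2-d embeddings: photos 0 and 1 point the same way, photo 2 doesn't
toy = np.array([[1.0, 0.0], [2.0, 0.1], [0.0, 1.0]])
print(most_similar(toy))  # photos 0 and 1 pick each other
```

the catches from the text still apply: similarities computed with an unretrained embedding bake its biases in, and the full n×n product gets expensive for a big collection, which is where batching (or clustering) would come in.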
but at a thousand-plus words, perhaps i should leave that story for next week….

andromeda uncategorized comment april ,

this time: speaking about machine learning

no tech blogging this week because most of my time was taken up with telling people about ml instead! one talk for an internal harvard audience, “alice in dataland”, where i explained some of the basics of neural nets and walked people through the stories i found through visualizing hamlet data. one talk for the niso plus conference, “discoverability in an ai world”, about ways libraries and other cultural heritage institutions are using ai both to enhance traditional discovery interfaces and provide new ones. this was recorded today but will be played at the conference on the rd, so there’s still time to register if you want to see it! niso plus will also include a session on ai, metadata, and bias featuring dominique luster, who gave one of my favorite code lib talks, and one on ai and copyright featuring one of my go-to jd/mlses, nancy sims. and i’m prepping for an upcoming talk that has not yet been formally announced. which is to say, i guess, i have a lot of talks about ai and cultural heritage in my back pocket, if you were looking for someone to speak about that 😉

andromeda uncategorized leave a comment february ,

archival face recognition for fun and nonprofit

in , dominique luster gave a super good code lib talk about applying ai to metadata for the charles “teenie” harris collection at the carnegie museum of art — more than , photographs of black life in pittsburgh. they experimented with solutions to various metadata problems, but the one that’s stuck in my head since is the face recognition one. it sure would be cool if you could throw ai at your digitized archival photos to find all the instances of the same person, right? or automatically label them, given that any of them are labeled correctly?
sadly, because we cannot have nice things, the data sets used for pretrained face recognition embeddings are things like lots of modern photos of celebrities, a corpus which wildly underrepresents 1) archival photos and 2) black people. so the results of the face recognition process are not all that great. i have some extremely technical ideas for how to improve this — ideas which, weirdly, some computer science phds i’ve spoken with haven’t seen in the field. so i would like to experiment with them. but i must first ~~invent the universe~~ set up a data processing pipeline. three steps here: fetch archival photographs; do face detection (draw bounding boxes around faces and crop them out for use in the next step); do face recognition. for step 1, i’m using dpla, which has a super straightforward and well-documented api and an easy-to-use python wrapper (which, despite not having been updated in a while, works just fine with python . , the latest version compatible with some of my dependencies). for step 2, i’m using mtcnn, because i’ve been following this tutorial. for step 3, face recognition, i’m using the steps in the same tutorial, but purely for proof-of-concept — the results are garbage because archival photos from mid-century don’t actually look anything like modern-day celebrities. (neural net: “i have % confidence this is stevie wonder!” how nice for you.) clearly i’m going to need to build my own corpus of people, which i have a plan for (i.e. i spent some quality time thinking about numpy) but haven’t yet implemented. so far the gotchas have been: gotcha 1: if you fetch a page from the api and assume you can treat its contents as an image, you will be sad.
you have to treat them as a raw data stream and interpret that as an image, thusly:

import io
import requests
from PIL import Image

response = requests.get(url, stream=True)
response.raw.decode_content = True
data = response.content
image = Image.open(io.BytesIO(data))

this code is, of course, hilariously lacking in error handling, despite fetching content from a cesspool of untrustworthiness, aka the internet. it’s a first draft. gotcha 2: you see code snippets to convert images to pixel arrays (suitable for ai ingestion) that look kinda like this: np.array(image).astype('uint8'). except they say astype('float32') instead of astype('uint8'). i got a creepy photonegative effect when i used floats. gotcha 3: although pil was happy to manipulate the .pngs fetched from the api, it was not happy to write them to disk; i needed to convert formats first (image.convert('rgb')). gotcha 4: the suggested keras_vggface library doesn’t have a pipfile or requirements.txt, so i had to manually install keras and tensorflow. luckily the setup.py documented the correct versions. sadly the tensorflow version is only compatible with python up to . (hence the comment about dpyla compatibility above). i don’t love this, but it got me up and running, and it seems like an easy enough part of the pipeline to rip out and replace if it’s bugging me too much.
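for concreteness, here's roughly what step 2's detect-and-crop looks like once you have detections in hand. the 'box' dict format matches what mtcnn's detect_faces returns, but the function itself is my sketch, not the tutorial's code, and the 224×224 target size is an assumption about the downstream embedding model:

```python
import numpy as np
from PIL import Image

def crop_faces(pixels, detections, required_size=(224, 224)):
    """Crop detected faces out of a photo for the recognition step.

    pixels: H x W x 3 uint8 array; detections: list of dicts with a
    'box' of (x, y, width, height), the format mtcnn returns.
    """
    faces = []
    for detection in detections:
        x, y, width, height = detection['box']
        x, y = max(x, 0), max(y, 0)  # mtcnn can emit slightly negative coords
        face = pixels[y:y + height, x:x + width]
        # resize to the fixed input size the embedding model expects
        resized = Image.fromarray(face).resize(required_size)
        faces.append(np.asarray(resized))
    return faces

# a fake photo and one fake detection, standing in for real data
photo = np.zeros((100, 100, 3), dtype='uint8')
faces = crop_faces(photo, [{'box': (10, 20, 30, 40)}])
```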
the plan from here, not entirely in order, subject to change as i don’t entirely know what i’m doing until after i’ve done it:
- build my own corpus of identified people; this means the numpy thoughts, above, and it also means spending more quality time with the api to see if i can automatically apply names from photo metadata rather than having to spend too much of my own time manually labeling the corpus
- decide how much metadata i need to pull down in my data pipeline and how to store it
- figure out some kind of benchmark and measure it
- try out my idea for improving recognition accuracy
- benchmark again
- hopefully celebrate awesomeness

andromeda uncategorized leave a comment february , sequence models of language: slightly irksome not much ai blogging this week because i have been buried in adulting all week, which hasn’t left much time for machine learning. sadface. however, i’m in the last week of the last deeplearning.ai course! (well. of the deeplearning.ai sequence that existed when i started, anyway. they’ve since added an nlp course and a gans course, so i’ll have to think about whether i want to take those too, but at the moment i’m leaning toward a break from the formal structure in order to give myself more time for project-based learning.) this one is on sequence models (i.e. “the data comes in as a stream, like music or language”) and machine translation (“what if we also want our output to be a stream, because we are going from a sentence to a sentence, and not from a sentence to a single output as in, say, sentiment analysis”). and i have to say, as a former language teacher, i’m slightly irked. because the way the models work is — ok, consume your input sentence one token at a time, with some sort of memory that allows you to keep track of prior tokens in processing current ones (so far, so okay).
and then for your output — spit out a few most-likely candidate tokens for the first output term, and then consider your options for the second term and pick your most-likely two-token pairs, and then consider all the ways your third term could combine with those pairs and pick your most likely three-token sequences, et cetera, continue until done. and that is…not how language works? look at cicero, presuming upon your patience as he cascades through clause after clause which hang together in parallel but are not resolved until finally, at the end, a verb. the sentence’s full range of meanings doesn’t collapse until that verb at the end, which means you cannot be certain if you move one token at a time; you need to reconsider the end in light of the beginning. but, at the same time, that ending token is not equally presaged by all former tokens. it is a verb, it has a subject, and when we reached that subject, likely near the beginning of the sentence, helpfully (in latin) identified by the nominative case, we already knew something about the verb — a fact we retained all the way until the end. and on our way there, perhaps we tied off clause after clause, chunking them into neat little packages, but none of them nearly so relevant to the verb — perhaps in fact none of them really tied to the verb at all, because they’re illuminating some noun we met along the way. pronouns, pointing at nouns. adjectives, pointing at nouns. nouns, suspended with verbs like a mobile, hanging above and below, subject and object. adverbs, keeping company only with verbs and each other. there’s so much data in the sentence about which word informs which that the beam model casually discards. wasteful. and forcing the model to reinvent all these things we already knew — to allocate some of its neural space to re-engineering things we could have told it from the beginning. 
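to make the gripe concrete, the decoding strategy described above is a beam search; here it is in miniature. the fixed per-step probability tables are a stand-in for what a real model would compute conditioned on the sequence so far:

```python
import math

def beam_search(step_probs, beam_width=2):
    """Toy beam search over fixed per-step token probabilities.

    step_probs: list of dicts mapping token -> P(token). A real decoder
    would condition each step's distribution on the tokens chosen so far;
    fixed tables are enough to show the mechanics.
    """
    beams = [([], 0.0)]  # (tokens so far, log-probability)
    for probs in step_probs:
        candidates = []
        for tokens, score in beams:
            for token, p in probs.items():
                candidates.append((tokens + [token], score + math.log(p)))
        # keep only the beam_width highest-scoring partial sequences;
        # everything else is discarded and can never be revisited
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][0]

steps = [{'the': 0.6, 'a': 0.4}, {'cat': 0.7, 'dog': 0.3}]
best = beam_search(steps)
```

note that once a prefix falls off the beam, no later evidence (cicero's sentence-final verb, say) can ever bring it back, which is exactly the complaint.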
clearly i need to get my hands on more modern language models (a bizarre sentence since this class is all of years old, but the field moves that fast). andromeda uncategorized comment january , adapting coursera’s neural style transfer code to localhost last time, when making cats from the void, i promised that i’d discuss how i adapted the neural style transfer code from coursera’s convolutional neural networks course to run on localhost. here you go! step 1: first, of course, download (as python) the script. you’ll also need the nst_utils.py file, which you can access via file > open. step 2: while the coursera file is in .py format, it’s ipython in its heart of hearts. so i opened a new file and started copying over the bits i actually needed, reading them as i went to be sure i understood how they all fit together. along the way i also organized them into functions, to clarify where each responsibility happened and give it a name. the goal here was ultimately to get something i could run at the command line via python dpla_cats.py, so that i could find out where it blew up in step 3. step 3: time to install dependencies. i promptly made a pipenv and, in running the code and finding what importerrors showed up, discovered what i needed to have installed: scipy, pillow, imageio, tensorflow. whatever available versions of the former three worked, but for tensorflow i pinned to the version used in coursera — . . — because there are major breaking api changes with the current ( .x) versions. this turned out to be a bummer, because tensorflow promptly threw warnings that it could be much faster on my system if i compiled it with various flags my computer supports. ok, so i looked up the docs for doing that, which said i needed bazel/bazelisk — but of course i needed a paleolithic version of that for tensorflow . .
compat, so it was irritating to install — and then running that failed because it needed a version of java old enough that i didn’t have it, and at that point i gave up because i have better things to do than installing quasi-eoled java versions. updating the code to be compatible with the latest tensorflow version and compiling an optimized version of that would clearly be the right answer, but also it would have been work and i wanted messed-up cat pictures now. (as for the rest of my dependencies, i ended up with scipy== . . , pillow== . . , and imageio== . . , and then whatever sub-dependencies pipenv installed. just in case the latest versions don’t work by the time you read this. 🙂 at this point i had achieved goal 1, aka “getting anything to run at all”. step 4: i realized that, honestly, almost everything in nst_utils wanted to be an imageutility, which was initialized with metadata about the content and style files (height, width, channels, paths), and carried the globals (shudder) originally in nst_utils as class data. this meant that my new dpla_cats script only had to import imageutility rather than * (from x import * is, of course, deeply unnerving), and that utility could pingpong around knowing how to do the things it knew how to do, whenever i needed to interact with image-y functions (like creating a generated image or saving outputs) rather than neural-net-ish stuff. everything in nst_utils that properly belonged in an imageutility got moved, step by step, into that class; i think one or two functions remained, and they got moved into the main script. step 5: ughhh, scope. the notebook plays fast and loose with scope; the raw python script is, rightly, not so forgiving. but that meant i had to think about what got defined at what level, what got passed around in an argument, what order things happened in, et cetera. i’m not happy with the result — there’s a lot of stuff that will fail with minor edits — but it works.
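as a sketch of what i mean by that refactor: metadata that used to live in nst_utils globals becomes instance data. the attribute and property names here are my reconstruction, not the actual dpla_cats code:

```python
from dataclasses import dataclass

@dataclass
class ImageUtility:
    """Carries the image metadata that nst_utils kept in globals."""
    content_path: str
    style_path: str
    height: int
    width: int
    channels: int = 3

    @property
    def shape(self):
        # the (1, H, W, C) shape the network expects for a single image
        return (1, self.height, self.width, self.channels)

util = ImageUtility('content.jpg', 'style.jpg', height=300, width=400)
```

image-y functions (creating a generated image, saving outputs) then hang off this class as methods, so the main script imports one name instead of doing from nst_utils import *.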
scope errors will announce themselves pretty loudly with exceptions; it’s just nice to know you’re going to run into them. step 5a: you have to initialize the adam optimizer before you run sess.run(tf.global_variables_initializer()). (thanks, stackoverflow!) the error message if you don’t is maddeningly unhelpful. (FailedPreconditionError, i mean, what.) step 6: argparse! i spent some quality time reading this neural style implementation early on and thought, gosh, that’s argparse-heavy. then i found myself wanting to kick off a whole bunch of different script runs to do their thing overnight investigating multiple hypotheses and discovered how very much i wanted there to be command-line arguments, so i could configure all the different things i wanted to try right there and leave it alone. aw yeah. i’ve ended up with the following:

parser.add_argument('--content', required=True)
parser.add_argument('--style', required=True)
parser.add_argument('--iterations', default= )       # was 
parser.add_argument('--learning_rate', default= . )  # was . 
parser.add_argument('--layer_weights', nargs=5, default=[0.2, 0.2, 0.2, 0.2, 0.2])
parser.add_argument('--run_until_steady', default=False)
parser.add_argument('--noisy_start', default=True)

content is the path to the content image; style is the path to the style image; iterations and learning_rate are the usual; layer_weights is the value of style_layers in the original code, i.e. how much to weight each layer; run_until_steady is a bad api because it means to ignore the value of the iterations parameter and instead run until there is no longer significant change in cost; and noisy_start is whether to use the content image plus static as the first input or just the plain content image. i can definitely see adding more command line flags if i were going to be spending a lot of time with this code. (for instance, a layer_names parameter that adjusted what style_layers considered could be fun!
or making “significant change in cost” be a user-supplied rather than hardcoded parameter!) step 6a: correspondingly, i configured the output filenames to record some of the metadata used to create the image (content, style, layer_weights), to make it easier to keep track of which images came from which script runs. stuff i haven’t done but it might be great:
- updating tensorflow, per above, and recompiling it. the slowness is acceptable — i can run quite a few trials on my macbook overnight — but it would get frustrating if i were doing a lot of this.
- supporting both num_iterations and run_until_steady means my iterator inside the model_nn function is kind of a mess right now. i think they’re itching to be two very thin subclasses of a superclass that knows all the things about neural net training, with the subclass just handling the iterator, but i didn’t spend a lot of time thinking about this.
- reshaping input files. right now it needs both input files to be the same dimensions. maybe it would be cool if it didn’t need that.
- trying different pretrained models! it would be easy to pass a different arg to load_vgg_model. it would subsequently be annoying to make sure that style_layers worked — the available layer names would be different, and load_vgg_model makes a lot of assumptions about how that model is shaped.

as your reward for reading this post, you get another cat image! a friend commented that a thing he dislikes about neural style transfer is that it’s allergic to whitespace; it wants to paint everything with a texture. this makes sense — it sees subtle variations within that whitespace and it tries to make them conform to patterns of variation it knows. this is why i ended up with the noisy_start flag; i wondered what would happen if i didn’t add the static to the initial image, so that the original negative space stayed more negative-spacey. this, as you can probably tell, uses the harlem renaissance style image.
it’s still allergic to negative space — even without the generated static there are variations in pixel color in the original — but they are much subtler, so instead of saying “maybe what i see is coiled hair?” it says “big open blue patches; we like those”. but the semantics of the original image are more in place — the kittens more kitteny, the card more readable — even though the whole image has been pushed more to colorblocks and bold lines. i find i like the results better without the static — even though the cost function is larger, and thus in a sense the algorithm is less successful. look, one more. superhero! andromeda uncategorized leave a comment january , dear internet, merry christmas; my robot made you cats from the void recently i learned how neural style transfer works. i wanted to be able to play with it more and gain some insights, so i adapted the coursera notebook code to something that works on localhost (more on that in a later post), found myself a nice historical cat image via dpla, and started mashing it up with all manner of images of varying styles culled from dpla’s list of primary source sets. (it really helped me that these display images were already curated for looking cool, and cropped to uniform size!) these sweet babies do not know what is about to happen to them. let’s get started, shall we? style image from the fake news in the s: yellow journalism primary source set. i really love how this one turned out. it’s pulled the blue and yellow colors, and the concerned face of the lower kitten was a perfect match for the expression on the right-hand muckraker. the lines of the card have taken on the precise quality of those in the cartoon — strong outlines and textured interiors. “merry christmas” the bird waves, like an eager newsboy. style image from the food and social justice exhibit. this is one of the first ones i made, and i was delighted by how it learned the square-iness of its style image. 
everything is more snapped to a grid. the colors are bolder, too, cueing off of that dominant yellow. the christmas banner remains almost readable and somehow heraldic. style image from the truth, justice, and the american way primary source set. how about christmas of steel? these kittens have broadly retained their shape (perhaps as the figures in the comic book foreground have organic detail?), but the background holly is more polygon-esque. the colors have been nudged toward primary, and the static of the background has taken on a swirl of dynamic motion lines. style image from the visual art during the harlem renaissance primary source set. how about starting with something boldly colored and almost abstract? why look: the kittens have learned a world of black and white and blue, with the background transformed into that stippled texture it picked up from the hair. the holly has gone more colorblocky and the lines bolder. style image from the treaty of versailles and the end of world war i primary source set. this one learned its style so aptly that i couldn’t actually tell where the boundary between the second and third images was when i was placing that equals sign. the soft pencil lines, the vertical textures of shadows and jail bars, the fact that all the colors in the world are black and white and orange (the latter mostly in the middle) — these kittens are positively melting before the force of wilsonian propaganda. imagine them in the hall of mirrors, drowning in gold and reflecting back at you dozens of times, for full nightmare effect. style image from the victorian era primary source set. shall we step back a few decades to something slightly more calming? these kittens have learned to take on soft lines and swathes of pale pink. the holly is perfectly happy to conform itself to the texture of these new england trees. the dark space behind the kittens wonders if, perhaps, it is meant to be lapels. 
i totally can’t remember how i found this cropped version of us food propaganda. and now for kittens from the void. brown, it has learned. the world is brown. the space behind the kittens is brown. those dark stripes were helpfully already brown. the eyes were brown. perhaps they can be the same brown, a hole dropped through kitten-space. i thought this was honestly pretty creepy, and i wondered if rerunning the process with different layer weights might help. each layer of the neural net notices different sorts of things about its image; it starts with simpler things (colors, straight lines), moves through compositions of those (textures, basic shapes), and builds its way up to entire features (faces). the style transfer algorithm looks at each of those layers and applies some of its knowledge to the generated image. so i thought, what if i change the weights? the initial algorithm weights each of five layers equally; i reran it weighted toward the middle layers and entirely ignoring the first layer, in hopes that it would learn a little less about gaping voids of brown. same thing, less void. this worked! there’s still a lot of brown, but the kitten’s eye is at least separate from its facial markings. my daughter was also delighted by how both of these images want to be letters; there are lots of letter-ish shapes strewn throughout, particularly on the horizontal line that used to be the edge of a planter, between the lower cat and the demon holly. so there you go, internet; some christmas cards from the nightmare realm. may bring fewer nightmares to us all. andromeda uncategorized comment december , december , this week in my ai after visualizing a whole bunch of theses and learning about neural style transfer and flinging myself at t-sne i feel like i should have something meaty this week but they can’t all be those weeks, i guess. 
still, i’m trying to hold myself to friday ai blogging, so here are some work notes: finished course of the deeplearning.ai sequence. yay! the facial recognition assignment is kind of buggy and poorly documented and i felt creepy for learning it in the first place, but i’m glad to have finished. only one more course to go! it’s a -week course, so if i’m particularly aggressive i might be able to get it all done by year’s end. tried making a 3d version of last week’s visualization — several people had asked — but it turned out to not really add anything. oh well. been thinking about charlie harper’s talk at swib this year, generating metadata subject labels with doc2vec and dbpedia. this talk really grabbed me because he started with the exact same questions and challenges as hamlet — seriously, the first seven and a half minutes of this talk could be the first seven and a half minutes of a talk on hamlet, essentially verbatim — but took it off in a totally different direction (assigning subject labels). i have lots of ideas about where one might go with this but right now they are all sparkling voronoi diagrams in my head and that’s not a language i can readily communicate. all done with the second iteration of my ai for librarians course. there were some really good final projects this term. yay, students! andromeda uncategorized comment december , december , though these be matrices, yet there is method in them. when i first trained a neural net on , theses to make hamlet, one of the things i most wanted to do is be able to visualize them. if doc2vec places documents ‘near’ each other in some kind of inferred conceptual space, we should be able to see some kind of map of them, yes? even if i don’t actually know what i’m doing? turns out: yes. and it’s even better than i’d imagined. , graduate theses, arranged by their conceptual similarity. let me take you on a tour! region is biochemistry. the red dots are biology; the orange ones, chemistry.
theses here include positional cloning and characterization of the mouse pudgy locus and biosynthetic engineering for the assembly of better drugs. if you look closely, you will see a handful of dots in different colors, like a buttery yellow. this color is electrical engineering & computer science, and its dots in this region include computational regulatory genomics : motifs, networks, and dynamics — that is to say, a computational biology thesis that happens to have been housed in computation rather than biology. the green south of region is physics. but you will note a bit of orange here. yes, that’s chemistry again; for example, dynamic nuclear polarization of amorphous and crystalline small molecules. if (like me), you almost majored in chemistry and realized only your senior year that the only chemistry classes that interested you were the ones that were secretly physics…this is your happy place. in fact, most of the theses here concern nuclear magnetic resonance applications. region has a striking vertical green stripe which turns out to be the nuclear engineering department. but you’ll see some orange streaks curling around it like fingers, almost suggesting three-dimensional depth. i point this out as a reminder that the original neural net embeds these , documents in a -dimensional space; i have projected that down to dimensions because i don’t know about you but i find dimensions somewhat challenging to visualize. however — just as objects may overlap in a -dimensional photo even when they are quite distant in -dimensional space — dots that are close together in this projection may be quite far apart in reality. trust the overall structure more than each individual element. the map is not the territory. that little yellow thumb by region is mathematics, now a tiny appendage off of the giant discipline it spawned — our old friend buttery yellow, aka electrical engineering & computer science. 
if you zoom in enough you find eecs absolutely everywhere, applied to all manner of disciplines (as above with biology), but the bulk of it — including the quintessential parts, like compilers — is right here. dramatically red region , clustered together tightly and at the far end, is architecture. this is a renowned department (it graduated i.m. pei!), but definitely a different sort of creature than most of mit, so it makes sense that it’s at one extreme of the map. that said, the other two programs in its school — urban studies & planning and media arts & sciences — are just to its north. region — tiny, yellow, and pale; you may have missed it at first glance — is linguistics island, housing theses such as topics in the stress and syntax of words. you see how there are also a handful of red dots on this island? they are brain & cognitive science theses — and in particular, ones that are secretly linguistics, like intonational phrasing in language production and comprehension. similarly — although at mit it is not the department of linguistics, but the department of linguistics & philosophy — the philosophy papers are elsewhere. (a few of the very most abstract ones are hanging out near math.) and what about region , the stingray swimming vigorously away from everything else? i spent a long time looking at this and not seeing a pattern. you can tell there’s a lot of colors (departments) there, randomly assorted; even looking at individual titles i couldn’t see anything. only when i looked at the original documents did i realize that this is the island of terrible ocr. almost everything here is an older thesis, with low-quality printing or even typewriting, often in a regrettable font, maybe with the reverse side of the page showing through. (a randomly chosen example; pdf download.) a good reminder of the importance of high-quality digitization labor. 
a heartbreaking example of the things we throw away when we make paper the archival format for born-digital items. and also a technical inspiration — look how much vector space we’ve had to carve out to make room for these! the poor neural net, trying desperately to find signal in the noise, needing all this space to do it. i’m tempted to throw out the entire leftmost quarter of this graph, rerun the 2d projection, and see what i get — would we be better able to see the structures in the high-quality data if they had room to breathe? and were i to rerun the entire neural net training process again, i’d want to include some sort of threshold score for ocr quality. it would be a shame to throw things away — especially since they will be a nonrandom sample, mostly older theses — but i have already had to throw away things i could not ocr at all in an earlier pass, and, again, i suspect the neural net would do a better job organizing the high-quality documents if it could use the whole vector space to spread them out, rather than needing some of it to encode the information “this is terrible ocr and must be kept away from its fellows”. clearly i need to share the technical details of how i did this, but this post is already too long, so maybe next week. tl;dr i reached out to matt miller after reading his cool post on vectorizing the dpla and he tipped me off to umap and here we are — thanks, matt! and just as clearly you want to play with this too, right? well, it’s super not ready to be integrated into hamlet due to any number of usability issues but if you promise to forgive me those — have fun. you see how when you hover over a dot you get a label with the format . -x.txt? it corresponds to a url of the format https://hamlet.andromedayelton.com/similar_to/x. go play :).
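the projection step itself is small. here's a sketch using scikit-learn's t-sne with random stand-in vectors; umap-learn's umap.UMAP exposes the same fit_transform interface, and the dimensions here are illustrative, not hamlet's actual ones:

```python
import numpy as np
from sklearn.manifold import TSNE

# stand-ins for the document vectors; the real ones are the theses'
# embeddings, in a much higher-dimensional space
rng = np.random.default_rng(0)
doc_vectors = rng.normal(size=(50, 64))

# project down to 2 dimensions for plotting; with umap-learn this would be
# umap.UMAP(n_components=2).fit_transform(doc_vectors)
coords = TSNE(n_components=2, init='random', perplexity=10,
              random_state=0).fit_transform(doc_vectors)
```

each row of coords is then one dot on the map, colored by department.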
andromeda uncategorized comments december , december , of such stuff are (deep)dreams made: convolutional networks and neural style transfer skipped fridai blogging last week because of thanksgiving, but let’s get back on it! top-of-mind today are the firing of ai queen timnit gebru (letter of support here) and a couple of grant applications that i’m actually eligible for (this is rare for me! i typically need things for which i can apply in my individual capacity, so it’s always heartening when they exist — wish me luck). but for blogging today, i’m gonna talk about neural style transfer, because it’s cool as hell. i started my ml-learning journey on coursera’s intro ml class and have been continuing with their deeplearning.ai sequence; i’m on course of there, so i’ve just gotten to neural style transfer. this is the thing where a neural net outputs the content of one picture in the style of another: via https://medium.com/@build_it_for_fun/neural-style-transfer-with-swift-for-tensorflow-b b . ok, so! let me explain while it’s still fresh. if you have a neural net trained on images, it turns out that each layer is responsible for recognizing different, and progressively more complicated, things. the specifics vary by neural net and data set, but you might find that the first layer gets excited about straight lines and colors; the second about curves and simple textures (like stripes) that can be readily composed from straight lines; the third about complex textures and simple objects (e.g. wheels, which are honestly just fancy circles); and so on, until the final layers recognize complex whole objects. you can interrogate this by feeding different images into the neural net and seeing which ones trigger the highest activation in different neurons. below, each × grid represents the most exciting images for a particular neuron. 
you can see that in this network, there are layer 1 neurons excited about colors (green, orange), and about lines of particular angles that form boundaries between dark and colored space. in layer 2, these get built together like tiny image legos; now we have neurons excited about simple textures such as vertical stripes, concentric circles, and right angles. via https://adeshpande .github.io/the- -deep-learning-papers-you-need-to-know-about.html, originally from zeiler & fergus, visualizing and understanding convolutional networks so how do we get from here to neural style transfer? we need to extract information about the content of one image, and the style of another, in order to make a third image that approximates both of them. as you already expect if you have done a little machine learning, that means that we need to write cost functions that mean “how close is this image to the desired content?” and “how close is this image to the desired style?” and then there’s a wrinkle that i haven’t fully understood, which is that we don’t actually evaluate these cost functions (necessarily) against the outputs of the neural net; we actually compare the activations of the neurons, as they react to different images — and not necessarily from the final layer! in fact, choice of layer is a hyperparameter we can vary (i super look forward to playing with this on the coursera assignment and thereby getting some intuition). so how do we write those cost functions? the content one is straightforward: if two images have the same content, they should yield the same activations. the greater the differences, the greater the cost (specifically via a squared error function that, again, you may have guessed if you’ve done some machine learning). the style one is beautifully sneaky; it’s a measure of the difference in correlation between activations across channels. what does that mean in english? well, let’s look at the van gogh painting, above.
if an edge detector is firing (a boundary between colors), then a swirliness detector is probably also firing, because all the lines are curves — that’s characteristic of van gogh’s style in this painting. on the other hand, if a yellowness detector is firing, a blueness detector may or may not be (sometimes we have tight parallel yellow and blue lines, but sometimes yellow is in the middle of a large yellow region). style transfer posits that artistic style lies in the correlations between different features. see? sneaky. and elegant. finally, for the style-transferred output, you need to generate an image that does as well as possible on both cost functions simultaneously — getting as close to the content as it can without unduly sacrificing the style, and vice versa. as a side note, i think i now understand why deepdream is fixated on a really rather alarming number of eyes. since the layer choice is a hyperparameter, i hypothesize that choosing too deep a layer — one that’s started to find complex features rather than mere textures and shapes — will communicate to the system, yes, what i truly want is for you to paint this image as if those complex features are matters of genuine stylistic significance. and, of course, eyes are simple enough shapes to be recognized relatively early (not very different from concentric circles), yet ubiquitous in image data sets. so…this is what you wanted, right? the eager robot helpfully offers. https://www.ucreative.com/inspiration/google-deep-dream-is-the-trippiest-thing-in-the-internet/ i’m going to have fun figuring out what the right layer hyperparameter is for the coursera assignment, but i’m going to have so much more fun figuring out the wrong ones. andromeda uncategorized comments december , december ,
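the two cost functions from that post can be sketched in a few lines of numpy. this is the shape of the idea rather than the course's exact tensorflow implementation; the normalization constant follows the usual gatys-style formulation:

```python
import numpy as np

def gram_matrix(activations):
    """Correlations between channels: activations is H x W x C."""
    h, w, c = activations.shape
    flat = activations.reshape(h * w, c)
    return flat.T @ flat  # C x C matrix of channel co-activations

def content_cost(a_content, a_generated):
    # squared error between activations: same content, same activations
    return np.mean((a_content - a_generated) ** 2)

def style_cost(a_style, a_generated):
    # squared error between Gram matrices, normalized by layer size
    h, w, c = a_style.shape
    g_style, g_generated = gram_matrix(a_style), gram_matrix(a_generated)
    return np.sum((g_style - g_generated) ** 2) / (4 * (h * w * c) ** 2)

# identical activations should cost nothing under both functions
a = np.ones((4, 4, 3))
```

the generated image is then whatever pixel array minimizes a weighted sum of the two costs, measured against two different target images at once.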
the distant reader – project gutenberg to study carrel

use this page to: 1) search a subset of the venerable project gutenberg, 2) refine the results, and 3) create a study carrel from the list of found items. the content of project gutenberg is strong on english literature, american literature, and western philosophy, but just about any word or phrase will return something. this is a quick and easy way to create carrels whose content represents the complete works of a given author or an introduction to a given subject. enter a word, a few words, or a few words surrounded by quote marks to begin. the index supports an expressive query language, described in a blog posting. stymied? search for everything and use the resulting page to limit your query.
attorney general james ends virtual currency trading platform bitfinex's illegal activities in new york | new york state attorney general

press release, february

bitfinex and tether must submit to mandatory reporting on efforts to stop new york trading. bitfinex and tether deceived clients and market by overstating reserves, hiding approximately $ million in losses around the globe.

new york – new york attorney general letitia james today continued her efforts to protect investors from fraudulent and deceptive virtual or "crypto" currency trading platforms by
requiring bitfinex and tether to end all trading activity with new yorkers. millions around the country and the world today use virtual currencies as decentralized digital currencies — unlike real, regulated government currencies, including the u.s. dollar — to buy goods and services, oftentimes anonymously, through secure online transactions. stablecoins, specifically, are virtual currencies that are always supposed to have the same real-dollar value. in the case of tether, the company represented that each of its stablecoins was backed one-to-one by u.s. dollars in reserve. however, an investigation by the office of the attorney general (oag) found that ifinex — the operator of bitfinex — and tether made false statements about the backing of the "tether" stablecoin, and about the movement of hundreds of millions of dollars between the two companies to cover up the truth about massive losses by bitfinex. an agreement with ifinex, tether, and their related entities will require them to cease any further trading activity with new yorkers, as well as force the companies to pay $ . million in penalties, in addition to requiring a number of steps to increase transparency. "bitfinex and tether recklessly and unlawfully covered-up massive financial losses to keep their scheme going and protect their bottom lines," said attorney general james. "tether's claims that its virtual currency was fully backed by u.s. dollars at all times was a lie. these companies obscured the true risk investors faced and were operated by unlicensed and unregulated individuals and entities dealing in the darkest corners of the financial system. this resolution makes clear that those trading virtual currencies in new york state who think they can avoid our laws cannot and will not. last week, we sued to shut down coinseed for its fraudulent conduct. this week, we're taking action to end bitfinex and tether's illegal activities in new york.
these legal actions send a clear message that we will stand up to corporate greed whether it comes out of a traditional bank, a virtual currency trading platform, or any other type of financial institution."

a stablecoin without stability – tethers weren't fully backed at all times

the oag's investigation found that, starting no later than mid- , tether had no access to banking, anywhere in the world, and so for periods of time held no reserves to back tethers in circulation at the rate of one dollar for every tether, contrary to its representations. in the face of persistent questions about whether the company actually held sufficient funds, tether published a self-proclaimed 'verification' of its cash reserves, in , that it characterized as "a good faith effort on our behalf to provide an interim analysis of our cash position." in reality, however, the cash ostensibly backing tethers had only been placed in tether's account as of the very morning of the company's 'verification.' on november , , tether publicized another self-proclaimed 'verification' of its cash reserve; this time at deltec bank & trust ltd. of the bahamas. the announcement linked to a letter dated november , , which stated that tethers were fully backed by cash, at one dollar for every one tether. however, the very next day, on november , , tether began to transfer funds out of its account, ultimately moving hundreds of millions of dollars from tether's bank accounts to bitfinex's accounts. and so, as of november , — one day after their latest 'verification' — tethers were again no longer backed one-to-one by u.s. dollars in a tether bank account. as of today, tether represents that over billion tethers have been issued and are outstanding and traded in the market.
when no bank backs you, turn to shady entities — bitfinex hid massive losses

in and , bitfinex began to increasingly rely on third-party "payment processors" to handle customer deposits and withdrawals from the bitfinex trading platform. in , while attempting to "move money [more] efficiently," bitfinex suffered a massive and undisclosed loss of funds because of its relationship with a purportedly panama-based entity known as "crypto capital corp." bitfinex responded to pervasive public reports of liquidity problems by misleading the market and its own clients. on october , , bitfinex claimed to "not entirely understand the arguments that purport to show us insolvent," when, for months, its executives had been pleading with crypto capital to return almost a billion dollars in assets. on april , — after the oag revealed in court documents that approximately $ million had gone missing and that bitfinex and tether had been misleading their clients — the company issued a false statement that "we have been informed that these crypto capital amounts are not lost but have been, in fact, seized and safeguarded." the reality, however, was that bitfinex did not, in fact, know the whereabouts of all of the customer funds held by crypto capital, and so had no such assurance to make.

the oag investigation shines a light on unlawful trading in new york state

from the beginning of its interaction with the oag, ifinex and tether falsely claimed that they did not allow trading activity by new yorkers. the oag investigation determined that to be untrue and that the companies have operated for years as unlicensed and unregulated entities, illegally trading virtual currencies in the state of new york. in april , the oag sought and obtained an injunction against further transfers of assets between and among bitfinex and tether, which are owned and controlled by the same small group of individuals.
that action — under section of new york's martin act — ultimately led to a july decision by the new york state appellate division of the supreme court, first department, holding that:

bitfinex and tether — and other virtual currency trading platforms and cryptocurrencies operating from various locations around the world — are still subject to oag jurisdiction if doing business in new york;

the stablecoin "tether" and other virtual currencies were "commodities" under section of the martin act, and noted that virtual currencies may also constitute securities under the act; and

the oag had established the factual predicate necessary to uphold the injunction and require production of documents and information relevant to its investigation in advance of the filing of a formal suit.

bitfinex and tether banned from continuing illegal activities in new york

today's agreement requires bitfinex and tether to discontinue any trading activity with new yorkers. in addition, these companies must submit regular reports to the oag to ensure compliance with this prohibition. further, the companies must submit to mandatory reporting on core business functions. specifically, both bitfinex and tether will need to report, on a quarterly basis, that they are properly segregating corporate and client accounts, including segregation of government-issued and virtual currency trading accounts by company executives, as well as submit to mandatory reporting regarding transfers of assets between and among bitfinex and tether entities. additionally, tether must offer public disclosures, by category, of the assets backing tethers, including disclosure of any loans or receivables to or from affiliated entities. the companies will also provide greater transparency and mandatory reporting regarding the use of non-bank "payment processors" or other entities used to transmit client funds. finally, bitfinex and tether will be required to pay $ . million in penalties to the state of new york.
in september , the oag issued its virtual markets integrity initiative report, which highlighted the "substantial potential for conflicts between the interests" of virtual currency trading platforms, insiders, and issuers. bitfinex was one of the trading platforms examined in the report.

this matter was handled by senior enforcement counsel john d. castiglione and assistant attorneys general brian m. whitehurst and tanya trakht of the investor protection bureau; assistant attorneys general ezra sternstein and johanna skrzypczyk of the bureau of internet and technology; and legal assistant charmaine blake — all supervised by bureau of internet and technology chief kim berger and senior enforcement counsel for economic justice kevin wallace. the investor protection bureau is led by chief peter pope. both the bureau of internet and technology and the investor protection bureau are part of the division for economic justice, which is overseen by chief deputy attorney general chris d'angelo and first deputy attorney general jennifer levy.

max planck vlib news

sfx link resolver: mpg/sfx server maintenance, thursday june, - pm (june, eia). the mpg/sfx server will undergo scheduled maintenance due to a hardware upgrade. the downtime will start at pm. services are expected to be back after approximately one hour. we apologize for any inconvenience.

sfx link resolver: mpg/sfx server maintenance, tuesday december, - pm (november, eia). the database of the mpg/sfx server will undergo scheduled maintenance. the downtime will start at pm. services are expected to be back after minutes. we apologize for any inconvenience.

resources, sfx link resolver: how to get elsevier articles after december (december, inga). the max planck digital library has been mandated to discontinue their elsevier subscription when the current agreement expires on december . read more about the background in the full press release. nevertheless, most journal articles published until that date will remain available, due to the rights stipulated in the mpg contracts to date.
to fulfill the content needs of max planck researchers when elsevier shuts off access to recent content at the beginning of january, the max planck libraries and mpdl have coordinated the setup of a common document order service. this will be integrated into the mpg/sfx interface and can be addressed as follows:

step 1: search in sciencedirect, start in any other database, or enter the article details into the mpg/sfx citation linker.
step 2: click the mpg/sfx button. note: in sciencedirect, it appears in the "get access" section at the top of those article pages for which the full text is no longer available.
step 3: check the options in the service menu presented to you, e.g. freely available full-text versions (if available).
step 4: to order the article via your local library or the mpdl, select the corresponding link, e.g. "request document via your local library". please note that the wording might differ slightly according to your location.
step 5: add your personal details to the order form in the next screen and submit your document request.

the team in your local library or at the mpdl will get back to you as soon as possible. please feel free to contact us if you face any problem or want to raise a question. update: check out our new flyer "how to deal with no subscription deal" prepared in cooperation with max planck's phdnet. (elsevier, document-delivery)

resources, aleph: multipool search – parallel searching across mpg library catalogs (november, inga). update: the multipool search is now also available as a web interface. the multipool expert mode in the aleph cataloging client is used for fast simultaneous searches across several databases. the databases can either reside directly on the aleph server or be connected as external resources via the z . protocol. in addition to the local libraries, the mpi library catalog in the gbv is already preconfigured on the aleph server.
the multipool function can be found in the search area of the aleph cataloging client: the search query is entered below the area for selecting the relevant databases. notes on the command language used can be found in the aleph help. after the query is submitted, the result list — the databases and their respective hit counts — is displayed in the lower frame; a double-click opens an individual result set. for shared catalogs – such as the mpi library catalog in the gbv – the holding library is shown in the full record display. to set up the multipool search, the configuration files used by the local aleph client (library.ini and searbase.dat) must be extended. on request, we are happy to share the files we use. further information can also be found in the aleph wiki: downloading and installing the aleph client; setting up additional z . connections.

aleph, vlib portal: goodbye vlib! shutdown after october (october, inga). in the max planck virtual library (vlib) was launched, with the idea of making all information resources relevant for max planck users simultaneously searchable under a common user interface. since then, the vlib project partners from the max planck libraries, information retrieval services groups, the gwdg and the mpdl invested much time and effort to integrate various library catalogs, reference databases, full-text collections and other information resources into metalib, a federated search system developed by ex libris. with the rise of large search engines and discovery tools in recent years, usage slowly shifted away and the metasearch technology applied no longer fulfilled users' expectations. therefore, the termination of most vlib services was announced two years ago and now we are approaching the final shutdown: the vlib portal will cease to operate after the th of october.
as you know, there are many alternatives to the former vlib services: mpg.rena will remain available for browsing and discovering electronic resources available to max planck users. in addition, we'll post some information on how to cross-search max planck library catalogs soon. let us take the opportunity to send a big "thank you!" to all vlib users and collaborators within and outside the max planck society. it always was and will continue to be a pleasure to work with and for you. goodbye!… and please feel free to contact us in case of any further question.

mpg.ebooks, sfx link resolver: https only for mpg/sfx and mpg.ebooks (november, eia). as of next week, all http requests to the mpg/sfx link resolver will be redirected to a corresponding https request. the max planck society electronic book index is scheduled to be switched to https-only access the week after, starting on november . regular web browser use of the above services should not be affected. please thoroughly test any solutions that integrate these services via their web apis. please consider re-subscribing to mpg.ebooks rss feeds. (ebooks, https, rss)

sfx link resolver: https enabled for mpg/sfx (june, inga). the mpg/sfx link resolver is now alternatively accessible via the https protocol. the secure base url of the productive mpg/sfx instance is: https://sfx.mpg.de/sfx_local. https support enables secure third-party sites to load or to embed content from mpg/sfx without causing mixed content errors. please feel free to update your applications or your links to the mpg/sfx server. (https, resources)

citation trails in primo central index (pci) (june, inga). the may release brought an interesting functionality to the primo central index (pci): the new "citation trail" capability enables pci users to discover relevant materials by providing cited and citing publications for selected article records.
at this time the only data source for the citation trail feature is crossref, thus the number of citing articles will be below the "cited by" counts in other sources like scopus and web of science. further information: a short video demonstrating the citation trail feature (by ex libris) and a detailed feature description (by ex libris). (pci, primo central index, scopus, web of science)

sfx link resolver: mpg/sfx server maintenance, wednesday april, - am (april, inga). the mpg/sfx server updates to a new database (mariadb) on wednesday morning. the downtime will begin at am and is scheduled to last until am. we apologize for any inconvenience.

resources: proquest illustrata databases discontinued (april, inga). last year, the information provider proquest decided to discontinue its "illustrata technology" and "illustrata natural science" databases. unfortunately, this represents a preliminary end to proquest's multi-year investment in deep indexing content. in a corresponding support article proquest states that there "[…] will be no loss of full text and full text + graphics images because of the removal of deep indexed content". in addition, they announce that they will "[…] develop an even better way for researchers to discover images, figures, tables, and other relevant visual materials related to their research tasks". the mpg.rena records for proquest illustrata: technology and proquest illustrata: natural science have been marked as "terminating" and will be deactivated soon.

in short: in this blog you'll find updates on information resources, vendor platforms and access systems provided by the max planck digital library. use mpg.rena to search and browse through the journal collections, ebook collections and databases available to mpg researchers.
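a side note for anyone integrating the mpg/sfx resolver discussed in the posts above: a resolver link is just the https base url quoted there plus a query string of citation fields. a minimal sketch, under assumptions — the field names follow the common openurl 0.1 convention that sfx resolvers generally accept (whether mpg/sfx honors exactly this set is not confirmed here), and the citation values are invented placeholders:

```python
from urllib.parse import urlencode

# https base url from the blog post; everything after "?" is an
# openurl-style query string built from citation fields.
SFX_BASE = "https://sfx.mpg.de/sfx_local"

def sfx_link(**citation):
    """build a resolver link from citation fields (hypothetical helper;
    field names follow openurl 0.1 conventions, sorted for stable urls)."""
    return SFX_BASE + "?" + urlencode(sorted(citation.items()))

url = sfx_link(genre="article", issn="0000-0000",
               volume="13", spage="1", atitle="example article title")
```

because the base url is https, links built this way can be embedded in secure third-party pages without the mixed-content errors the https announcement mentions.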
chocolate kiddies european tour (from wikipedia, the free encyclopedia)

the chocolate kiddies is a three-act broadway-styled revue that, in its inaugural production – from may to september – toured berlin, hamburg, stockholm, and copenhagen. the show never actually performed on broadway,[ ] but was conceived, assembled, and rehearsed there. chocolate kiddies commissioned new works, but was also an amalgamation and adaptation of several leading african american acts in new york, specifically harlem, intended to showcase exemplary jazz and african american artistry of the harlem renaissance. early jazz was uniquely american; and, while new orleans enjoys popularity for being its birthplace, the jazz emerging from harlem during the renaissance had, on its own merits, captured international intrigue.[ ]

history

see also: history of the jews in russia and russian civil war

the impetus for producing the chocolate kiddies was partly a culmination or outgrowth of (i) the success of a harlem (and atlantic city) jazz band led by sam wooding ( – ) and a floor show, initially developed for the opening of the nest club, and (ii) the success of eubie blake and noble sissle's broadway musical, the chocolate dandies, which, after performances, closed november , ,
leaving some of the cast available, from which the chocolate kiddies picked up choreographer charlie davis and singer lottie gee. the cast included singer adelaide hall, who came from the miller and lyles broadway production runnin' wild, the three eddies, rufus greenlee and thaddeus drayton, bobbie and babe goins, charles davis, and sam wooding and his orchestra. leoni leonidoff[ ] (né leonid davydovich leonidoff-bermann; born abt. ) became the owner-producer of the chocolate kiddies tour. he was a russian-jewish exile living in berlin as a theatrical impresario. leonidoff's introduction to wooding was possibly influenced by a russian-jewish-born american impresario living in new york, morris gest ( – ),[ ] and his brother and partner, sam gest ( – ), an impresario living in berlin. leonidoff, in , signed wooding to take his band on a european tour, provided that a musical revue was added.[ ] russian-born jewish american impresario arthur seymour lyons[ ] ( – ) staged an adaptation and, for several weeks prior to departure, rehearsed the company at bryant hall.[ ] before settling on the name chocolate kiddies, the show had been billed as the club alabam revue and club alabam fantasies.[note ] duke ellington, with jo trent as lyricist, composed four songs for the production – his first work for the musical revue genre.[ ]

departure

after a farewell reception at the bamville club in harlem two days earlier, over theatrical professionals swarmed the white star line pier (either pier or ; current site of chelsea piers) on may , , as wooding, his band, and the revue performers boarded the ss arabic and departed for hamburg.[ ] members of the revue who did not travel aboard the ss arabic included helen miles, willie robbins, arthur robbins, ruth williams, and evelyn dove, who traveled from london.[ ] lottie gee was aboard as lottie kyer – she had been married from to to pianist "peaches" kyer (né wilson harrison kyer; – ).[ ]

arrival and tour

the company
arrived in hamburg may , , and traveled to berlin, arriving may and opening may at the admiralspalast, where they performed for weeks. one of the audience members, -year-old berliner alfred lion, later said, "it was the first time i saw colored musicians and heard the music. i was flabbergasted . . . – it was something brand new, but it registered with me right away." thirteen years later, in , lion co-founded blue note records in new york.[ ] the chocolate kiddies orchestra also did recording sessions in berlin june – , , at vox records. on july , chocolate kiddies opened in hamburg at the thalia theater for performances, ending august . then stockholm, opening august and closing september . the stockholm performances included a benefit for the swedish red cross, for the brother of the king.[ ] then they performed in copenhagen in the circus building,[ ] opening september , closing september .[ ][ ][ ]

la revue nègre opening in paris

hotsy totsy, a tab dance revue backed by the charleston jazz band, led by claude hopkins, renamed la revue nègre, opened in paris october , . the cast included will marion cook and josephine baker. at least one chocolate kiddies cast member, lydia jones, joined the production.[ ]

production personnel and cast

production

book and staging: arthur seymour lyons
music: joe trent (né joseph hannibal trent; – ), lyrics; duke ellington, music
"deacon jazz" – prior to the debut of the chocolate kiddies, jo trent and the deacons recorded "deacon jazz" c.
november in new york; jo trent (vocals); otto hardwick (c-melody sax); duke ellington (piano); george francis (banjo); sonny greer (drums) – discographer brian rust lists fred guy on banjo; matrix t- - ; jazz panorama jplp
songs: "jig walk," charleston; "jim dandy"; "with you"[ ]
orchestration: arthur johnston ( – )
choreographer: charles davis (né charles columbus davis; – ) ‡ [note ][ ]
set design and costumes: willy pogany ( – )
publisher: robbins-engel; oclc

cast

sam wooding's orchestra from club alabam:
sam wooding, piano, leader
willie lewis ( – ), clarinet
eugene sedric ( – ), clarinet, tenor sax
garvin bushell ( – ), clarinet, alto sax, oboe, bassoon
tommy ladnier ( – ), trumpet
bobby martin ( – ), trumpet
maceo elmer edwards ( – ), trumpet
herb flemming ( – ), trombone
john warren, tuba
johnny mitchell, banjo
george howe ( – ),[note ] drums

huvudroller (swedish: leading roles):
greenlee & drayton: rufus greenlee ( – )[note ] and thaddeus "teddy" drayton ( – )[ ]
the three eddies: shakey beasley (né clarence beasley; born abt. ), earle "tiny" ray ( – ), chick horsey ( – )[note ]

leading roles (continued):
evelyn dove ( – )
margaret sims (maiden – )[note ][ ][ ]
bobby and babe goins (acrobatic dancers):[ ][ ][ ][note ] robert goins (né walter robert goins; – ) and mary goins (née mary or marie hall; )

leading roles (continued):
arthur "strut" payne (né arthur henry payne; – ), baritone
adelaide hall ( – )[ ]
lottie gee ( – )
charles davis ( – )[note ]
george staton (né george franklin staten; – )
willy robbins – in one segment, robbins and chick horsey performed "two happy boys" in blackface. according to recollections of garvin bushell, "they sang a song, then they went 'wah wah, wah wah,' and bobby martin emulated their speech with his trumpet. that was a big hit.
they were trying to do johnny hudgins's act."[ ][ ]
jessie crawford
arabella fields ( – )
lydia jones[note ]
helen miles
ruth williams

prisbelönta dansöser från new yorks största neger teatrar (swedish: prize-winning dancers from new york's greatest black theaters):
allegritta anderson ( – )[note ]
viola ("jap") branch
pearl brown
marie bushell (née marie roberts; – )[note ]
thelma green ( – ), wife of rufus greenlee[note ]
bernice miles (née bernice m. miles; – )
rita walker ( – )[note ]
thelma watkins (maiden; – )[note ]
mamie savoy
bobbie vincent ( – )[note ][ ][ ]
arthur robbins

selected songs

from act "night life in a negro cafe in harlem in new york":
"deacon jazz," sung by adelaide hall with chorus

from act "symphonic concert jazz concert by the sam wooding orchestra of the club alabam, new york":
"indian love call"
"st. louis blues"
"medley of american hits"
"shanghai shuffle" – vox (audio) ( recording)
"alabamy bound" – vox (audio) ( recording)
"o katharina," l. wolfe gilbert ( – ) (words), richard fall ( – ) (music) – vox; re-released on jazz panorama (vinyl) lp (audio) ( recording)
"by the waters of minnetonka," thurlow lieurance ( – ) (w&m) – vox; re-released on jazz oracle (sv) bdw (cd vol. of ) (audio)

from act :
"jim dandy," a strut dance
"with you," sung by lottie gee
"jig walk," charleston, to which an ensemble danced the charleston

gallery

while in berlin, the band recorded several selections for the berlin-based vox label. sam wooding and his orchestra, aka the chocolate kiddies. photo taken at the vox phonograph studio — sam wooding and his orchestra; seated, left to right: tommy ladnier (trumpet), john warren (tuba) (behind), sam wooding (piano/leader), willie lewis (reeds), george howe ( – ) (drums).
standing, left to right: herb flemming (trombone), eugene sedric (reeds), johnny mitchell (banjo), bobby martin (trumpet), garvin bushell (reeds), maceo elmer edwards ( – ) (trumpet).[ ] not pictured: arthur lange ( – ), arthur johnston ( – ), arrangers.

selected subsequent tours

– chocolate kiddies russian tour
– sam wooding and the chocolate kiddies, with much of the cast, performed in argentina in for six months, returning to new york december , , aboard the voltaire (de).
– sam wooding and his orchestra, billed as the "chocolate kiddies orchestra," toured spain in , without the chorus and dancers. they performed in san sebastián, madrid, and barcelona. the tour has been chronicled as spain's first live jazz performances by americans. on july , , while in barcelona, the orchestra recorded ten songs for parlophon. eight of the songs were recorded twice, to accommodate different record formats. musicians: bobby martin ( – ) (trumpet, vocals), doc cheatham ( – ) (trumpet, vocals, arranger), albert wynn ( – ) (trombone), billy burns ( – ) (trombone), willie lewis ( – ) (clarinet, alto sax, bari sax, vocals), jerry blake ( – ) (clarinet, alto sax, vocals), gene sedric ( – ) (clarinet, tenor sax, vocals), freddy johnson ( – ) (piano, vocals, arranger), johnny mitchell (banjo, guitar), sumner leslie "king" edwards ( – ) (tuba, bass), ted fields (né edward fields; – ) (drums), sam wooding (director)

bibliography

williams, iain cameron. underneath a harlem moon: the harlem to paris years of adelaide hall (archived - - at the wayback machine). bloomsbury publishers, isbn  - - - . chapter , "the chocolate kiddies come to town," is devoted to the chocolate kiddies tour.

notes and references

notes

^ club alabam was a broadway after-theater jazz club set in a rathskeller of the now bygone th street theatre at west th street.
the club had operated since the theater's inception in , originally known as the "little club," but became known as "club alabam" in and thereafter flourished with jazz. following prohibition, the bar closed and remained vacant for years. on march , —  months and  days after the united states officially entered world war ii — the american theatre wing reopened it as the stage door canteen for american and allied servicemen. the property, as of , was owned and managed by the kushner companies. ^ a b charles davis aka c. columbus davis ( – ), when he died, was living in englewood, new jersey, at reis avenue. his big break came as a principal dancer in shuffle along, after which he rapidly rose to notability as a choreographer at the apollo and lafayette theatres in harlem. davis also did the choreography for the broadway musical rang tang. he and his wife, cecile ( – ), had two daughters, meta j. davis ( – ) and anna l. davis ( – ) (duke ellington's music for the theatre, by john childs franceschina, mcfarland & company, , p. ; oclc  ) ^ george howe (aka george washington howe; né robert washington howe; – ), drummer, led the house band at the nest club beginning , when teddy hill was in the band. luis russell replaced howe in . howe had been the drummer for sam wooding. howe and fellow musician george e. dyer ( – ), both living in glens falls in the mid- s, drowned november , , in the champlain canal, near fort ann, after the car driven by howe was submerged in the canal as a result of side-swiping a -ton truck (u.s. class ) towing another -ton truck on comstock road. they were returning from a gig at a nightclub operated by maxie gordon (né maxime godon gordon; – ) – two miles from whitehall. they had left hudson falls at : am and were followed by jimmie gillespie and banjoist percy richardson. trombonist benny morton ( – ), also at the gig, decided at the last minute not to ride with howe and dyer. howe was buried at cypress hills national cemetery, brooklyn.
^ rufus greenlee (né rufus edward greenlee; – ), born in asheville, north carolina, settled in new haven, connecticut. from about - to about , he was married to cast member thelma greene (see greene's brief bio in notes). in greenlee's post-vaudeville days, beginning in the mid- s, until his death, he owned and operated the monterey cafe, a jazz venue in new haven at – dixwell avenue in the dixwell neighborhood. johnny "hammond" smith's album, black coffee, was recorded there. as a side note, his grandnephew, lou jones ( – ), was an olympic gold medalist in the × metres relay in the summer games. ^ chick horsey (né layburn horsey; – ), born in chester, pennsylvania, died july , , in naples, italy, at the hotel diana. ^ margaret sims (maiden; – ), born in washington, d.c., was a blues singer. in , the new york age stated that she epitomized the complete metamorphosis of the blues singer. she married broadway producer irvin colloden miller ( – ). her sister, edith g. sims ( – ), was the second of three wives of actor jimmy baskett ( – ). ^ philadelphia-born bobby goins (né walter robert goins; – ) and babe (aka "baby") goins (née mary or marie hall; born ) were a husband-and-wife acrobatic dance team who, with productions, toured europe several times. bobby and babe married april , , in manhattan – seventeen days prior to embarking on the european tour. that tour, incidentally, was babe's first break as a performer. babe, who toured europe three more times, won acclaim on all tours. according to an article published by the new york age (may , ), babe was born in havana, cuba; though some sources give washington, d.c., as her place of birth. at age three babe moved to washington, d.c. bobby and babe divorced around . bobby – on january , , in manhattan – married again to irene winifred bennett (maiden; – ). babe – on may , , in manhattan – married again to william f. joyce, jr.
however, an article in the new york age (may ) stated that babe and joyce were married (in ). bobby went on to dance as a member of the crackerjacks, a dance team that influenced the next generation of professional dancers in harlem. bobby became a member of the american guild of variety artists. ^ lydia jones was portrayed in a one-woman off-broadway play, the sensational josephine baker, by cheryl howard (née cheryl gay alley; born ), wife of ron howard. ^ allegretta anderson (née alegretta summers; – ) was a chicago-born actress who – on may , , in chicago – married julian kirby anderson ( – ). she was later married to agaton h. magboo. she acted in the film georgia rose. ^ marie bushell (née marie roberts; – ) was the first of two wives of garvin bushell. they married july , , in manhattan. ^ thelma greene (née thelma m. contee; – ) was, from about to about , married to cast member rufus greenlee. her stage surname was that of her ex-husband, jesse warren greene (born ), whom she married around , and with whom she had a daughter, jessica iris greene ( – ), who, in , married edward d. julian austin. thelma, between and , married leonard g. hyman (né leonard grimke hyman; born ), who, from to , had been head of the photography division and official photographer at the tuskegee institute. leonard hyman's predecessors and successors there were distinguished photographers, notably frances benjamin johnston ( – ) and c. m. battey ( – ), whom he replaced. p. h. polk ( – ) was, at the time, one of hyman's assistants, and in , succeeded him as head of photography. ^ rita walker (maiden – ) – on april , , in manhattan – married dennis ragland ( – ). ^ thelma watkins (maiden; – ) was born in philadelphia. at some point, she married james adrian mcdaniel (born c. ).
the date-of-birth on her birth certificate – november , – differs from the date listed on two ship manifests:
ss arabic, which departed new york may , , and arrived in hamburg may , (storyville, vol. , august–september , p. )
ss nieuw amsterdam, which departed boulogne-sur-mer february , , and arrived in new york march , (ancestry.com)
^ bobbie vincent (née bobbye vincson; – ), born in kansas city, missouri, went on to perform with other companies, including performances in buenos aires in with clarence robinson's cotton club revue company, booked by b and b artists bureau of harlem, headed by william b. cohen and b.l. burtt (né bernard lamberson burtt; – ). more than two decades later, she was an official of the x-glamour girls revue, composed of former entertainers from the cotton club. the company performed in october , , at the riviera ballroom in new york at broadway and rd street. one of her older sisters, flash amber vincson ( – ), married – on august , , in chicago – vaudeville entertainer buck washington ( – ) of buck and bubbles.

references[edit]

^ a b "jazz as deliverance: the reception and institution of american jazz during the weimar republic," by susan carol cook, phd, american music (peer-reviewed journal published by the university of illinois press), vol. , no. , special jazz issue, spring , pps. - ; oclc  , (article), issn  - (publication) (accessible via jstor at www.jstor.org/stable/ ; subscription required) ^ traveling blues: the life and music of tommy ladnier, by bo lindström (born ) and daniel vernhettes (born ), paris: jazz'edit ( ); oclc  , ^ "berlin colored revue," billboard, may , ^ a b the jazz republic: music, race, and american culture in weimar germany, by jonathan o. wipplinger, university of michigan press ( ), pps. – ; oclc  ^ "colored show scores big in berlin – chocolate kiddies with russian owner, opens at admiral palast," variety, vol. , no. , may , , p.
^ a b "'chocolate kiddies company sails for germany – over professionals swarm white star line pier as s.s. arabic leaves – hundreds of co-theatrical celebrities attend 'farewell' as guest of colored vaudeville comedy club," by floyd g. snelson, jr. (né floyd grant snelson, jr.; – ), pittsburgh courier, may , , p. (accessible via newspapers.com; subscription required) ^ the biographical encyclopedia of jazz, leonard feather (posthumous ed.) and ira gitler (ed.), oxford university press ( ), p. ^ a b "sam wooding and the chocolate kiddies at the thalia-theater in hamburg th july, to th august, ," by bernhard h. behncke, storyville, vol. , august–september , p. – (accessible via national jazz archive at link) ^ an encyclopedia of south carolina jazz and blues musicians (re: "william harrison 'peaches' kyer"), by benjamin franklin, university of south carolina press ( ); oclc  ^ voices of the jazz age: profiles of eight vintage jazzmen, by chip deffaa, university of illinois press ( , ), p. ; oclc  ^ a b "the chocolate kiddies in copenhagen," by hans larsen, the record changer, no. , april , pps. – ^ "denne bande frække sorte slubberter – sam wooding i københavn, " [this gang of cheeky black scoundrels – sam wooding in copenhagen] (in danish), by erik wiedemann (de) ( – ), musik und forskning, , københavn , pps. – ; issn  - x ^ "staging the great migration: the chocolate kiddies and the german experience of the new negro renaissance," by paul j. edwards, modernism/modernity (johns hopkins university press ), vol. , cycle , august , ; oclc  ^ "will marion cook and the tab show, with particular emphasis on hotsy totsy and la revue nègre ( )," by peter m. lefferts, university of nebraska-lincoln ( ) ^ chocolate kiddies, coloured revue (sic), presented by arthur s. lyons, robbins-engel; european distributor – berlin: victor alberti of the musikalienhandlung graphisches kabinett (© ); oclc  note: alberti's (born abt. in miskolc, hungary) music shop, until , was in berlin on rankestrasse (de).
he was a music publisher and distributor, well-known for distributing jazz sheet music and recordings. by , he was partners with armin l. robinson (de) ( – ) ^ "obituary: charlie davis," new york daily news, september , , p. , col. (accessible via newspapers.com; subscription required) ^ thaddeus drayton collection; – at schomburg center for research in black culture, manuscripts, archives and rare books division at the new york public library oclc  ^ "pleases galleryites," new york age, june , , p. (accessible via newspapers.com; subscription required) ^ "margaret sims dies in new york," pittsburgh courier, march , , p. (accessible via newspapers.com; subscription required) ^ "death notices: goins, walter robert, sr.," new york daily news, december , , p. , col. (accessible via newspapers.com; subscription required) ^ "on the spot – baby goins," by dean glynn, new york age, may , , p. (accessible via newspapers.com; subscription required) ^ jazz dance: the story of american vernacular dance, marshall stearns and jean stearns: ( ed.) collier-macmillan ( ), oclc  ; macmillan ( ), oclc  ; schirmer ( ), oclc  , , isbn  - - - - ; da capo press (paperback) ( ), oclc  , isbn  - - - - ^ underneath a harlem moon: the harlem to paris years of adelaide hall (chapter : "the chocolate kiddies come to town"), by iain cameron williams, continuum ( ), pps. – & ; oclc  , isbn  - - - - ^ black people: entertainers of african descent in europe and germany, by rainer erich lotz (de) (re: "chocolate kiddies"), birgit lotz verlag ( ), pps. & ^ "louis armstrong, eccentric dance, and the evolution of jazz on the eve of swing," by brian harker, journal of the american musicological society, vol. , no. , spring , p. (of pps. – ) (accessible via jstor at www.jstor.org/stable/ . /jams. . . . ) ^ "blues is my business," by victoria spivey ( – ), record research, robert colton & len kunstadt, eds., issue , october , begins on p.
; issn  - ^ "pretty manicurist" (photo of vincson), topeka plain dealer, vol. , no. , november , , p. (accessible via genealogybank.com at www.genealogybank.com/nbshare/ac ; subscription required) ^ "chocolate kiddies: the show that brought jazz to europe and russia in ," by björn englund (sv) (born ), storyville, december –january , pps. –
mit license
from wikipedia, the free encyclopedia

permissive free software license
publisher: massachusetts institute of technology
spdx identifier: mit (see list for more)[ ]
debian fsg compatible: yes[ ]
fsf approved: yes[ ][ ]
osi approved: yes[ ]
gpl compatible: yes[ ][ ]
copyleft: no[ ][ ]
linking from code with a different licence: yes

the mit license is a permissive free software license originating at the massachusetts institute of technology (mit)[ ] in the late s.[ ] as a permissive license, it puts only very limited restrictions on reuse and has, therefore, high license compatibility.[ ][ ] the wikipedia and wikimedia commons projects use the alternative name expat license. the mit license is compatible with many copyleft licenses, such as the gnu general public license (gpl); mit licensed software can be re-licensed as gpl software, and integrated with other gpl software, but not the other way around.[ ] the mit license also permits reuse within proprietary software, provided that either all copies of the licensed software include a copy of the mit license terms and the copyright notice, or the software is re-licensed to remove this requirement. mit-licensed software can also be re-licensed as proprietary software,[ ][ ] which distinguishes it from copyleft software licenses. as of , mit was the most popular software license found in one analysis,[ ] continuing from reports in that mit was the most popular software license on github, ahead of any gpl variant and other free and open-source software (foss) licenses.[ ] notable projects that use the mit license include the x window system, ruby on rails, nim, node.js, lua and jquery. notable companies using the mit license include microsoft (.net core), google (angular) and facebook (react).
license terms[edit]

the mit license has the identifier mit in the spdx license list.[ ][ ] it is also known as the "expat license".[ ] it has the following terms:[ ]

copyright (c)

permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "software"), to deal in the software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the software, and to permit persons to whom the software is furnished to do so, subject to the following conditions:

the above copyright notice and this permission notice shall be included in all copies or substantial portions of the software.

the software is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. in no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the software or the use or other dealings in the software.
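the spdx identifier above is what lets automated tooling recognize the license without matching its full text; many projects declare it in a machine-readable comment at the top of each source file. as a minimal sketch (the `spdx_identifier` helper below is hypothetical, not part of any spdx tool):

```python
def spdx_identifier(source_text):
    """Return the SPDX license identifier declared in a source file, if any.

    Scans for the conventional "SPDX-License-Identifier:" comment marker
    and returns whatever follows it, stripped of surrounding whitespace.
    """
    marker = "SPDX-License-Identifier:"
    for line in source_text.splitlines():
        if marker in line:
            return line.split(marker, 1)[1].strip()
    return None


# A file header declaring the MIT license in machine-readable form.
header = "# SPDX-License-Identifier: MIT\n# copyright (c) the authors\n"
print(spdx_identifier(header))  # prints: MIT
```

this kind of per-file declaration is a convention layered on top of the license itself; the license text still has to be shipped with the software to satisfy the notice condition quoted above.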
variations[edit]

x [edit]

the x license, a variation of the mit license, has the identifier x in the spdx license list.[ ][ ] it is also known as the "mit/x consortium license" by the x consortium (for x ).[ ] it has the following terms:[ ]

copyright (c) x consortium

permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "software"), to deal in the software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the software, and to permit persons to whom the software is furnished to do so, subject to the following conditions:

the above copyright notice and this permission notice shall be included in all copies or substantial portions of the software.

the software is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. in no event shall the x consortium be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the software or the use or other dealings in the software.

except as contained in this notice, the name of the x consortium shall not be used in advertising or otherwise to promote the sale, use or other dealings in this software without prior written authorization from the x consortium.

x window system is a trademark of x consortium, inc.

other variations[edit]

the spdx license list contains extra mit license variations. examples include:[ ]

mit- , a variation with the attribution paragraph removed.
mit-advertising, a variation with an additional advertising clause.
minor ambiguity and variants[edit]

massachusetts institute of technology has been using many licenses for software since its creation, so the phrase "the mit license" is theoretically ambiguous.[ ] for example, mit offers four licensing options for the fftw[ ] c source code library, one of which is the gpl v . and the other three of which are not open-source. "mit license" may refer to the expat license (used for the xml parsing library expat)[ ] or to the x license (also called "mit/x consortium license"; used for x window system by the mit x consortium).[ ] the "mit license" published by the open source initiative[ ] is the same as the "expat license". the x consortium was dissolved late in , and its assets transferred to the open group,[ ] which released x r initially under the same license. the x license[ ] and the x r "mit license" chosen for ncurses by the free software foundation[ ] both include the following clause, absent in the expat license:[ ]

except as contained in this notice, the name(s) of the above copyright holders shall not be used in advertising or otherwise to promote the sale, use or other dealings in this software without prior written authorization.

as of , the successor to the x window system is the x.org server, which is licensed under what is effectively the common mit license, according to the x.org licensing page:

the x.org foundation has chosen the following format of the mit license as the preferred format for code included in the x window system distribution. this is a slight variant of the common mit license form published by the open source initiative.

the "slight variant" is the addition of the phrase "(including the next paragraph)". some ignore this history, referring to only one mit license, as illustrated by github's licensing advice and the legal text for the mit license at github's service choosealicense.com.
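the textual relationship among these variants can be summed up mechanically: the x -style texts are the expat paragraphs plus one extra non-endorsement clause. a minimal sketch, using abbreviated placeholder paragraphs rather than the full license wording, so this is illustrative only:

```python
# Illustrative sketch: the X11-style variants consist of the Expat
# paragraphs plus one additional non-endorsement clause. The paragraph
# strings are abbreviated placeholders, not the real license text.

expat = {
    "permission is hereby granted ...",
    "the above copyright notice and this permission notice ...",
    'the software is provided "as is" ...',
}

x11_style = expat | {
    "except as contained in this notice, the name(s) of the above "
    "copyright holders shall not be used in advertising ...",
}

# The set difference isolates the clause the variant adds.
extra_clauses = x11_style - expat
print(len(extra_clauses))  # prints: 1
```

a paragraph-level comparison like this is roughly what license-matching tools do when classifying a notice as expat versus x -style.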
comparison to other licenses[edit]

bsd[edit]

the original bsd license also includes a clause requiring all advertising of the software to display a notice crediting its authors. this "advertising clause" (since disavowed by uc berkeley[ ]) is present in the modified mit license used by xfree . the university of illinois/ncsa open source license combines text from both the mit and bsd licenses; the license grant and disclaimer are taken from the mit license. the isc license contains similarities to both the mit and simplified bsd licenses, the biggest difference being that language deemed unnecessary by the berne convention is omitted.[ ][ ]

gnu gpl[edit]

the gnu gpl is explicit about the patent grant an author would be giving when the code (or derivative work) is distributed;[ ] the mit license does not discuss patents. moreover, the gpl license impacts "derivative works", but the mit license does not.

relation to patents[edit]

like the bsd license, the mit license does not include an express patent license, although some commentators[ ][ ] state that the grant of rights covers all potential restrictions including patents. both the bsd and the mit licenses were drafted before the patentability of software was generally recognized under us law.[ ] the apache license version . [ ] is a similarly permissive license that includes an explicit contributor's patent license. of specific relevance to us jurisdictions, the mit license uses the terms "sell" and "use" that are also used in defining the rights of a patent holder in title of the united states code section . this has been construed by some commentators[ ][ ] as an unconventional but implicit license in the us to use any underlying patents.
origins[edit]

one of the originators of the mit license, computer scientist jerry saltzer, has published his recollections of its early development, along with documentary evidence.[ ] see also [ ].

reception[edit]

as of , according to whitesource software,[ ] the mit license was used in % of four million open source packages. as of , according to black duck software[ ][better source needed] and a blog[ ] from github, the mit license was the most popular free software license, with the gnu gplv coming second in their sample of repositories.

see also[edit]

free and open-source software portal
comparison of free and open-source software licenses
isc license – similar to the mit license, but with language deemed unnecessary removed
category:software using the mit license

references[edit]

^ a b "spdx license list". spdx.org. spdx working group. ^ "license information". the debian project. software in the public interest (published july , ). – . archived from the original on july , . retrieved july , . ... this page presents the opinion of some debian-legal contributors on how certain licenses follow the debian free software guidelines (dfsg). ... licenses currently found in debian main include: ... expat/mit-style licenses ... ^ a b c d "various licenses and comments about them". the gnu project. free software foundation (published april , ). – . expat license. archived from the original on july , . retrieved july , . ... this is a lax, permissive non-copyleft free software license, compatible with the gnu gpl. it is sometimes ambiguously referred to as the mit license. ... ^ a b c "various licenses and comments about them". the gnu project. free software foundation (published april , ). – . x license. archived from the original on july , . retrieved july , . ... this is a lax permissive non-copyleft free software license, compatible with the gnu gpl. ...
this license is sometimes called the mit license, but that term is misleading, since mit has used many licenses for software. ... ^ "licenses by name". open source initiative. n.d. archived from the original on july , . retrieved july , . ... the following licenses have been approved by the osi. ... ... mit license (mit) ... ^ certified copyfree licenses ^ lawrence rosen, open source licensing, p. (prentice hall ptr, st ed. ) ^ a b "the mysterious history of the mit license". opensource.com. opensource.com. retrieved july , . the date? the best single answer is probably . but the complete story is more complicated and even a little mysterious. [...] precursors from . the x consortium or x license variant from . or the expat license from or . ^ hanwell, marcus d. (january , ). "should i use a permissive license? copyleft? or something in the middle?". opensource.com. retrieved may , . permissive licensing simplifies things one reason the business world, and more and more developers [...], favor permissive licenses is in the simplicity of reuse. the license usually only pertains to the source code that is licensed and makes no attempt to infer any conditions upon any other component, and because of this there is no need to define what constitutes a derived work. i have also never seen a license compatibility chart for permissive licenses; it seems that they are all compatible. ^ a b "licence compatibility and interoperability". open-source software - develop, share, and reuse open source software for public administrations. joinup.ec.europa.eu. archived from the original on june , . retrieved may , . the licences for distributing free or open source software (foss) are divided in two families: permissive and copyleft. 
permissive licences (bsd, mit, x , apache, zope) are generally compatible and interoperable with most other licences, tolerating to merge, combine or improve the covered code and to re-distribute it under many licences (including non-free or 'proprietary'). ^ "various licenses and comments about them". free software foundation. retrieved july , . ^ "paid software includes mit licensed library, does that put my app under mit too?". stackexchange.com. retrieved july , . ^ a b "open source licenses in : trends and predictions". may , . archived from the original on may , . retrieved may , . ^ a b balter, ben (march , ). "open source license usage on github.com". github.com. retrieved november , . mit . %, other . % ^ "mit license". spdx.org. spdx working group. ^ a b "open source initiative osi – the mit license:licensing". open source initiative. retrieved december , . ^ a b c "various licenses and comments about them#expat license". free software foundation. retrieved december , . ^ "mit license explained in plain english - tldrlegal". tldrlegal.com. retrieved july , . ^ "x license". spdx.org. spdx working group. ^ a b c "various licenses and comments about them#x license". free software foundation. retrieved december , . ^ " . . x consortium", . x/mit licenses, the xfree project, march ^ "x license explained in plain english - tldrlegal". tldrlegal.com. retrieved march , . ^ "various licenses and comments about them". gnu project. retrieved july , . ^ "fftw - fastest fourier transform in the west". massachusetts institute of technology. retrieved july , . ^ dickey, thomas e. "copyrights/comments". retrieved october , . ^ dickey, thomas e. "ncurses — frequently asked questions (faq)". ^ "to all licensees, distributors of any version of bsd". university of california, berkeley. july , . retrieved november , . ^ "copyright policy". openbsd. retrieved june , . 
the isc copyright is functionally equivalent to a two-term bsd copyright with language removed that is made unnecessary by the berne convention. ^ de raadt, theo (march , ). "re: bsd documentation license?". openbsd-misc (mailing list). ^ "patents and gplv - fsfe". fsfe - free software foundation europe. retrieved december , . ^ "why so little love for the patent grant in the mit license?". january , . archived from the original on january , . retrieved january , . ^ "free and open source software and your patents". may , . archived from the original on may , . retrieved may , . ^ stern and allen, open source licensing, p. in understanding the intellectual property license (practicing law institute ) ^ "the mit license, line by line". may , . archived from the original on may , . retrieved may , . ^ christian h. nadan ( ), "closing the loophole: open source licensing & the implied patent license", the computer & internet lawyer, aspen law & business, ( ), by using patent terms like "deal in", "use", and "sell", the mit license grant is more likely to be deemed to include express patent rights than the bsd license. ^ saltzer, jerome h (november , ). "the origin of the "mit license"". ieee annals of the history of computing. ( ): – . doi: . /mahc. . . issn  - . retrieved november , . ^ "top licenses". black duck software. november , . archived from the original on september , . retrieved november , . . mit license %, . gnu general public license (gpl) . % further reading[edit] mitchell, kyle e. (september , ). "the mit license, line by line". /dev/lawyer. archived from the original on september , . retrieved september , . 
external links[edit]

mit license variants
the mit license template (open source initiative official site)
expat license
x license
file:altair.jpg (from wikimedia commons, the free media repository) summary description: the star altair. source: http://photojournal.jpl.nasa.gov/catalog/pia author: nasa/jpl/caltech/steve golden. this image or video was catalogued by jet propulsion laboratory of the united states national aeronautics and space administration (nasa) under photo id: pia . this tag does not indicate the copyright status of the attached work; a normal copyright tag is still required. see commons:licensing. licensing public domain: this file is in the public domain in the united states because it was solely created by nasa. nasa copyright policy states that "nasa material is not protected by copyright unless noted". (see template:pd-usgov, nasa copyright policy page or jpl image use policy.) warnings: use of nasa logos, insignia and emblems is restricted per u.s. law cfr . the nasa website hosts a large number of images from the soviet/russian space agency, and other non-american space agencies. these are not necessarily in the public domain.
materials based on hubble space telescope data may be copyrighted if they are not explicitly produced by the stsci.[ ] see also {{pd-hubble}} and {{cc-hubble}}. the soho (esa & nasa) joint project implies that all materials created by its probe are copyrighted and require permission for commercial non-educational use.[ ] images featured on the astronomy picture of the day (apod) web site may be copyrighted.[ ] the national space science data center (nssdc) site has been known to host copyrighted content. its photo gallery faq states that all of the images in the photo gallery are in the public domain "unless otherwise noted."
platform cooperative a platform cooperative, or platform co-op, is a cooperatively owned, democratically governed business that establishes a computing platform, and uses a website, mobile app or a protocol to facilitate the sale of goods and services. platform cooperatives are an alternative to venture capital-funded platforms insofar as they are owned and governed by those who depend on them most: workers, users, and other relevant stakeholders.
typology while there is no commonly accepted typology of platform cooperatives, researchers often ontologize platform cooperatives by industry. some potential categories include: transportation, on-demand labor, journalism, music, creative projects, timebank, film, home health care, photography, data cooperatives, marketplaces.[ ] other typologies differentiate platform cooperatives by their governance or ownership structures. platform cooperatives have been contrasted with platform capitalism. companies that try to focus on fairness and sharing, instead of just the profit motive, are described as cooperatives, whereas more traditional and common companies that focus solely on profit, like airbnb and uber, are platform capitalists (or cooperativist platforms vs. capitalist platforms). in turn, projects like wikipedia, which rely on the unpaid labor of volunteers, can be classified as commons-based peer-production initiatives.[ ] examples many platform co-operatives use business models similar to better-known apps or web services, but with a cooperative structure. for example, there are numerous driver-owned taxi apps that allow customers to submit trip requests and notify the nearest driver, similar to uber.[ ][ ] the internet of ownership website includes a directory of the platform co-op "ecosystem".[ ] eva [ ] is a ride-sharing application that offers a service similar to uber, but in line with its cooperative members' priorities: cheaper for rider members and better wages for driver members.[ ] fairbnb.coop [ ] is an online marketplace and hospitality service for people to lease or rent short-term lodging.
it is foremost a community of activists, coders, researchers and designers working to create a platform that enables hosts and guests to connect for travel and cultural exchange, while minimizing the cost to communities. it is an alternative to commercial platforms.[ ] fairmondo is an online marketplace for ethical goods and services that originated in germany and has expanded to the uk. joining as a stakeholder is open to all, and the minimum share is limited to an affordable amount, with stakeholders exercising democratic control through a one-member-one-vote principle.[ ] it is a cooperative alternative to amazon and ebay.[ ] green taxi cooperative is the largest taxi company in the denver metro area.[ ] organized by the communications workers of america local , its members buy into the cooperative for a one-time membership fee of $ and then pay fees amounting to a "fraction" of what large companies charge drivers.[ ] despite having a mobile application through which riders can schedule pickups, and thus competing directly with ride-hailing applications like uber and lyft, as of november the green taxi cooperative reportedly held % market share in denver.[ ] meet.coop is an open-source meeting and conferencing tool.[ ] midata is a cooperatively owned, zurich-based online platform that seeks to serve as an exchange for members' medical data. using an open-source application, members are able to securely share their medical data with doctors, friends, and researchers, and are provided access to "data analysis, visualization and interpretation tools". members can also consent to their data's use in medical research and clinical trials.
in a pilot project, post-bariatric-surgery patients are able to upload data to the platform, including their weight and daily step count, and follow their own post-surgery progress.[ ] savvy cooperative is a multi-stakeholder, patient-owned research insights cooperative that seeks to match patients with patient engagement leaders, digital health companies, and clinical innovation leads, enabling industry and start-up tech companies to easily conduct user research with patients to ensure the products that go to market are patient-centric and focused on patient needs. using savvy's platform, patients can find and apply for gigs that match their conditions, get reimbursed for their participation, and qualify for dividends based on their co-op participation. savvy is majority patient-owned.[ ] stocksy united is a platform cooperative headquartered in victoria, british columbia. it is a "highly curated collection of royalty-free stock photography and video footage that is 'beautiful, distinctive, and highly usable.'"[ ] in , stocksy earned $ . million in sales, doubling its revenues from the year prior, and paid a dividend of $ , to its members.[ ] up & go is a digital marketplace for professional home services that allows users to schedule services such as house cleaning, dog walking, and handywork with worker-owned businesses that have fair work practices.[ ][ ] resonate [ ] is a music streaming co-op [ ] similar to spotify. collective tools is a cooperatively owned cloud service that offers storage, communication and canvas boards to organisations, as well as storage and email to private persons.[ ] platform co-operativism platform cooperativism is an intellectual framework and movement which advocates for the global development of platform cooperatives.
its advocates object to the techno-solutionist claim that technology is, by default, the answer to all social problems.[ ][ ][ ] rather, proponents of the movement claim that ethical commitments such as the building of the global commons, support of inventive unions, and promotion of ecological and social sustainability as well as social justice, are necessary to shape an equitable and fair social economy.[ ] platform cooperativism advocates for the coexistence of cooperatively owned business models and traditional, extractive models with the goal of a more diversified digital labor landscape respecting fair working conditions.[ ] platform cooperativism draws upon other attempts at digital disintermediation, including the peer-to-peer production movement, led by michel bauwens, vasilis kostakis and the p p foundation,[ ] which advocates for "new kinds of democratic and economic participation"[ ] that rest "upon the free participation of equal partners, engaged in the production of common resources", as well as the radically distributed, non-market mechanisms of networked peer-production promoted by yochai benkler.[ ] marjorie kelly's book owning our future contributed the distinction between democratic and extractive ownership design to this discussion.[ ] while platform cooperatives are structured as cooperatives, granting democratic control to workers, customers, users, or other key stakeholders, companies and initiatives that support the ecosystem of the cooperative platform economy are considered a part of the platform cooperativism movement insofar as they attempt to encourage, develop, and sustain its development. 
it has also been argued that, as the spread of platform cooperativism "will require a different kind of ecosystem -- with appropriate forms of finance, law, policy, and culture -- to support the development of democratic online enterprises", any person or business associated with the development of this ecosystem can be considered a proponent of platform cooperativism.[ ] history of the term the term "platform cooperativism" was coined by new school professor trebor scholz in a article titled "platform cooperativism vs. the sharing economy", in which he criticized popular sharing economy platforms and called for the creation of democratically controlled cooperative alternatives that "allow workers to exchange their labor without the manipulation of the middleman".[ ] shortly thereafter, journalist nathan schneider published an article, "owning is the new sharing", which documented a variety of projects using cooperative models for digitally mediated commerce, as well as online, distributed funding-models which hoped to replace the venture capital model predominant in the technology sector.[ ] both scholz and schneider would later credit the work and provocations of other researchers and digital-labor advocates as their inspiration, including, among others, lawyer janelle orsi of the sustainable economies law center, who had "called on technology companies in the sharing economy to share ownership and profits with their users", and amazon mechanical turk organizer kristy milland, who had proposed a worker-owned alternative to the platform at the "digital labor: sweatshops, picket lines, barricades" conference in november .[ ][ ] there are several other precursors to platform cooperativism.
in , the italian cooperative federation legacoop promulgated a manifesto on the "cooperative commons", which called for bringing the lessons of the cooperative movement to control over online data.[ ] the same year, mayo fuster morell published an article named "horizons of digital commons" in which she pointed to the evolution of commons-based peer production merging with cooperatives and the social economy.[ ] the article reflects on an event named building digital commons, which took place in october . it was the goal of the event to further connect the cooperative tradition and collaborative production. other earlier terms for new forms of cooperativism, such as "open cooperativism",[ ] as well as studies of how the digital environment opens up new possibilities for the cooperative tradition,[ ] are also relevant to the term platform cooperativism. in , scholz published a primer on platform cooperativism, "platform cooperativism: challenging the corporate sharing economy", which was published in five languages[ ] and helped to internationalize the concept.[ ] in , he published uberworked and underpaid: how workers are disrupting the digital economy,[ ] which further developed the concept. together, scholz and schneider went on to convene an event on the subject, "platform cooperativism. the internet. ownership. democracy", at the new school in november ,[ ] and edit a book, ours to hack and to own: the rise of platform cooperativism, a new vision for the future of work and a fairer internet.[ ] roots in criticism of the sharing economy proponents of platform cooperativism claim that, by ensuring the financial and social value of a platform circulates among its participants, platform cooperatives will bring about a more equitable and fair digitally mediated economy in contrast with the extractive models of corporate intermediaries.
the concept of platform cooperativism emerged from the discourse surrounding digital labor, popular in the late s and early s, which critiqued the use of digitally mediated labor markets to evade traditional labor protections.[ ] early studies of digital labor, using the theories of italian workerists, focused on the "free" or "immaterial" labor performed by users of web . platforms (sometimes referred to as "playbor"), while later analyses served to critique the "crowd fleecing"[ ][ ] of digital laborers by microtask labor-brokerages such as amazon mechanical turk and crowdflower.[ ] in , the digital labor discourse shifted to the so-called "sharing economy", resulting in an increase in both academic and media attention to the practices and policies of online markets for labor, services, and goods.[ ] researchers and labor advocates argued that platforms such as uber and taskrabbit were unfairly classifying full-time workers as independent contractors rather than employees, thus avoiding legally granted labor protections such as minimum-wage laws[ ][ ][ ] and the right to join a union with which to engage in collective bargaining,[ ] as well as different benefits offered to workers with employee status, including time off, unemployment insurance, and healthcare.[ ] other research focused on the automated management of the digital workplace by algorithms, without worker recourse. for example, the wage-per-mile earned by drivers on the uber platform is controlled moment-to-moment by a surge pricing algorithm,[ ][ ] and its drivers can lose their jobs if they fall behind on any one of a number of metrics logged by the platform, including ride-acceptance percentage (minimum %) and customer rating ( . out of ).[ ] sharing economy workers who complained about this algorithmic management were often ignored (e.g.
a taskrabbit discussion forum was shut down in response to worker unrest)[ ] and sometimes told that, insofar as the platform owners do not employ their contracted labor force, they were not actually being managed by the technology companies behind the platforms on which they worked.[ ][ ] insofar as platform cooperatives offer worker-owners a more robust degree of control over the platforms they use, the model was seen as offering an ethical alternative to existing sharing economy platforms.[ ] as these early critiques of the sharing economy remain relevant, platform cooperatives tend to highlight their efforts to provide their worker-owners with a living wage or a fair share of revenues, benefits, control over the platform's design, and democratic influence over the management of the cooperative business. public policy the platform cooperativism movement has seen a number of global policy proposals and successes. spain barcelona barcelona has a long tradition of connecting cooperativism and collaborative production.[ ] as long ago as october , an event was held to "promote dialogue between the cooperative tradition and digital commons".[ ] the social economy and consumption commission of barcelona city council in started a program on platform cooperativism.[ ] the program includes the provision of match-funding to support entrepreneurship and "la communificadora", an entrepreneurship training and support course, among others.[ ] a march international event by barcola (node about collaborative economy and commons-based peer production in barcelona) produced a set of policy proposals for european governments.
the resulting document, integrated as concrete actions into the municipal action plan of the barcelona city council following a consultative online participatory process, and aimed also at other local authorities in spain and the government of catalonia, criticized the organizational rationale of "multinational corporations based in silicon valley" which, though similar to collaborative-commons economic models, "behave in the style of the prevailing globalized capitalist economic model, based on extracting profits through networked collaboration". that joint statement of public policies for the collaborative economy, which integrates a commons-oriented vision in such an emerging paradigm, claimed that by privatizing certain aspects of the collaborative-commons model these companies created "severe inequalities and loss of rights". the organization and participants at the event proposed the creation of favorable regulations for truly collaborative economic models, with measures like the funding of an incubator of new projects in the collaborative economy, including platform cooperativism, as well as the reassigning of public spaces for jointly managed working and manufacturing spaces. embedded in a wider framework of action research for the co-design of public policies, some of these policy proposals have been met with support from members of the barcelona city government. outputs from that process have resulted in specific measures like the incubation of new collaborative economy initiatives following a cooperativist model, or the possibility of new funding schemes for civic projects via transparent "match-funding".
united states nyc council member brad lander of brooklyn's th district, founding co-chair of the council's progressive caucus, released a report in entitled "raising the floor for workers in the gig economy: tools for nyc & beyond",[ ] which analyzes the contingent work sector in new york city and "presents policy tools for cities seeking to protect gig workers from wage theft and discrimination, provide access to portable benefits, and establish new frameworks for worker organizing".[ ] under his leadership, the nyc council unanimously passed the "freelance isn't free act", which provides freelance workers with a right to full and timely payment, along with new tools for enforcement, and amendments to the nyc human rights law to clarify that employment protections apply to independent and contingent workers.[ ] in his report, lander presented platform cooperativism as a model to help laborers in the digital economy.[ ][ ] the us department of agriculture appeared to offer its support for the platform cooperativism movement with a feature story in the september/october issue of its magazine, rural cooperatives.[ ] "rural americans have been organizing cooperatives to develop countervailing economic power against larger investor-owned corporations for more than a century. this cooperative movement has now moved into the sharing economy that has been developing throughout the country. wherever investor-owners of software platforms are satisfying the needs of rural asset owners and users, the sharing economy will be welcomed. however, when the need arises, cooperatively-owned software platforms are proving to be a viable alternative." united kingdom in , jeremy corbyn, leader of the labour party and the opposition in the united kingdom, released a digital democracy manifesto calling for, among other policies, the fostering of "the cooperative ownership of digital platforms for distributing labour and selling services".
he proposed that the national investment bank, as well as regional banks, would "finance social enterprises whose websites and apps are designed to minimise the costs of connecting producers with consumers in the transport, accommodation, cultural, catering and other important sectors of the british economy".[ ] advocacy organizations platform cooperativism consortium (pcc) the platform cooperativism consortium is a "think-and-do tank"[ ] for the platform cooperativism movement based at the new school in new york city.[ ] as a "global network of researchers, platform co-ops, independent software developers, artists, designers, lawyers, activists, publishing outlets, and funders",[ ] it engages in research, advocacy, education, and technology-based projects. it was launched in november on the occasion of the "building the cooperative internet" conference.[ ] the internet of ownership the internet of ownership is a website which maintains a global directory of platform cooperatives[ ] and a calendar of events[ ] concerning the platform cooperativism movement. it is maintained by nathan schneider and devin balkind.[ ] campaigns in september , nathan schneider wrote the article "here's my plan to save twitter: let's buy it",[ ] in which he asked "what if users were to band together and buy twitter for themselves?" once in users' hands, schneider suggested, twitter could be turned into a platform co-op. criticisms of the viability of platform cooperatives dominance of established players some critics of platform cooperativism claim that platform cooperatives will have trouble challenging established, venture-capital-funded platforms.
nick srnicek writes that, due to "the monopolistic nature of platforms, the dominance of network effects, and the vast resources behind these companies … even if all that software would be made open-source, a platform like facebook would still have the weight of its existing data, network effects, and financial resources to fight off any co-op arrival."[ ] rufus pollock expresses similar concerns that platform co-ops will face major challenges reaching adequate scale, particularly given their inability to raise traditional equity capital.[ ] in addition, he argues that co-ops often have slow and inefficient decision-making processes which will hamper their ability to compete successfully. finally, he points out there is a risk that platform co-ops will go "bad", becoming an exclusive club for their members (for example, a ride-sharing co-op might end up controlled only by drivers who then exploit consumers). evgeny morozov writes that "efforts at platform cooperativism are worthwhile; occasionally, they do produce impressive and ethical local projects. there is no reason why a cooperative of drivers in a small town cannot build an app to help them beat uber locally. but there is also no good reason to believe that this local cooperative can actually build a self-driving car: this requires massive investment and a dedicated infrastructure to harvest and analyze all of the data. one can, of course, also create data ownership cooperatives but it's unlikely they will scale to a point of competing with google or amazon."[ ] while this may be true in certain sectors, arun sundararajan claims that, "economic theory suggests that worker cooperatives are more efficient than shareholder corporations when there isn't a great deal of diversity in the levels of contribution across workers, when the level of external competition is low, and when there isn't the need for frequent investments in response to technological change."
using uber as an example of a dominant platform, he continues: "cab drivers, after all, offer a more or less uniform service in an industry with a limited amount of competition. once the technology associated with 'e-hail' is commoditized, the potential for a worker cooperative appears to be in place, since each local market is contestable."[ ] regardless, the possibility of dominant platforms turning the flows of data they receive from their larger user-bases into market-securing technological innovations remains a challenge. for example, uber seeks to use the data they currently collect from drivers using their app to automate the taxi industry, thus eliminating the need for their workforce altogether and likely dropping the value of a ride below that on which a human laborer can survive. difficulty securing early-stage capital though sundararajan believes there are markets in which platform cooperatives might thrive, he finds their primary barrier to entry to be the initial securing of funds, especially given their ideological devaluation of the need to generate profits for investor-stakeholders. he does note, however, that a number of alternative fundraising models may pave the way for the widespread market-entry of platform cooperatives. among those he mentions is fairshare, a stakeholder model that differentiates between founders, workers, users, and investors, each with distinct voting rights, payouts, and permissions to trade shares on the open market. other models he mentions include crypto-coin crowdfunding, philanthropic investment, and "provider stock ownership programs" that mimic the traditional joint-ownership form of the "employee stock ownership program".[ ] see also commons-based peer production cooperative sharing economy solidarity economy unionized cooperative workers' self-management references ^ scholz, trebor ( december ). "the prospects of platform cooperativism". slideshare. retrieved december .
^ dariusz jemielniak; aleksandra przegalinska ( february ). collaborative society. mit press. isbn  - - - - . ^ "profile of people's ride: a co-operative, driver-owned alternative to uber". august . retrieved november . ^ schneider, nathan. "denver taxi drivers are turning uber's disruption on its head". the nation. retrieved november . ^ "the internet of ownership: directory". ^ "eva". retrieved march . ^ "eva". eva.coop. retrieved - - . ^ "fairbnb.coop". retrieved march . ^ "a smart and fair solution for community-powered tourism". fairbnb.coop. retrieved - - . ^ "welcome to fairmondo". fairmondo. retrieved december . ^ scholz, trebor ( ). platform cooperativism: challenging the sharing economy (pdf). new york: rosa luxemburg stiftung. ^ schneider, nathan ( september ). "denver taxi drivers are turning uber's disruption on its head". the nation. retrieved december . ^ kenny, andrew ( december ). "a third of denver's taxi drivers have joined green taxi cooperative to fight uber". denverite. retrieved december . ^ "archived copy". green taxi co-op. archived from the original on february . retrieved december . ^ "meet coop". ^ midata. https://www.midata.coop/. retrieved december . ^ "this co-op wants to put money back into patients' hands". techcrunch. retrieved - - . ^ a b pontefract, dan ( october ). "platform cooperatives like stocksy have a purpose uber and airbnb never will". forbes. retrieved december . ^ up & go. http://www.upandgo.coop/. retrieved december . ^ "archived copy". archived from the original on october . retrieved december . ^ "resonate". retrieved march . ^ "fairshares association: the association for multi-stakeholder co-operation in member-owned social enterprises". retrieved march . ^ "collective.tools | everything you need to get organised – in one place". collective.tools. retrieved - - .
^ Scholz, Trebor ( April ). "Think outside the boss: cooperative alternatives to the sharing economy". Public Seminar. Retrieved December .
^ Schneider, Nathan ( December ). "An Internet of Ownership: democratic design for the online economy". The Internet of Ownership. Archived from the original on December . Retrieved December .
^ O'Dwyer, Rachel ( ). In Scholz, Trebor; Schneider, Nathan (eds.). Ours to Hack and Own: The Rise of Platform Cooperativism, a New Vision for the Future of Work and a Fairer Internet. New York: OR Books. p.  .
^ "Mission". Platform Cooperativism Consortium. Retrieved December .
^ Scholz, Trebor ( ). Uberworked and Underpaid: How Workers Are Disrupting the Digital Economy. New York City: Polity. Part II.
^ "P2P Foundation". P2P Foundation. Retrieved December .
^ "Our story". P2P Foundation. Archived from the original on February . Retrieved December .
^ a b Scholz, Trebor ( December ). "Platform cooperativism vs. the sharing economy". Medium. Retrieved December .
^ Kelly, Marjorie ( ). Owning Our Future. Berrett-Koehler Publishers. ISBN  - .
^ Schneider, Nathan; Scholz, Trebor ( ). "Introduction". In Scholz, Trebor; Schneider, Nathan (eds.). Ours to Hack and Own: The Rise of Platform Cooperativism, a New Vision for the Future of Work and a Fairer Internet. New York: OR Books.
^ Schneider, Nathan ( December ). "Owning is the new sharing". Shareable. Retrieved December .
^ Schneider, Nathan ( ). "The meanings of words". In Scholz, Trebor; Schneider, Nathan (eds.). Ours to Hack and Own: The Rise of Platform Cooperativism, a New Vision for the Future of Work and a Fairer Internet. New York: OR Books.
^ Sifry, Micah L. ( October ). "A conversation with Trebor Scholz on the rise of platform cooperativism". Civic Hall. Retrieved December .
^ "Digital commons: manifesto". Legacoop. Archived from the original on - - . Retrieved - - .
^ Fuster Morell, Mayo (September ). "Horizontes del procomún digital" (PDF).
Archived from the original (PDF) on April . Retrieved April .
^ Bauwens, M.; Kostakis, V. ( ). "From the communism of capital to capital for the commons: towards an open co-operativism". tripleC. ( ): – . doi: . /triplec.v i . .
^ De Peuter & Dyer-Witheford ( ). "Commons and cooperatives". Affinities: A Journal of Radical Theory, Culture, and Action. ( ).
^ "The platform cooperativism primer". platform.coop. Archived from the original on March . Retrieved December .
^ "Contributors". Platform Cooperativism Consortium. Retrieved December .
^ Scholz, Trebor ( ). Uberworked and Underpaid: How Workers Are Disrupting the Digital Economy. New York City: Polity.
^ "Platform cooperativism. The Internet. Ownership. Democracy". platform.coop. Retrieved December .
^ Scholz, Trebor; Schneider, Nathan, eds. ( ). Ours to Hack and Own: The Rise of Platform Cooperativism, a New Vision for the Future of Work and a Fairer Internet. New York: OR Books.
^ Sifry, Micah L. ( October ). "A conversation with Trebor Scholz on the rise of platform cooperativism". Civic Hall. Retrieved December .
^ Scholz, Trebor ( ). "Chapter ". Uberworked and Underpaid: How Workers Are Disrupting the Digital Economy. New York City: Polity.
^ Terranova, Tiziana ( ). Network Culture: Politics for the Information Age. Pluto Press. ISBN  - - - .
^ Scholz, Trebor ( ). "How platform cooperativism can unleash the network". In Scholz, Trebor; Schneider, Nathan (eds.). Ours to Hack and Own: The Rise of Platform Cooperativism, a New Vision for the Future of Work and a Fairer Internet. New York: OR Books. p.  .
^ Scholz, Trebor ( December ). "Platform cooperativism vs. the sharing economy". Medium. Retrieved December .
^ Cheng, Denise (October ). Is Sharing Really Caring? A Nuanced Introduction to the Peer Economy (PDF).
^ "Sharing economy . : can innovation and regulation work together?". Knowledge@Wharton. November . Retrieved December .
^ Bauwens, Michel ( October ).
"How Uber drivers, making less than the minimum wage, are organizing with assistance of taxi drivers". P2P Foundation. Retrieved December .
^ Smith, Rebecca; Leberstein, Sarah (September ). Rights on Demand: Ensuring Workplace Standards and Worker Security in the On-Demand Economy (PDF). National Employment Law Project. p.  . Archived from the original (PDF) on - - . Retrieved - - .
^ Smith, Rebecca; Leberstein, Sarah (September ). Rights on Demand: Ensuring Workplace Standards and Worker Security in the On-Demand Economy (PDF). National Employment Law Project. p.  . Archived from the original (PDF) on - - . Retrieved - - .
^ Rosenblat, Alex; Stark, Luke ( October ). Uber's Drivers: Information Asymmetries and Control in Dynamic Work (PDF). Data & Society. p.  .
^ a b Slee, Tom ( November ). "Why Canada should de-activate Uber". Tom Slee. Retrieved December .
^ Weber, Harrison ( July ). "TaskRabbit users revolt as the company shuts down its bidding system". VentureBeat. Retrieved December .
^ Biddle, Sam ( July ). "If TaskRabbit is the future of employment, the employed are fucked". Gawker. Retrieved December .
^ Claburn, Thomas ( August ). "Uber drivers under algorithmic management: study". InformationWeek. Retrieved December .
^ Scholz, Trebor ( ). Platform Cooperativism: Challenging the Sharing Economy (PDF). New York: Rosa Luxemburg Stiftung.
^ Fuster, Mayo ( ). "Horizontes de procomún digital". Barcelona: Caritas.
^ "Digital commons event, October , Barcelona". Archived from the original on - - .
^ Fuster, Mayo. "Presentation Barcelona Collaborative Economy Action Plan - platform cooperativism action". Archived from the original on - - . Retrieved - - .
^ Barcelona Collaborative Economy Action Plan. http://emprenedoria.barcelonactiva.cat/emprenedoria/es/edit.do?codiidioma= &id= &id_activitat_mestre= &dia= &mes= &any= .
^ a b "Raising the floor for workers in the gig economy: tools for NYC & beyond".
Brad Lander. September . Archived from the original on March . Retrieved December .
^ a b "Brad Lander". Platform Cooperativism. Archived from the original on December . Retrieved December .
^ Lee, Adaline ( November ). "Hack the Union talks the passage of Freelance Isn't Free". Freelancer's Union. Retrieved December .
^ Borst, Alan (October ). "'Platform co-ops' gaining traction" (PDF). Rural Cooperatives magazine. Washington, D.C.: US Department of Agriculture.
^ "Digital Democracy Manifesto". Jeremy Corbyn. Archived from the original on August . Retrieved December .
^ " Areas of activities". Platform Cooperativism Consortium. Retrieved December .
^ Platform Cooperativism Consortium. http://platformcoop.newschool.edu/. Retrieved December .
^ Sharp, Darren ( November ). "International consortium launched at second platform cooperativism conference". Shareable. Retrieved December .
^ Sharp, Darren ( November ). "Contributors - Platform Cooperativism Consortium". Shareable. Retrieved December .
^ a b "Directory". The Internet of Ownership. Archived from the original on November . Retrieved December .
^ "Events". The Internet of Ownership. Archived from the original on October . Retrieved December .
^ Schneider, Nathan ( September ). "Here's my plan to save Twitter: let's buy it". The Guardian. Retrieved December .
^ Srnicek, Nick ( ). Platform Capitalism. New York City: Polity. p.  .
^ Information Coops: Collective Funding of Information Goods from Software to Medicines.
^ Morozov, Evgeny ( December ). "Data populists must seize our information – for the benefit of us all". The Guardian. Retrieved December .
^ Sundararajan, Arun ( ). The Sharing Economy: The End of Employment and the Rise of Crowd-Based Capitalism. Cambridge: MIT Press. p.  .
^ Sundararajan, Arun ( ). The Sharing Economy: The End of Employment and the Rise of Crowd-Based Capitalism. Cambridge: MIT Press. p.  .
The Open Library Blog | A web page for every book

Open Library Tags Explained—For Readers Seeking Buried Treasure
By nicknorman | Published: June ,

As part of an open-source project, the Open Library blog has a growing number of contributors: from librarians and developers to designers, researchers, and book lovers. Each contributor writes from their own perspective, sharing the contributions they're making to the Open Library catalog. To help patrons navigate such a diverse and wide range of content, the blog uses a versatile tagging system.

Posted in Community, Librarianship, Open Source | Tagged community, features, Nick Norman

Introducing the Open Library Explorer
By mek | Published: December ,

Try it here! If you like it, share it. Bringing years of librarian knowledge to life. By Nick Norman, with Drini Cami & Mek.

At the Library Leaders Forum (demo), Open Library unveiled the beta for what it's calling the Library Explorer: an immersive interface which powerfully recreates and enhances the experience of navigating a physical library.
If the tagline doesn't grab your attention, wait until you see it in action: Drini showcasing Library Explorer at the Library Leaders Forum.

Get ready to explore

In this article, we'll give you a tour of the Open Library Explorer and teach you how one may take full advantage of its features. You'll also get a crash course on the + years of library history which led to its innovation and an opportunity to test-drive it for yourself. So let's get started!

Posted in Community, Interface/Design, Librarianship | Tagged features, Nick Norman, openlibrary

Importing Your Goodreads & Accessing Them with Open Library's APIs
By mek | Published: December ,

Today Joe Alcorn, founder of Readng, published an article (https://joealcorn.co.uk/blog/ /goodreads-retiring-api) sharing news with readers that Amazon's Goodreads service is in the process of retiring their developer APIs, with an effective start date of last Tuesday, December th. The topic stirred discussion among developers and book lovers alike, making the front page of the popular Hacker News website. (Screenshot: Hacker News front page, Pacific time.)
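The post above is only a teaser, but the underlying idea can be sketched. The helper below is hypothetical (it is not code from the post): it assumes the standard Goodreads library-export CSV columns (`Title`, `ISBN13`, with ISBNs wrapped as `="…"` to stop spreadsheets mangling them) and Open Library's public `/isbn/<isbn>.json` endpoint, and maps each exported row to the URL where that edition's data can be fetched without the retired Goodreads API.

```python
import csv
import io

def openlibrary_urls(goodreads_csv_text):
    """Yield (title, url) pairs for export rows that carry an ISBN-13.

    Hypothetical sketch: Goodreads exports wrap ISBNs as ="9780140328721",
    so the wrapper characters are stripped before building the URL.
    """
    reader = csv.DictReader(io.StringIO(goodreads_csv_text))
    for row in reader:
        # Remove the leading/trailing =" " wrapper around the ISBN field.
        isbn = (row.get("ISBN13") or "").strip().strip('="')
        if isbn:
            yield row.get("Title", ""), f"https://openlibrary.org/isbn/{isbn}.json"

# Example with a one-row export (quoting as it appears in a real CSV):
sample = 'Title,Author,ISBN13\nMatilda,Roald Dahl,"=""9780140328721"""\n'
print(list(openlibrary_urls(sample)))
```

Fetching each URL (for example with `urllib.request`) then returns Open Library's JSON record for that edition; the sketch keeps the URL construction separate so it can be tested without network access.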
Posted in Uncategorized | Tagged APIs

On Bookstores, Libraries & Archives in the Digital Age
By Brewster Kahle | Published: October ,

The following was a guest post by Brewster Kahle on Against the Grain (ATG) – Linking Publishers, Vendors, & Librarians.

Posted in Discussion, Librarianship, Uncategorized | Tagged founder

Amplifying the Voices Behind Books with the Power of Data
By mek | Published: September ,

Exploring how Open Library uses author data to help readers move from imagination to impact. By Nick Norman, edited by Mek & Drini. (Image source: Pexels / Pixabay from POPSUGAR.)

According to René Descartes, a creative mathematician, "the reading of all good books is like a conversation with the finest [people] of past centuries." If that's true, then who are some of the people you're talking to?

Posted in Community, Cultural Resources, Data | Tagged drini, features, mek, Nick Norman

Open Library is an initiative of the Internet Archive, a (c)( ) non-profit, building a digital library of Internet sites and other cultural artifacts in digital form. Other projects include the Wayback Machine, Archive.org and Archive-It.org. Your use of the Open Library is subject to the Internet Archive's Terms of Use.
The Butter and Egg Man

The Butter and Egg Man
Written by: George S. Kaufman
Date premiered: September ,
Place premiered: Longacre Theatre, New York City, New York, US
Original language: English
Setting: office of Lehmac Productions in New York City; a hotel in Syracuse

The Butter and Egg Man is a play by George S. Kaufman, the only play he wrote without collaborating. It was a Broadway hit during the – season at the Longacre Theatre.[ ] Adapted to film six times, it is still performed on stages today.

Synopsis

Gregory Kelly (center) in a scene from the Broadway production of The Butter and Egg Man ( ).

The play's title, of course, is Broadwayese of the moment—a generic name for all those gentlemen who come trustingly to Gotham with bankrolls, bent upon either reckless expenditure or equally reckless investment.
— From the dust jacket of the first edition of The Butter and Egg Man (Boni & Liveright)[ ]

A s slang term popularized by Texas Guinan,[ ] a butter-and-egg man is a traveling businessman eager to spend large amounts of money in the big city[ ]—someone wealthy and unwary.[ ] A souvenir booklet for the original production of The Butter and Egg Man devoted an entire page to the various claims of origin for the phrase.[ ]

Peter Jones is a young man who arrives on Broadway from Chillicothe, Ohio, hoping to invest $ , in a play and turn a profit sufficient to buy a local hotel back home. He is conned by Joe Lehman and Jack McClure into backing their play with a -percent stake. The play opens out of town in Syracuse and bombs. Lehman and McClure want out; Jones buys them out and revamps the play into a huge hit. Jones then sells back to them at a huge profit after learning of claims that the play was stolen, and returns home to get his hotel.[ ]

Kaufman's comedy may be seen as a precursor to Mel Brooks' The Producers.[ ]

Production

James Gleason directed the Broadway production of The Butter and Egg Man, in which his wife Lucile Webster (center) appeared with Gregory Kelly and Sylvia Field.
Promotional theatre token for the touring production of The Butter and Egg Man ( ).
Gregory Kelly as Peter Jones in The Butter and Egg Man ( ).
Scene from the production of The Butter and Egg Man at Otterbein University.

Produced by Crosby Gaige, The Butter and Egg Man opened at the Longacre Theatre on September , , and played for performances.[ ] James Gleason directed the following cast:[ ]

Gregory Kelly as Peter Jones
Sylvia Field as Jane Weston
Robert Middlemass as Joe Lehman
Lucile Webster as Fanny Lehman
John A. Butler as Jack McClure
Marion Barney as Mary Martin
Tom Fadden as a waiter
Harry Neville as Cecil Benham
Harry Stubbs as Bernie Sampson
Eloise Stream as Peggy Marlowe
Puritan Townsend as Kitty Humphries
Denman Maley as Oscar Fritchie
George Alison as A.J. Patterson

When the Broadway run of the play ended in April , Gregory Kelly starred in its national tour. Kelly had a heart attack in Pittsburgh in February , and the tour was abandoned.[ ] Kelly was transferred to a New York City sanitarium by his wife, actress Ruth Gordon, but he was unable to recover and died July , , at age .[ ][ ]

The London premiere of The Butter and Egg Man took place August , , at the Garrick Theatre. Presented by an American company, the play was performed times, closing on September , . Robert Middlemass reprised his Broadway role as Joe Lehman.[ ]

Revivals

Recent stagings of the play include a production by Punch Line Theatre in Manhattan,[ ] and one by the Off-Broadway Atlantic Theater Company.[ ][ ]

Reception

New York critics were unanimous in their praise for The Butter and Egg Man.[ ] In his review of the play's premiere, Gilbert W. Gabriel wrote in The Sun that it was "the wittiest and liveliest jamboree of the behind-the-scenes ever distilled from the atmosphere of Broadway."[ ] Walter Winchell wrote that "first nighters roared at the dialogue". "The audience nearly laughed itself to death", wrote John Anderson of the New York Evening Post. Alexander Woollcott called the play "richly and continuously amusing". "If you like smart, funny, sentimental, satirical comedies, here is a chance to enjoy yourself", wrote Percy Hammond of the New York Herald Tribune.[ ]

Adaptations

The Butter and Egg Man has been adapted for motion pictures six times:[ ]

It was first adapted for the screen (with the same name) as a silent film directed by Richard Wallace.[ ]
The American comedy film The Tenderfoot is based on the play.[ ]
The British film Hello, Sweetheart is based on the play.
The American film Dance Charlie Dance is based on the play.
The American film An Angel from Texas, which included Ronald Reagan in its cast, is based on the play.[ ]
The musical film Three Sailors and a Girl is based on the play.

P. G. Wodehouse adapted the play for his novel, Barmy in Wonderland.

References

^ a b c Gussow, Mel (December , ). "Theatre: The Butter and Egg Man". The New York Times. Retrieved - - .
^ "The Butter and Egg Man". Facsimile Dust Jackets LLC. Retrieved - - .
^ a b Bordman, Gerald & Thomas S. Hischak. The Oxford Companion to American Theatre, p. ( d ed.).
^ Allen, Irving Lewis ( ). The City in Slang: New York Life and Popular Speech. Oxford University Press. p.  . ISBN  - - - .
^ "Souvenir book for 'The Butter and Egg Man'" (PDF). Long Island News and Owl. December , . p.  . Retrieved - - .
^ "The Butter and Egg Man ( )". George S. Kaufman. Retrieved - - .
^ "The Butter and Egg Man". Internet Broadway Database. Retrieved - - .
^ Kaufman, George S. ( ) [ ]. The Butter and Egg Man: A Play in Three Acts. Samuel French. p.  .
^ "Gregory Kelly ill; play tour ends". The New York Times. February , . Retrieved - - .
^ "Gregory Kelly, actor, dies at ". The New York Times. July , . Retrieved - - .
^ Robinson, Lauren (May , ). "Untimely deaths of stage performers". MCNY Blog: New York Stories. Museum of the City of New York. Retrieved - - .
^ Wearing, J.P. The London Stage - : A Calendar of Productions, Performers, and Personnel, p. ( ).
^ Weber, Bruce ( October ). "George S. Kaufman's jet-paced solo flight". The New York Times.
^ Isherwood, Charles ( Oct ). "Review: The Butter and Egg Man". Variety.
^ a b Crosby Gaige Introduces George S. Kaufman's New Comedy The Butter and Egg Man with Gregory Kelly ( ). Promotional brochure for the original production at the Longacre Theatre, New York City.
^ Gabriel, Gilbert W. ( Sept ). "The drama's dairy godfather. Kaufman's 'The Butter and Egg Man' sells a thousand laughs at our theater's expense". The Sun, p. , col. .
^ Goble, Alan. The Complete Index to Literary Sources in Film, p. ( ). (The book lists six films based on the play: The Butter and Egg Man ( ), The Tenderfoot ( ), Hello Sweetheart ( ), Dance Charlie Dance ( ), Angel from Texas ( ), and Three Sailors and a Girl ( ).)
^ Hall, Mordaunt ( Aug ). "The Screen: The Worm That Turned". The New York Times.
^ Hall, Mordaunt ( May ). "Joe E. Brown in a boisterous film conception of the stage comedy, 'The Butter and Egg Man'". The New York Times.
^ Crowther, Bosley ( May ). "An Angel from Texas (review)". The New York Times.

External links

The Butter and Egg Man at the Internet Broadway Database
The Butter and Egg Man at georgekaufman.com
Arthur J. Cramp

British-born American medical doctor, researcher, and writer ( – ).

Arthur J. Cramp
Born: September , London, England
Died: November (aged ), Hendersonville, North Carolina
Nationality: English
Alma mater: Wisconsin College of Physicians and Surgeons
Occupation: Medical researcher and writer
Known for: Director of the Bureau of Investigation of the American Medical Association
Spouse(s): Lillian Torrey
Children: Torrey

Arthur Joseph Cramp (September , – November , ) was a medical doctor, researcher, and writer. He served as director of the American Medical Association's (AMA) Propaganda for Reform Department (later the Bureau of Investigation and, then, the Department of Investigation)[ ] from to . He was a regular contributor to the Journal of the American Medical Association (JAMA).[ ] Cramp was "a bitter opponent of proprietary and medicinal abuses."[ ] His three-volume series on nostrums and quackery, along with his public lectures to schools, professional groups, and civic organizations across the country,[ ] helped bring awareness to the problem of patent medicines, or nostrums, by subjecting the claims (made by predominantly non-medical people) to scientific analysis. He was critical of the Pure Food and Drug Act, and advocated stronger regulation of product labeling and advertising.[ ] In an article announcing his death, the AMA called him "a pioneer in the fight against quackery and fraud in the healing arts."[ ]

Early life and education

Arthur Joseph Cramp was born in London, England.[ ] His father was a blacksmith. He received his "preliminary education" in England[ ] before moving to the United States in his late teens.[ ][ ] Cramp, purportedly, decided to enter medical school after his infant daughter became ill and was treated by a quack.
She subsequently died.[ ][ ] Cramp received his training as a medical doctor from the Wisconsin College of Physicians and Surgeons at Milwaukee, where he graduated.[ ][ ][ ]

Career

Cramp taught science at the high school level in Milwaukee, Wisconsin,[ ][ ] and at the seminary and the Maryville, Missouri high school.[ ] He also worked at the Wisconsin Industrial School for Boys, a reformatory high school in Waukesha, Wisconsin, before entering medical school.[ ] While at the Wisconsin College of Physicians and Surgeons, Cramp worked as an assistant in chemistry.[ ]

Cramp joined the American Medical Association staff as an editorial assistant.[ ] He then became the director of the newly formed Propaganda for Reform Department.[ ] Cramp made it his mission to correspond with professionals and members of the public regarding medical treatments, products, and the business practices of individuals and companies involved in marketing them.[ ] His office also maintained a laboratory for testing various products.[ ] He wrote about many of these interactions and investigations in the Journal of the American Medical Association and Hygeia, a health magazine.[ ][ ]

Cramp's "fake file," listing "products, firms, and names of promoters", came to contain over , entries. He kept a "testimonial file" for doctors who endorsed proprietary drugs through testimonials: over , American doctors and , foreign doctors.[ ] His office became a clearing house for information regarding untested and, sometimes, dangerous practices. His department was aware of the health risks, as well as the financial losses, to consumers who were duped by fake medicine vendors.[ ]

Cramp advocated truth in advertising, particularly for general-consumption (patent) medicines containing "secret formulas,"[ ][ ] including alcohol.[ ] He and his office called for the standardization of medicines (ingredients and dosages) and education of the public on appropriate use.
He wrote, "When the public is properly informed, so that it knows what preparations to call for in order to treat its simpler ailments, advertising of secret remedies will be entirely unnecessary."[ ] He considered the emotive nature of radio advertisements for quack medicine more harmful than newspaper advertisements. According to Cramp, unlike radio, newspapers had "developed standards of decency and censorship" when determining whether or not to run the advertisements.[ ]

The Pure Food and Drugs Act of , followed by the Sherley Amendment, was an attempt to address these issues.[ ] However, Cramp warned that federal legislation attempting to address false advertising and interstate trafficking of products did not fully protect the public.[ ]

"No man has any moral right to so advertise as to make well persons think they are sick and sick persons think they are very sick. Such advertising is an offense against the public health." — Arthur J. Cramp[ ][ ]

Cramp retired from the Bureau due to ill health, after suffering a heart attack.[ ] Upon hearing of his retirement, the British Medical Journal published this statement: "The quack nostrum trade is international in its activities, and the British medical profession owes a great debt to Dr. Cramp for providing it with the information necessary for combating both home-produced and imported frauds.
we can only state our thanks and express the hope that he will enjoy the leisure he has earned by his many years of strenuous combat."[ ] nostrums and quackery[edit] in , cramp published the first of three volumes called nostrums and quackery,[ ] which would become "a veritable encyclopedia on the nostrum evil and quackery."[ ] the first volume contained the educational materials, case histories, and testimonials his department had been collecting.[ ] nostrums and quackery, volume ii, published in , was a collection of legal reports of case law involving nostrums and patent medicine reprinted from the journal of the american medical association meant to educate the general public. as reviewer joseph macqueen stated, "the matter that appears has been prepared and written in no spirit of malice, and with no object except that of laying before the public certain facts, the knowledge of which is essential to a proper concept of community health."[ ] cramp's nostrums and quackery and pseudo-medicine, volume iii, foreword by george h. simmons, editor emeritus of the journal of the american medical association,[ ] was published in . as described in the science news-letter, the book contained "terse, simple and factual accounts of hundreds of nostrums and the ways of pseudo-medical practitioners."[ ] this volume, more condensed than the first two volumes, indexed , "remedies."[ ] w.a. 
evans, in his review, wrote "when you have read this book you will consider credulity based on fiction rather drab."[ ] a sampling of "quack cures" which cramp included in his books and lectures: deafness "cures" (subjecting individuals with hearing loss to airplane nose-dives),[ ] beauty "cures" (hair dyes, freckle removers, and reducing lotions containing harmful ingredients or promoted with false claims about their efficacy),[ ] obesity "cures" (including tapeworms, products containing dinitrophenol, arsenic, and other dangerous substances),[ ] cancer "quackery" (alternate cancer therapies),[ ] "consumption cure quackery" (elixirs from a bottle whose "alleged cures for consumption are born weekly"),[ ] and the wilshire i-on-a-co (a magnetic belt purported to cure cancer, bright's disease and paralysis, pernicious anemia, deafness, muteness, and st. vitus' dance).[ ][ ] "the remedy for the menace of the fake consumption cure is education - and more education. people are gullible not because they lack brains, but because they lack knowledge. iteration and reiteration of the fundamental facts regarding the prevention and cure of tuberculosis is the only way of overcoming the present toll of human life taken by the consumption quack cure." — arthur j. cramp[ ] memberships as reported in jama,[ ] cramp was a member of the following: associate fellow of the american medical association, indiana state medical association, society of medical history of chicago, institute of medicine of chicago, royal institute of public health, chicago ornithology society, phi rho sigma, and the chicago library club. personal life cramp was married to lilly torrey of skidmore, missouri,[ ] daughter of l.n. torrey.[ ] they had a daughter, torrey, who died on january , . the infant's death was caused by seizures related to meningitis.[ ] death cramp died on november , , in hendersonville, north carolina. he was .
the cause of death was, reportedly, arteriosclerosis and uremia.[ ] selected articles modern advertising and the nostrum ( )[ ] the nostrum and the public health ( )[ ] self-doctoring ( )[ ] patent medicines: what is a 'patent medicine' and why? ( )[ ] patent medicines: what protection does the national food and drugs act give? ( )[ ] therapeutic thaumaturgy ( )[ ] i-on-a-co - the magic horse collar? ( )[ ] the nostrum and the public health ( )[ ] the bureau of investigation of the american medical association ( )[ ] the work of the bureau of investigation ( )[ ] salts and crystals quackery ( )[ ] books nostrums and quackery: articles on the nostrum evil, quackery and allied matters affecting the public health, volume ( ) nostrums and quackery: articles on the nostrum evil, quackery and allied matters affecting the public health, volume ( ) nostrums and quackery and pseudo-medicine, volume ( ) references ^ a b c d e f g h i j k blaskiewicz, robert; jarsulic, mike (november ). "arthur j. cramp: the quackbuster who professionalized american medicine". skeptical inquirer. ( ): – . retrieved december . ^ a b c d e f g h "deaths. cramp, arthur joseph". jama. ( ): . december , . doi: . /jama. . . ^ a b jackson, charles o. ( ). "through the looking glass". food and drug legislation in the new deal. princeton university press. pp.  – . jstor j.ctt x b x. . ^ a b c d "dr. cramp to speak here". dubuque telegraph herald and times journal. dubuque, iowa. november , . p.  . ^ a b c d e f g h i young, james harvey ( ). "the new muckrakers". the medical messiahs: a social history of medical quackery in th century america. princeton university press. pp.  – . jstor j.ctt x dp . . ^ fishbein, morris (december ). "the protection of the consumer of food and drugs: a symposium". law and contemporary problems. duke university school of law. ( ): – . jstor  . ^ a b "dr. arthur j. cramp". the maryville daily forum ( ( )). november , . p.  . ^ a b c d "the work of dr.
cramp". the british medical journal. bmj. ( ): . march , . jstor  . ^ "medical news". the british medical journal. bmj. ( ): . april , . jstor  . ^ gibbons, roy j. (march , ). "pink pills and quacks cost americans millions". iowa city press citizen. iowa city, iowa. p.  . ^ a b cramp, arthur j. (december ). "the work of the bureau of investigation". law and contemporary problems. ( ): – . doi: . / . jstor  . ^ "alky in medicine defeats dry law: chicago doctors say prohibition won't be success until it's removed". mitchell evening republic. south dakota. may , . p.  . ^ "would bar 'quacks' from radio talks: doctors urge federal commission and broadcasters to revise programs". new york times. may , . p.  . ^ "health notes". santa ana register. santa ana, california. april , . p.  . ^ "dr. cramp declares promoting sale of "patent medicines" denounces advertising of so-called cures". the ogden standard. ogden, utah. october , . p.  . ^ macqueen, joseph (january , ). "books". oregonian. portland, oregon: morning oregonian. p.  . ^ "first glances at new books". the science news-letter. ( ): – . april , . jstor  . ^ evans, dr. w.a. (march , ). "how to keep well". times-picayune. new orleans, louisiana. p.  . ^ a b "ridicules flying as deafness cure: dr. cramp tells association for hard of hearing air stunts may be harmful". new york times. june , . p.  . retrieved december . ^ farrell, amy edman ( ). "fat, modernity, and the problem of excess". fat shame: stigma and the fat body in american culture. nyu press. pp.  – . jstor j.ctt qg v . . ^ clow, barbara ( ). "the contours of legitimate medicine: doctors, alternative practitioners, and cancer". negotiating disease: power and cancer care, - . mcgill-queen's university press. jstor j.ctt vkn. . ^ a b "quackery cure for tb scored". the daily herald. biloxi, mississippi. september , . p.  . ^ fishbein, morris (october ). "the month in medical science". scientific american. ( ): – . bibcode: sciam. .. f. doi: . 
/scientificamerican - . jstor  . ^ davis, jr., donald g. (december ). "the ionaco of gaylord wilshire". southern california quarterly. ( ): – . doi: . / . jstor  . ^ "mrs. cramp visits here". maryville daily democrat forum. maryville, missouri. august , . ^ cramp, arthur j. (october , ). "modern advertising and the nostrum". american journal of public health. ( ): – . doi: . /ajph. . . . pmc  . pmid  . ^ cramp, arthur j. (may , ). "the nostrum and the public health". jama. ( ): . doi: . /jama. . . retrieved december . ^ cramp, arthur j. (may , ). "self-doctoring". new castle news. new castle, pennsylvania. ^ cramp, arthur j. (april ). "patent medicines: what is a 'patent medicine' and why?". hygeia. chicago, il: american medical association. ( ): – . ^ cramp, arthur j. (may ). "patent medicines: what protection does the national food and drugs act give?". hygeia. ( ): . ^ cramp, arthur j. ( ). "therapeutic thaumaturgy". american mercury. : – . ^ cramp, arthur j. (february ). "i-on-a-co - the magic horse collar?". hygeia. v: . ^ cramp, arthur j. (december , ). "the nostrum and the public health". new england journal of medicine. ( ): – . doi: . /nejm . ^ cramp, arthur j. (july–august ). "the bureau of investigation of the american medical association". the american journal of police science. ( ): – . doi: . / . jstor  . ^ cramp, arthur j. (july ). "salts and crystals quackery". hygeia. : . further reading boyle, eric w. ( ). quack medicine: a history of combating health fraud in twentieth-century america. praeger. pp. - . isbn  - - - - young, james harvey ( ). "arthur cramp: quackery foe". pharmacy in history. ( ): – .
retrieved from "https://en.wikipedia.org/w/index.php?title=arthur_j._cramp&oldid= "

improving college students’ fact-checking strategies through lateral reading instruction in a general education civics course. cognitive research: principles and implications. original article, open access. published: march . jessica e.
brodsky, patricia j. brooks, donna scimeca, ralitsa todorova, peter galati, michael batson, robert grosso, michael matthews, victor miller & michael caulfield. cognitive research: principles and implications, volume  , article number:  ( ). abstract college students lack fact-checking skills, which may lead them to accept information at face value. we report findings from an institution participating in the digital polarization initiative (dpi), a national effort to teach students lateral reading strategies used by expert fact-checkers to verify online information. lateral reading requires users to leave the information (website) to find out whether someone has already fact-checked the claim, identify the original source, or learn more about the individuals or organizations making the claim. instructor-matched sections of a general education civics course implemented the dpi curriculum (n =  students) or provided business-as-usual civics instruction (n =  students). at posttest, students in dpi sections were more likely to use lateral reading to fact-check and correctly evaluate the trustworthiness of information than controls. aligning with the dpi’s emphasis on using wikipedia to investigate sources, students in dpi sections reported greater use of wikipedia at posttest than controls, but did not differ significantly in their trust of wikipedia. in dpi sections, students who failed to read laterally at posttest reported higher trust of wikipedia at pretest than students who read at least one problem laterally. responsiveness to the curriculum was also linked to the number of online assignments attempted, but unrelated to pretest media literacy knowledge, use of lateral reading, or self-reported use of lateral reading.
further research is needed to determine whether improvements in lateral reading are maintained over time and to explore other factors that might distinguish students whose skills improved after instruction from non-responders. introduction young adults (ages –  years) and individuals with at least some college education are the highest internet users in the usa (pew research center, a). these groups are also most likely to use at least one social media site (pew research center, b). despite their heavy internet and social media use, college students rarely “read laterally” to evaluate the quality of the information they encounter online (mcgrew et al., ). that is, students do not attempt to seek out the original sources of claims, research the people and/or organizations making the claims, or verify the accuracy of claims using fact-checking websites, online searches, or wikipedia (wineburg & mcgrew, ). the current study reports findings from one of eleven colleges and universities participating in the digital polarization initiative (dpi), a national effort by the american democracy project of the american association of state colleges and universities to teach college students information-verification strategies that rely on lateral reading for online research (american democracy project, n.d.; caulfield, a). the dpi curriculum was implemented across multiple sections of a general education civics course, while other sections taught by the same instructors received the “business-as-usual” civics curriculum. we evaluated the impact of the dpi curriculum on students’ use of lateral reading to accurately assess the trustworthiness of online information, as well as their use and trust of wikipedia. we also examined factors that might influence whether students showed gains in response to the curriculum, such as their prior media literacy knowledge. how do fact-checkers assess the trustworthiness of online information?
fact-checking refers to a process of verifying the accuracy of information. in journalism, this process occurs internally before publication as well as externally via articles evaluating the accuracy of publicly available information (graves & amazeen, ). ethnographic research on the practices of professional fact-checkers found that fact-checking methodology involves five steps: “choosing claims to check, contacting the speaker, tracing false claims, dealing with experts, and showing your work” (graves, , p. ). interest in the cognitive processes and strategies of professional fact-checkers is not surprising in light of concerns about the rapid spread of false information (i.e., “fake news”) via social media platforms (pennycook et al., ; vosoughi et al., ), as well as the emergence of fact-checking organizations during the twenty-first century, especially in the usa (amazeen, ). when assessing the credibility of online information, professional fact-checkers first “take bearings” by reading laterally. this means that they “[leave] a website and [open] new tabs along the browser’s horizontal axis, drawing on the resources of the internet to learn more about a site and its claims” (wineburg & mcgrew, , p. ). this practice allows them to quickly acquire background information about a source. when reading laterally, professional fact-checkers also practice “click restraint,” meaning that they review search engine results before selecting a result and rely on their “knowledge of digital sources, knowledge of how the internet and searches are structured, and knowledge of strategies to make searching and navigating effective and efficient” (wineburg & mcgrew, , p. ). in contrast to professional fact-checkers, both historians and college students are unlikely to read laterally when evaluating online information (wineburg & mcgrew, ). how do college students assess the trustworthiness of online information? 
how individuals assess the credibility of information has been studied across a variety of fields, including social psychology (e.g., work on persuasion), library and information science, communication studies, and literacy and discourse (see brante & strømsø, for a brief overview). when assessing the trustworthiness of online social and political information, college students tend to read vertically. this means that they look at features of the initial webpage for cues about the reliability of the information, such as its scientific presentation (e.g., presence of abstract and references), aesthetic appearance, domain name and logo, and the usefulness of the information (brodsky et al., ; mcgrew et al., ; wineburg & mcgrew, ; wineburg et al., ). college students’ use of non-epistemic judgments (i.e., based on source features) rather than epistemic judgments (i.e., based on source credibility or corroboration with other sources) has also been observed in the context of selecting sources to answer a question and when ranking the reliability of sources (list et al., ; wiley et al., ). when provided with opportunities to verify information, adults (including college students) rarely engage in online searches and when they do, they usually stay on google’s search results page (donovan & rapp, ). while looking for information, college students rely on the organization of search engine results and prior trust in specific brands (e.g., google) for cues about the credibility of the information (hargittai et al., ). low search rates, superficial search behaviors, and reliance on cognitive heuristics (e.g., reputation, endorsement by others, alignment with expectations) may be indicative of a lack of ability or lack of motivation to engage in critically evaluating the credibility of online information. 
according to the dual processing model of credibility assessment, use of more effortful evaluation strategies depends on users’ knowledge and skills, as well as their motivation (metzger, ; metzger & flanagin, ). drawing on the heuristic-systematic model of information processing (chen & chaiken, ), metzger and colleagues argue that the need for accuracy is one factor that motivates users to evaluate the credibility of information. users are more likely to put effort into evaluating information whose accuracy is important to them. in cases where accuracy is less important, they are likely to use less effortful, more superficial strategies, if any strategies at all. teaching college students to read laterally the current study focuses on teaching college students to read laterally when assessing the trustworthiness of online information. however, a number of other approaches have already been used to foster students’ credibility evaluation knowledge and skills. lateral reading contrasts with some of these approaches and complements others. for example, teaching students to quickly move away from the original content to consult other sources contrasts with checklist approaches that encourage close reading of the original content (meola, ). one popular checklist approach is the craap test, which provides an extensive list of questions for examining the currency, relevance, authority, accuracy, and purpose of online information (blakeslee, ; musgrove et al., ). on the other hand, lateral reading complements traditional sourcing interventions that teach students how to identify and leverage source information when assessing multiple documents (brante & strømsø, ). more specifically, lateral reading instruction emphasizes that students need to assemble a collection of documents in order to be able to assess information credibility, identify biases, and corroborate facts. lateral reading also aligns with aims of media, news, and information literacy instruction. 
media literacy instruction teaches students how to access, analyze, evaluate, create, reflect, and act on media messages as means of both protecting and empowering them as media consumers and producers (hobbs, , ). media literacy interventions can increase students’ awareness of factors that may affect the credibility of media messages, specifically that media content is created for a specific audience, is subject to bias and multiple interpretations, and does not always reflect reality (hobbs & jensen, ; jeong et al., ). these media literacy concepts also apply in the context of news media (maksl et al., ). lateral reading offers a way for students to act on awareness and skepticism fostered through media and news literacy interventions by leaving the original messages in order to investigate sources and verify claims. while media and news literacy instruction focuses on students’ understanding of and interactions with media content, information literacy instruction teaches students how to search for and verify information online (koltay, ). being information literate includes understanding that authority is constructed and contextual and “us[ing] research tools and indicators of authority to determine the credibility of sources, understanding the elements that might temper this credibility” (association of college & research libraries, , p. ). lateral reading offers one means of investigating the authority of a source, including its potential biases (faix & fyn, ). lateral reading is also a necessary component of “civic online reasoning” during which students evaluate online social and political information by researching a source, assessing the quality of evidence, and verifying claims with other sources (mcgrew et al., ). mcgrew et al. ( ) conducted a pilot study of a brief in-class curriculum for teaching undergraduate students civic online reasoning. 
one session focused explicitly on teaching lateral reading to learn more about a source, while the second session focused on examining evidence and verifying claims. civic online reasoning was assessed using performance-based assessments similar to those used in their study (mcgrew et al., ). students who received the curriculum were more likely to make modest gains in their use of civic online reasoning, as compared to a control group of students who did not receive the curriculum. aligning with this approach, the american democracy project of the american association of state colleges and universities organized the digital polarization initiative (dpi; american democracy project, n.d.) as a multi-institutional effort to teach college students how to read laterally to fact-check online information. students were instructed to practice four fact-checking “moves”: ( ) “look for trusted work” (search for other information on the topic from credible sources), ( ) “find the original” (search for the original version of the information, particularly if it is a photograph), ( ) “investigate the source” (research the source to learn more about its agenda and biases), and ( ) “circle back” (be prepared to restart your search if you get stuck) (caulfield, a). because emotionally arousing online content is more likely to be shared (berger & milkman, ), students were also taught to “check their emotions,” meaning that they should make a habit of fact-checking information that produces a strong emotional response. in the current study, we were interested in fostering students’ use of lateral reading to accurately assess the trustworthiness of online content. 
therefore, we focused specifically on students’ use of the first three fact-checking “moves.” these moves are all examples of lateral reading, as they require students to move away from original content and conduct searches in a new browser window (wineburg & mcgrew, ), and align with the practices of professional fact-checkers. while the dpi curriculum also taught the move of “circling back” and encouraged students to adopt the habit of “checking their emotions,” this move and habit are difficult to assess through performance-based measures and were not the focus of the assessments or analyses presented here. research objectives we present results from an efficacy study that used the american democracy project’s dpi curriculum to teach college students fact-checking strategies through lateral reading instruction. students in several sections of a first-year, general education civics course received the dpi curriculum in-class and completed online assignments reinforcing key information and skills, while other sections received the “business-as-usual” civics instruction. we were interested in whether students who received the dpi curriculum would be more likely to use lateral reading to correctly assess the trustworthiness of online content at posttest, as compared to “business-as-usual” controls. additionally, we wanted to know the extent to which attempting the online assignments, which reviewed the lateral reading strategies and provided practice exercises, contributed to students’ improvement. as part of the analyses, we controlled for prior media literacy knowledge. even though media literacy has not been tied directly to the ability to identify fake news (jones-jang et al., ), students with greater awareness of the media production process and skepticism of media coverage may be more motivated to investigate online content. 
as part of the team implementing the dpi curriculum, we were provided with performance-based assessments like the ones used by mcgrew et al. ( ) and mcgrew et al. ( ) to assess students’ lateral reading at pretest and posttest. these types of assessments are especially critical given findings that college students’ self-reported information evaluation strategies are often unrelated to their observed behaviors (brodsky et al., ; hargittai et al., ; list & alexander, ). in light of previous research on the disconnect between students’ self-reported and observed information-evaluation behaviors, we also examined whether students who received the dpi curriculum were more likely to self-report use of lateral reading at posttest, as compared to “business-as-usual” controls. in the dpi curriculum, one of the sources that students are encouraged to consult when reading laterally is wikipedia. even though they are often told by secondary school teachers, librarians, and other college instructors that wikipedia is not a reputable source (garrison, ; konieczny, ; polk et al., ), students may rely on wikipedia to acquire background information on a topic at the start of their searches (head & eisenberg, ). therefore, we were interested in whether college students who received the dpi curriculum would report higher use of and trust of wikipedia at posttest, as compared to “business-as-usual” controls. lastly, for students who received the dpi curriculum, we explored factors that might distinguish students who used lateral reading to correctly assess the trustworthiness of online content at posttest from their classmates who did not read laterally. in an effort to distinguish groups, we compared students on their use of lateral reading at pretest and their self-reported use of lateral reading at pretest. we also examined group differences in general media literacy knowledge at pretest, use of and trust of wikipedia at pretest, and number of online homework assignments attempted.
methods participants first-year college students (n =  ) enrolled in a general education civics course at a large urban public university in the northeastern usa took part in the study. the university has an open-admission enrollment policy and is designated as a hispanic-serving institution. students took classes at main and satellite campuses, both serving mostly commuter students. participants’ self-reported demographics are presented in table . almost half ( . %) were first-generation students (i.e., neither of their parents attended college). table: participants’ self-reported demographics for matched sections (n =  ; ndpi =  , ncontrol =  ). prior to the outset of the semester, the course instructors received training in the dpi curriculum and met regularly throughout the semester to go over lesson plans and ensure fidelity of instruction. four instructors taught “matched” sections of the civics course, i.e., at least one section that received the dpi curriculum and at least one section that was a “business-as-usual” control. two of the instructors taught one dpi section and one control section at the main campus, one instructor taught one dpi and one control section at the satellite campus, and one instructor taught one dpi and one control section at the main campus and one dpi section at the satellite campus. across the matched sections, we had n =  students in the five dpi sections and n =  students in the four control sections. the research protocol was classified as exempt by the university’s institutional review board. the dpi curriculum students in dpi and control sections completed the online pretest in week and online posttest in week of a -week semester. the pretest and posttest were given as online assignments and were graded based on completion.
for the pretest and posttest, materials were presented in the following order: lateral reading problem set, demographic questions, wikipedia use and trust questions, self-reported use of lateral reading strategies, general media literacy scale, and language background questions. all materials are described below. in the dpi sections, instructors spent three class sessions in weeks and introducing students to the four fact-checking “moves” using two slide decks provided by developers of the dpi curriculum to colleges and universities participating in this american democracy project initiative. a script accompanying the slide decks guided instructors through explaining and demonstrating the moves to students. the slide decks included many examples of online content for instructors and students to practice fact-checking during class. the in-class dpi curriculum drew heavily on concepts and materials from caulfield ( a). in the first slide deck, students were introduced to the curriculum as a way to help them determine the trustworthiness of online information. the four moves (look for trusted work, find the original, investigate the source, and circle back) were framed as “quick skills to help you verify and contextualize web content.” students learned about the difference between vertical and lateral reading in the context of investigating the source. they also practiced applying three of the moves (looking for trusted work, finding the original, and investigating the source) to fact-check images, news stories, and blog posts by using the following techniques: checking google news and fact-checking sites to find trusted coverage of a claim, using reverse image search to find the original version of an image, and adding wikipedia to the end of a search term to investigate a source on wikipedia. 
in the second slide deck, students reviewed the three moves of looking for trusted work, finding the original, and investigating the source, as well as their associated techniques. students were reminded that the fourth move, circle back, involved restarting the search if their current search was not productive. students then learned that, in addition to using a reverse search to find the original version of an image, they could find the original source of an article by clicking on links. for investigating the source, students were told that they could also learn more about a source by looking for it in google news. the remainder of the slide deck provided a variety of online content for students to practice fact-checking information using the four moves. in weeks and , students in dpi sections spent three class sessions practicing evaluating online content related to immigration. this topic was chosen because it aligned with course coverage of social issues in the usa. students were also given three online assignments to review and practice the strategies at home using online content related to immigration. these online assignments were graded based on completion and are described in detail below. aside from giving the pretest and posttest as online assignments, instructors in control sections followed the standard civics curriculum (i.e., “business as usual”), which focused on the us government, society, and economy, with no mention of lateral reading strategies and/or how to evaluate online content. as students in the control sections did not complete the three interim online homework assignments, the instructors implemented their regular course assignments, such as group projects. pretest, posttest, and online assignments were all administered via qualtrics software with the links posted to the blackboard learning management system. 
the script, slide decks, and online homework assignments are publicly available in an online repository.footnote lateral reading problems two sets of lateral reading problems (problem sets a and b) were provided by the developers of the dpi curriculum to all campuses. problems were adapted from the stanford history education group’s civic online reasoning curriculum (stanford history education group, n.d.) and from the four moves blog (caulfield, b). to ensure fidelity of implementation across campuses, we did not make any changes to the problem sets. students completed one of the lateral reading problem sets (a or b) as a pretest and the other problem set as a posttest. set order was counterbalanced across instructors: students in sections taught by two instructors received problem set a at pretest and problem set b at posttest, and students in sections taught by the other two instructors received problem set b at pretest and problem set a at posttest. each problem set consisted of one of each of four types of lateral reading problems determined by the developers of the dpi curriculum. the problems in each set included some problems with accurate online content, while other problems featured online content that was less trustworthy. each problem was labeled by its problem type in order to frame the problem, but students could use multiple lateral reading strategies to fact-check each problem. for each problem, students indicated their level of trust in the online content using a likert scale ranging from  = very low to  = very high. students could also indicate that they were unsure (−  ). students were then prompted to “explain the major factors in deciding your level of trust” using an open-response textbox. see table for a list of each problem type, problem set, online content used, and correct trust assessments and fig.  for screenshots of two example problems. 
table problem type, online content, and correct trust assessment for problem sets a and b. fig. screenshots of two of the lateral reading problems. note: the left panel shows the sourcing evidence problem from problem set a, and the right panel shows the clickbait science and medical disinformation problem from problem set b. scoring of lateral reading problems the dpi provided a rubric for scoring student responses to the prompt “explain the major factors in deciding your level of trust”:  = made no effort,  = reacted to or described original content,  = indicated investigative intent, but did not search laterally,  = conducted a lateral search using online resources such as search engines (e.g., google), wikipedia, or fact-checking sites (e.g., snopes, politifact) but failed to correctly evaluate the trustworthiness of the content (i.e., came to the incorrect conclusion or focused on researching an irrelevant aspect of the content to inform their decision), or  = conducted a lateral search and correctly evaluated the trustworthiness of the content. we established inter-rater reliability using the dpi’s rubric by having two authors independently score a randomly selected . % of the responses for each lateral reading problem in each problem set.footnote since we used an ordinal scoring scheme ranging from to , we calculated weighted cohen’s kappa k =  . as a measure of inter-rater agreement, which takes into account the closeness of ratings (cohen, ). all disagreements were resolved through discussion. the authors then divided and independently coded the remaining responses. given the volume of responses, we decided to verify manual scores of using an automated approach. first, we identified keywords that were indicative of use of lateral reading and searched each response for those keywords.
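The weighted kappa statistic described above can be sketched in Python; this is an illustrative standard-library implementation of linear-weighted Cohen's kappa (the authors' exact software is not specified here), assuming the rubric's five levels are coded 0 through 4:

```python
def weighted_kappa(rater1, rater2, n_levels=5):
    """Linear-weighted Cohen's kappa for two lists of ordinal scores.

    Disagreements are penalized in proportion to their distance on the
    scale, so a 3-vs-4 split counts less against agreement than 0-vs-4.
    """
    n = len(rater1)
    # observed confusion matrix of paired ratings
    obs = [[0] * n_levels for _ in range(n_levels)]
    for a, b in zip(rater1, rater2):
        obs[a][b] += 1
    # marginal totals for each rater
    m1 = [sum(row) for row in obs]
    m2 = [sum(obs[i][j] for i in range(n_levels)) for j in range(n_levels)]
    num = den = 0.0
    for i in range(n_levels):
        for j in range(n_levels):
            w = abs(i - j) / (n_levels - 1)  # linear disagreement weight
            num += w * obs[i][j]             # observed weighted disagreement
            den += w * m1[i] * m2[j] / n     # chance-expected disagreement
    return 1.0 - num / den
```

Perfect agreement yields 1.0, and near-misses on the ordinal scale reduce kappa less than extreme disagreements, which is the property the article invokes when it says the statistic "takes into account the closeness of ratings."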
keywords were determined using a top-down and bottom-up approach, meaning that some words came from the curriculum, while other words were selected by scanning students’ responses. table presents keywords and sample responses for keywords. responses that used at least one keyword were scored as , indicating that the student read laterally. responses that did not use any keywords were scored , indicating that the student did not read laterally. next, we scored responses on the likert scale asking about the trustworthiness of the online content as for incorrect trust assessment and for correct trust assessment (see table ). lastly, we combined the keyword and trust scores so that indicated no use of lateral reading or use of lateral reading but with an incorrect trust assessment, and indicated use of lateral reading with a correct trust assessment, which was equivalent to a manual score of . table keywords used to automatically score responses for lateral reading. we next reviewed responses where manual and automated scores did not match ( out of responses =  . %, cohen’s kappa k =  . ).footnote twenty-three were false positives (i.e., had an automated score of and a manual score of or less), and were false negatives (i.e., had an automated score of and a manual score of ). in six of the false-negative responses, students expressed a trust assessment in their open-ended response that explicitly contradicted their trust assessment on the likert scale. all disagreements were resolved in favor of the manual scoring. self-reported use of lateral reading strategies students used a -point likert scale ranging from  = never to  = constantly to respond to the prompt “how frequently do you do the following when finding information online for school work?” for the three fact-checking moves requiring lateral reading and the habit of checking their emotions.
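The keyword-and-trust combination described above can be sketched as follows. This is a minimal illustration, not the authors' code, and the keyword list is hypothetical (the study's actual keywords appear in a table not reproduced in this text):

```python
# Illustrative keyword list; the study's real keywords came from the
# curriculum (top-down) and from scanning responses (bottom-up).
KEYWORDS = ["google", "wikipedia", "snopes", "politifact", "searched",
            "looked up", "fact-check"]

def keyword_score(response: str) -> int:
    """1 if the open-ended response mentions any lateral-reading keyword
    (i.e., the student appears to have searched beyond the original
    content), else 0."""
    text = response.lower()
    return int(any(kw in text for kw in KEYWORDS))

def combined_score(response: str, trust_correct: bool) -> int:
    """1 only when the response shows lateral reading AND the Likert
    trust assessment was correct; 0 otherwise. This mirrors the top
    manual rubric score."""
    return int(keyword_score(response) == 1 and trust_correct)
```

As the article notes, responses where this automated score disagreed with the manual rubric score were then re-examined by hand, with all disagreements resolved in favor of the manual scoring.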
each move was described in lay terms to make it clear for students in control sections who were not exposed to the dpi curriculum. look for trusted work was presented as “check the information with another source,” find the original was presented as “look for the original source of the information,” and investigate the source was presented as two items: “find out more about the author of the information” and “find out more about who publishes the website (like a company, organization, or government).” check your emotions was presented as “consider how your emotions affect how you judge the information,” but was not included in analyses because it reflects a habit, rather than a lateral reading strategy. the four-item scale showed good internal consistency at pretest (α = . ). use of wikipedia students were asked to respond to the question “how often do you use wikipedia to check if you can trust information on the internet?” using a -point likert scale ranging from  = never to  = constantly. trust of wikipedia students were asked to respond to the question “to what extent do you agree with the statement that ‘people should trust information on wikipedia’?” using a -point likert scale ranging from  = strongly disagree to  = strongly agree. general media literacy knowledge scale students completed an -item scale ( reverse-scored items) assessing general and news media literacy knowledge (adapted from ashley et al., , and powers et al., ). for each statement, students indicated the extent to which they agreed or disagreed with the statement using a -point likert scale ranging from  = strongly disagree to  = strongly agree. the -item scale showed adequate internal consistency at pretest (α = . ); reliability increased after removing an item with low item-rest correlation (–. ) (α = . ). the -item scale was used in analyses.
an exploratory principal components analysis conducted using ibm spss statistics (version ) found four components with clustering primarily based on whether or not the item was reverse-scored.footnote therefore, we interpreted clustering based on reverse-coding to be a statistical artifact and treated the scale as unidimensional. see “appendix” for students’ agreement on each item by condition at pretest. to determine accuracy of students’ media literacy knowledge, scores were recoded such that scores of through were recoded as (inaccurate) and scores of and were recoded as (accurate). “appendix” also reports accuracy on each item by condition at pretest. online homework assignments students in the dpi sections completed three online assignments to practice the lateral reading strategies covered in class. for each assignment, students were prompted to recall the four moves and a habit for reading laterally, saw slides and videos reviewing the four moves and a habit, and practiced using the four moves and a habit to investigate the validity of online content related to immigration, a topic covered in the civics course. online content was selected from the four moves blog (caulfield, b). the first homework assignment asked students to investigate an article from city journal magazine titled “the illegal-alien crime wave” (caulfield, c), the second assignment asked students to investigate a photograph that purported to show a child detained in a cage by us immigration and customs enforcement (caulfield, b), and the last assignment asked students to investigate a facebook post claiming that border patrol demanded that passengers on a greyhound bus show proof of citizenship (caulfield, a). the online assignments are publicly available in an online repository.footnote results results are organized by research questions. all analyses were run in r (version . . ; r core team, ; rstudio team, ). 
preliminary analyses of lateral reading at pretest prior to conducting analyses to compare students who received the dpi curriculum with “business-as-usual” controls on lateral reading at posttest, we ran a series of preliminary analyses on the pretest data to assist us in formulating the models used to evaluate posttest performance. we first examined whether students’ average scores on lateral reading problems differed by instructor or condition at pretest. for this set of analyses the dependent variable was each student’s average score across the four problems, as assessed via the dpi rubric ( to ). students’ average scores at pretest did not differ significantly by condition (mdpi =  . , sd =  . and mcontrol =  . , sd =  . ; t( ) =  . , p = . ), see table for breakdown by problem and condition. a one-way between-group anova with the instructor as the between-group variable and average score across the four problems as the dependent variable indicated that pretest performance did not differ by instructor (f( , ) =  . , p = . , ηp  =  . ). table mean score for students in each condition for each problem at pretest and posttest (n =  ; ndpi =  , ncontrol =  ). at the level of individual students, . % of students received a score of (i.e., read laterally and correctly assessed trustworthiness) for at least one problem at pretest ( . % of students in the dpi sections and . % in the control sections; see table for breakdown by problem type and condition). there was no significant difference across conditions, χ²( ) =  . , p = . , or instructor, fisher’s exact test p = . . therefore, to evaluate the effectiveness of the dpi curriculum, we chose to examine differences in students’ scores only at posttest. for the posttest models, we created a control variable to indicate whether or not the student had engaged in lateral reading and drew the correct conclusion about the trustworthiness of the online content on one or more problems at pretest.
we also included a control variable for the instructor to account for possible differences in the fidelity of implementation of the dpi curriculum. table percentage of students in each condition who received a score of (i.e., read laterally and drew the correct conclusion about the trustworthiness of the online content) on each problem type at pretest and posttest (n =  ; ndpi =  , ncontrol =  ). we next examined whether problem sets a and b and the four types of problems were of equal difficulty at pretest. students’ average score across the four problems did not differ significantly by problem set (mset a =  . , sd =  . and mset b =  . , sd =  . ; t( ) =  . , p = . ). to examine differences in scores by problem type, we conducted a one-way repeated-measures anova with problem type as a within-subject variable and score as the dependent variable. with a greenhouse–geisser correction for lack of sphericity, there was a main effect of problem type, f( . , . ) =  . , p = . , ηp  = . . post hoc tests with tukey adjustment for multiple comparisons indicated that the fake news problem type was harder than the photo evidence problem type (p = . ). all other problem types were of comparable difficulty. for each problem type, sets a and b were of comparable difficulty, except for the sourcing evidence problem type, where set a had an easier problem (m =  . , sd =  . ) than set b (m =  . , sd =  . ), t( . ) =  . , p < . . we retained problem type as a control variable in the posttest models. problem set order was counterbalanced at the level of instructor and therefore fully confounded with instructor (see above); hence, we chose not to include problem set as a control variable in order to be able to retain instructor as a control variable in the posttest models. differences in online homework attempts among students who received the dpi curriculum, . % of students attempted no online homework assignments, . % attempted one homework assignment, .
% attempted two assignments, and . % attempted all three online homework assignments. on average, students in the dpi sections attempted . assignments (sd =  . ). given different rates of engagement with the assignments, we included the number of assignments attempted in the posttest models. differences in general media literacy knowledge across both conditions, students demonstrated high general media literacy knowledge at pretest (magreement =  . , sd =  . ; maccuracy =  . %, sd =  . %). students’ agreement as assessed via the likert scale did not differ significantly by condition (mdpi =  . , sd =  . and mcontrol =  . , sd =  . ; t( ) =  . , p = . ). the accuracy of students’ knowledge also did not differ significantly by condition (mdpi =  . %, sd =  . % and mcontrol =  . %, sd =  . %; t( ) =  . , p = . ). see “appendix” for mean agreement and accuracy per question at pretest by condition. changes in lateral reading at posttest at posttest, students in dpi sections had an average score of m =  . (sd =  . ) across the four problems and received a score of on an average of . problems (sd =  . ). in contrast, students in control sections had an average score of m =  . (sd =  . ) and received a score of on an average of . problems (sd =  . ). to address our primary research question, we ran a mixed-effects ordinal logistic regression model with a logit link using the clmm function of the ordinal package (christensen, ) in r (r core team, ; rstudio team, ); see table . for each posttest problem, our ordinal dependent variable was the student’s score on the – scale from the dpi rubric. we included an intercept-only random effect for students. our fixed effects were media literacy knowledge at pretest, use of lateral reading to make a correct assessment at pretest, instructor, problem type, condition (dpi vs. control), and the number of online assignments attempted. 
table mixed-effects ordinal logistic regression model used to predict score for each problem on a scale of to (n =  ). overall, the full model with all fixed effects and the random effect of student fit significantly better than the null model with only the random effect of student (χ²( ) =  . , p < . ). for each fixed effect, we compared the fit of the full model to the fit of the same model with the fixed effect excluded. this allowed us to determine whether including the fixed effect significantly improved model fit; see table for model comparisons. all control variables (i.e., media literacy knowledge at pretest, use of lateral reading to make a correct assessment at pretest, instructor, and problem type) significantly improved model fit or approached significance as predictors of students’ scores on lateral reading problems. controlling for all other variables, students in the dpi sections were more likely to score higher on lateral reading problems than students in the control sections. attempting more homework assignments was also significantly associated with higher scores. therefore, we dichotomized manual scores by recoding scores of as to indicate that the response provided evidence of lateral reading with a correct conclusion about the trustworthiness of the online content; all other scores were recoded as . we then re-ran the model above with the dichotomized version of the dependent variable to see whether findings differed. for each posttest problem, our dependent variable indicated whether or not students received a score of , i.e., whether they read laterally and also drew the correct conclusion about the trustworthiness of the online content. we used a mixed-effects logistic regression model with a binomial logit link using the glmer function of the lme4 package (bates et al., ) in r (r core team, ; rstudio team, ); see table .
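Two mechanical steps of the modeling pipeline described above can be sketched in Python. The authors worked in R (clmm/glmer); this is an illustrative, dependency-free re-implementation, and the assumption that the rubric's top score is 4 on a 0-4 scale is inferred from its five described levels:

```python
import math

def dichotomize(rubric_score: int, top_score: int = 4) -> int:
    """Recode an ordinal rubric score into the binary outcome used in
    the logistic model: 1 when the student both read laterally and
    judged trustworthiness correctly (the top rubric level), else 0.
    The 0-4 coding is an assumption based on the five rubric levels."""
    return int(rubric_score == top_score)

def lrt_pvalue_df1(loglik_full: float, loglik_reduced: float) -> float:
    """Likelihood-ratio test for nested models differing by a single
    fixed effect, as in the per-effect model comparisons above.
    The statistic 2*(llf - llr) is referred to a chi-square with 1 df;
    for df = 1 the survival function reduces to erfc(sqrt(stat / 2)),
    which keeps this sketch standard-library only."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    return math.erfc(math.sqrt(stat / 2.0))
```

Dropping one predictor, refitting, and passing both log-likelihoods to `lrt_pvalue_df1` mirrors the "full model vs. model with the fixed effect excluded" comparisons reported in the tables; comparisons dropping multi-level factors such as instructor would need the general chi-square distribution with more degrees of freedom.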
table mixed-effects logistic regression model used to predict use of lateral reading and correct trustworthiness conclusion on each problem (n =  ). overall, the full model with all fixed effects and the random effect of student fit significantly better than the null model with only the random effect of student (χ²( ) =  . , p < . ). for each fixed effect, we again compared the fit of the full model to the fit of the same model with the fixed effect excluded; see table for model comparisons. all control variables except media knowledge at pretest significantly improved model fit, indicating that they were significant predictors of scoring , i.e., reading laterally and drawing a correct conclusion about trustworthiness. controlling for all other variables, students in the dpi sections were significantly more likely to receive a score of than students in the control sections. students who attempted more homework assignments were also significantly more likely to score . changes in self-reported lateral reading at posttest descriptive statistics for students’ self-reported use of lateral reading strategies at pretest and posttest are presented in table . at pretest, students in the control and dpi sections did not differ in the frequency with which they self-reported using lateral reading strategies when finding information online for school work, t( ) = – . , p = . . on average, students at pretest reported using lateral reading strategies between sometimes and often. table descriptive statistics for self-reported use of lateral reading strategies by time and condition (n =  ; ndpi =  , ncontrol =  ). to examine whether students who received the dpi curriculum were more likely to self-report use of lateral reading at posttest, as compared to controls, we conducted a  ×  repeated-measures anova with time (pretest vs. posttest) as a within-subject variable, condition (dpi vs.
control) as a between-subject variable, and mean self-reported use of lateral reading as the dependent variable. there was a significant main effect of time, f( , ) =  . , p = . , ηp  =  . , with students reporting higher use of lateral reading at posttest (m =  . , sd =  . ) than at pretest (m =  . , sd =  . ). there was also a significant main effect of condition, f( , ) =  . , p = . , ηp  =  . , with students in the dpi sections reporting higher use of lateral reading (m =  . , sd =  . ) than students in the control sections (m =  . , sd =  . ). the interaction of time and condition was not significant, f( , ) =  . , p = . , ηp  =  . . changes in use of and trust of wikipedia at posttest descriptive statistics for students’ use of and trust of wikipedia at pretest and posttest are presented in table . since we used single items with ordinal scales to measure these variables, we used the nonparametric wilcoxon–mann–whitney test to compare students’ use and trust of wikipedia across conditions at pretest and posttest (ucla statistical consulting group, n.d.). table percentage of students who indicated each response for use and trust of wikipedia by time and condition (n =  ; ndpi =  , ncontrol =  ). at pretest, students in dpi sections did not differ from students in control sections in their responses to the question “how often do you use wikipedia to check whether you can trust information on the internet?”, median =  (rarely) for both conditions, w =  . , p = . . however, at posttest, students in dpi sections reported using wikipedia more often to fact-check information (median =  , sometimes) as compared to controls (median =  , rarely), w =  . , p = . . at pretest, students in dpi and control sections did not differ in their responses to the question “to what extent do you agree with the statement that ‘people should trust information on wikipedia’?”, median =  (disagree) for both conditions, w =  , p = . .
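The rank-based comparison above rests on the Mann-Whitney U statistic. As a minimal sketch (the authors computed the test in R; in Python one would normally use scipy's mannwhitneyu, but the statistic itself reduces to pairwise comparisons, shown here with the standard library only):

```python
def mann_whitney_u(group_a, group_b):
    """U statistic for group_a: the number of (a, b) pairs in which a
    outranks b, with ties counted as half. Suitable for the ordinal
    single-item Likert responses described above. P-values (from the
    normal approximation or exact tables) are omitted in this sketch.
    """
    return sum(
        1.0 if a > b else 0.5 if a == b else 0.0
        for a in group_a
        for b in group_b
    )
```

When the two groups' distributions are similar, U is near half the number of pairs; a U far from that midpoint signals that one group's responses systematically outrank the other's, which is what the reported W statistics capture.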
at posttest, students in dpi sections tended to report a higher level of trusting information on wikipedia (median =  , no opinion) than students in the control sections (median =  , disagree), but the difference in trust was not significant, w =  . , p = . . individual differences in lateral reading for students in dpi sections to better understand individual differences in students’ responses to the dpi curriculum, we compared students who scored (i.e., used lateral reading and correctly assessed trustworthiness) on at least one problem at posttest (n =  or . % of students in dpi sections) with their peers who did not receive a score of on any of the lateral reading problems at posttest (n =  or . % of students in dpi sections). we first looked at group differences on whether or not students read laterally and drew the correct conclusion about the trustworthiness of the online content on at least one problem at pretest and on their self-reported use of lateral reading at pretest. groups did not differ in use of lateral reading on pretest problems or self-reported use of lateral reading at pretest. next, we examined whether groups differed in their general media literacy knowledge at pretest and their use and trust of wikipedia at pretest. there was no difference between groups in general media literacy knowledge (agreement and accuracy) at pretest or in their use of wikipedia at pretest. however, students in dpi sections who used lateral reading on at least one problem at posttest reported significantly lower trust of wikipedia at pretest (median =  , disagree) than students who failed to read laterally (median =  , no opinion, w =  , p = . ). lastly, we examined whether groups differed in the number of online homework assignments attempted. students in dpi sections who used lateral reading on at least one problem at posttest attempted more online homework assignments (m =  . , sd =  . ) than students who did not read laterally at posttest (m =  . , sd =  . 
, t( ) = – . , p = . ). discussion the current study examined the efficacy of the digital polarization initiative’s (dpi) curriculum to teach students fact-checking strategies used by professional fact-checkers. in particular, we examined whether students in sections that administered the curriculum showed greater use of lateral reading at posttest than “business-as-usual” controls. we also examined whether conditions differed in self-reported use of lateral reading and use and trust of wikipedia at posttest. additionally, to explore possible individual differences in student responses to the curriculum, we examined whether use of lateral reading to correctly assess the trustworthiness of online content at pretest, self-reported use of lateral reading at pretest, general media literacy knowledge at pretest, use of and trust of wikipedia at pretest, and number of online homework assignments attempted distinguished students who read laterally on at least one posttest problem from their classmates who did not read laterally at posttest. at posttest, students who received the dpi curriculum were more likely to read laterally and accurately assess the trustworthiness of online content, as compared to their peers in the control classes. notably, there were no differences at pretest, as students almost universally lacked the skills prior to receiving the dpi curriculum. these findings are in keeping with previous work by mcgrew et al. ( ), showing that targeted instruction in civic online reasoning (including lateral reading) can improve college students’ use of these skills. we also observed that the number of online assignments attempted was associated with use of lateral reading at posttest, with students in dpi sections who read laterally on at least one problem at posttest attempting more online homework assignments than students in dpi sections who failed to read laterally at posttest.
this correlation suggests that time devoted to practicing the skills was helpful in consolidating them. however, we cannot confirm that the homework was the critical factor, as students who were more diligent with their homework may also have had better in-class attendance and participation or better comprehension skills. students who put more time or effort into the homework assignments may also have provided more written justifications on the posttest problems that could be scored using the dpi rubric (bråten et al.,  ). while . % of students read laterally and accurately assessed at least one problem after receiving the dpi curriculum, students rarely received a score of on all four problems at posttest. this finding echoes previous research showing that, even when explicitly told that they can search online for information, adults, including college students, rarely do so (donovan & rapp, ). it is possible that students may have been more motivated to use lateral reading on certain problems based on their interest or how much they valued having accurate information on the topic (metzger, ; metzger & flanagin, ). it is also possible that, for problems that produced a strong emotional response, students may have struggled to “check their emotions” sufficiently to read laterally and draw a correct conclusion about the trustworthiness of the online content (berger & milkman, ). neither of these concerns would have emerged at pretest as students were almost uniformly unaware of lateral reading strategies. since the dpi curriculum was delivered in-class, students’ responsiveness to the dpi curriculum and their performance on the posttest may also have been affected by course-related factors. we observed an effect of instructors in the current study, which speaks to the importance of providing professional development and training for instructors teaching students lateral reading strategies.
another course-related factor that we could not account for was students’ attendance during class sessions when the curriculum was taught. moving delivery of the dpi curriculum to an online format, e.g., by incorporating the instruction into the online homework assignments, may help ensure fidelity of implementation of the curriculum and facilitate better tracking of student participation and effort. on average, students answered the majority ( . %) of general media literacy knowledge items correctly at pretest. while general media literacy knowledge at pretest significantly predicted scores on the – scale at posttest, it was not a significant predictor of the dichotomized score distinguishing students who did and did not receive a score of (i.e., those who did vs. did not use lateral reading to draw correct conclusions about the trustworthiness of the online content). also, notably, students in dpi sections who received a score of on at least one problem at posttest did not differ in their media literacy knowledge from students in dpi sections who never scored . these findings suggest that understanding of persuasive intent and bias in media messages may have helped students recognize the need to investigate or assess the credibility of the information, but it was not sufficient to motivate them to use the fact-checking strategies to draw the correct conclusions. traditional media literacy instruction may also be too focused on the media message, rather than on the media environment (cohen, ). students may benefit from instruction that fosters understanding of how their online behaviors and features of the internet (e.g., use of algorithms to personalize search results) shape the specific media messages that appear in their information feeds. the need for additional instruction about the online information environment is also reflected in recent findings from jones-jang et al. 
( ) documenting a significant association between information literacy knowledge (i.e., knowledge of how to find and evaluate online information) and the ability to identify fake news. in addition to examining students’ performance on the lateral reading problems, we also asked students to self-report their use of lateral reading (e.g., by checking information with another source or finding out more about the author of the information). at pretest, students in both conditions reported using lateral reading strategies between sometimes and often, even though very few students in either condition demonstrated lateral reading on any of the pretest problems. although students in the dpi sections self-reported greater use of lateral reading as compared to controls, the dpi students who read at least one problem laterally at posttest did not differ in their self-reported use of lateral reading strategies from dpi students who failed to read laterally at posttest. these findings align with the dissociation between students’ perceived and actual use of lateral reading skills observed in prior studies of students’ information evaluation strategies (brodsky et al., ; hargittai et al., ; list & alexander, ). the observed dissociation may be due to students’ lack of awareness and monitoring of the strategies they use when evaluating online information (kuhn, ). instruction should aim to foster students’ metastrategic awareness, as this may improve both the accuracy of their self-reported use of lateral reading and their actual use of lateral reading. several other explanations for this dissociation are also possible. some students may have accurately reported their use of lateral reading at posttest, but did not receive any scores of on the lateral reading problems because their trustworthiness assessments were all incorrect. 
alternatively, list and alexander ( ) suggest that the dissociation between students’ self-reported and observed behaviors may be due to self-report measures reflecting students’ self-efficacy and attitudes toward these behaviors or their prior success in evaluating the credibility of information, rather than their actual engagement in the target behaviors. overall, although performance-based measures may be more time-consuming and resource-intensive than self-report assessments (hobbs, ; list & alexander, ; mcgrew et al., ), they are necessary for gaining insight into students’ actual fact-checking habits. despite the emphasis of the dpi curriculum on using wikipedia to research sources and its popularity among professional fact-checkers (wineburg & mcgrew, ), students in the dpi sections only reported modestly higher wikipedia use at posttest as compared to controls, and no difference in trust. difficulties with changing students’ use and trust of wikipedia may reflect influences of prior experiences with secondary school teachers, librarians, and college instructors who considered wikipedia to be an unreliable source (garrison, ; konieczny, ; polk et al., ). while mcgrew et al. ( ) argue that students should be taught how to use wikipedia “wisely,” for example, by using the references in a wikipedia article as a jumping-off point for their lateral reading, this approach may require instructors teaching fact-checking skills to change their own perceptions of wikipedia and familiarize themselves with how wikipedia works. in future implementations, the dpi curriculum may benefit from incorporating strategies for conceptual change (lucariello & naff, ) to overcome instructors’ and students’ misconceptions about wikipedia. 
Notably, our analysis of individual differences in response to the curriculum indicated that DPI students who demonstrated lateral reading at posttest were less trusting of information on Wikipedia at pretest than their peers who failed to use lateral reading at posttest. This unexpected result suggests that the lateral reading strategies were more memorable for DPI students who initially held more negative views about trusting information on Wikipedia, possibly because using Wikipedia as part of the DPI curriculum induced cognitive conflict, which can foster conceptual change (Lucariello & Naff, ). Looking ahead, additional research is needed to parse out individual differences in students' responses to the DPI curriculum. Over a third of students did not read laterally on any of the problems at posttest, but this was unrelated to their use of lateral reading to correctly assess the trustworthiness of online content at pretest, their self-reported lateral reading at pretest, or their self-reported use of Wikipedia at pretest to check whether information should be trusted. Given prior work on the roles of developmental and demographic variables, information literacy training, cognitive styles, and academic performance in children's and adolescents' awareness and practice of online information verification (Metzger et al., ), it may be fruitful to examine the role of these variables in predicting students' responsiveness to lateral reading instruction. In addition, students' reading comprehension and vocabulary knowledge should be taken into consideration, as language abilities may affect students' success in verifying online content (Brodsky et al., ). Future research also needs to examine the extent to which gains in lateral reading are maintained over time and whether students use the strategies for fact-checking information outside of the classroom context.
Conclusion

The current study, conducted with a diverse sample of college students, examined the efficacy of the DPI curriculum in teaching students to fact-check online information by reading laterally. Compared to another study of college students' online civic reasoning (McGrew et al., ), we used a larger sample and a more intensive curriculum to teach students these skills. Our findings indicate that the DPI curriculum increased students' use of lateral reading to draw accurate assessments of the trustworthiness of online information. Our findings also indicate the need for performance-based assessments of information verification skills, as we observed that students overestimated the extent to which they actually engaged in lateral reading. The modest gains that students made in Wikipedia use at posttest highlight an important challenge in teaching lateral reading: college students as well as instructors may hold misconceptions about the reliability of Wikipedia and ways to use it as an information source (Garrison, ; Konieczny, ). Lastly, the lack of relation between general media literacy knowledge and use of lateral reading to draw correct conclusions about the trustworthiness of online information suggests that understanding and skepticism of media messages alone are not sufficient to motivate fact-checking. Instead, teaching lateral reading as part of general education courses can help prepare students for navigating today's complex media landscape by offering them a new set of skills.

Availability of data and materials

The R Markdown file, analysis code, and instructional materials used in the current study are available in the Open Science Framework repository at https://osf.io/ rbkd/.

Notes

1. https://osf.io/ rbkd/.
2. Only . % of the responses for the sourcing evidence problem in Set B were scored, due to missing data or responses stating that the YouTube video was unavailable.
3. Thirty-nine additional responses had clerical errors in the manual scoring that were corrected prior to reliability calculations. There were also responses that were either missing data or that stated that the YouTube video was unavailable. These responses are not included in reliability calculations.
4. Given that we expected components to be correlated, we used a direct oblimin rotation with Kaiser normalization (Costello & Osborne, ). For the four components with eigenvalues greater than . , seven non-reverse-scored items clustered on the first component, four reverse-scored items clustered on the second component, two non-reverse-scored items clustered on the third component, and one reverse-scored item clustered on the fourth component. Three items were below our criterion of . for the minimum factor loading (Stevens, , as cited in Field, ).
5. https://osf.io/ rbkd/.

References

Amazeen, M. A. ( ). Journalistic interventions: The structural factors affecting the global emergence of fact-checking. Journalism, ( ), – . https://doi.org/ . / .
American Democracy Project (n.d.). Digital Polarization Initiative. American Association of State Colleges and Universities. Retrieved June , , from https://www.aascu.org/academicaffairs/adp/digipo/.
Ashley, S., Maksl, A., & Craft, S. ( ). Developing a news media literacy scale. Journalism & Mass Communication Educator, ( ), – . https://doi.org/ . / .
Association of College & Research Libraries. ( ). Framework for information literacy for higher education. Chicago: Association of College & Research Libraries. Retrieved March , , from http://www.ala.org/acrl/files/issues/infolit/framework.pdf.
Bates, D., Maechler, M., Bolker, B., & Walker, S. ( ). lme : Linear mixed-effects models using Eigen and S . CRAN R package, ( ), – .
Berger, J., & Milkman, K. L. ( ). What makes online content viral? Journal of Marketing Research, ( ), – . https://doi.org/ . /jmr. . .
Blakeslee, S. ( ). The CRAAP test. LOEX Quarterly, ( ), – .
Bråten, I., Brante, E. W., & Strømsø, H. I. ( ). What really matters: The role of behavioural engagement in multiple document literacy tasks. Journal of Research in Reading, ( ), – . https://doi.org/ . / - . .
Brante, E. W., & Strømsø, H. I. ( ). Sourcing in text comprehension: A review of interventions targeting sourcing skills. Educational Psychology Review, , – . https://doi.org/ . /s - - - .
Brodsky, J. E., Barshaba, C. N., Lodhi, A. K., & Brooks, P. J. ( ). Dissociations between college students' media literacy knowledge and fact-checking skills [Paper session]. AERA Annual Meeting, San Francisco, CA. Retrieved March , , from http://tinyurl.com/saedj t (conference canceled).
Caulfield, M. ( a). Web literacy for student fact-checkers...and other people who care about facts. Pressbooks. Retrieved March , , from https://webliteracy.pressbooks.com/.
Caulfield, M. ( b). Four moves: Adventures in fact-checking for students. Retrieved March , , from https://fourmoves.blog/.
Caulfield, M. ( a). Greyhound border patrol. Four moves: Adventures in fact-checking for students. Retrieved March , , from https://fourmoves.blog/ / / /greyhound-border-patrol/.
Caulfield, M. ( b). Detained by ICE? Four moves: Adventures in fact-checking for students. Retrieved March , , from https://fourmoves.blog/ / / /detained-by-ice/.
Caulfield, M. ( c). Immigration crime wave? Four moves: Adventures in fact-checking for students. Retrieved March , , from https://fourmoves.blog/ / / /immigration-crime-wave/.
Chen, S., & Chaiken, S. ( ). The heuristic-systematic model in its broader context. In S. Chaiken & Y. Trope (Eds.), Dual-process theories in social psychology (pp. – ). Guilford Press.
Christensen, R. H. B. ( ). ordinal—Regression models for ordinal data. R package version . - .
Retrieved March , , from https://cran.r-project.org/package=ordinal.
Cohen, J. ( ). Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, ( ), – . https://doi.org/ . /h .
Cohen, J. N. ( ). Exploring echo-systems: How algorithms shape immersive media environments. Journal of Media Literacy Education, ( ), – . https://doi.org/ . /jmle- - - - .
Costello, A. B., & Osborne, J. ( ). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research & Evaluation, , – .
Donovan, A. M., & Rapp, D. N. ( ). Look it up: Online search reduces the problematic effects of exposures to inaccuracies. Memory and Cognition, , – . https://doi.org/ . /s - - -z.
Faix, A., & Fyn, A. ( ). Framing fake news: Misinformation and the ACRL Framework. portal: Libraries and the Academy, ( ), – . https://doi.org/ . /pla. . .
Field, A. ( ). Discovering statistics using SPSS (and sex and drugs and rock'n'roll) ( rd ed.). Sage.
Garrison, J. C. ( ). Instructor and peer influence on college student use and perceptions of Wikipedia. The Electronic Library, ( ), – . https://doi.org/ . /el- - - .
Graves, L. ( ). Anatomy of a fact check: Objective practice and the contested epistemology of fact checking. Communication, Culture and Critique, ( ), – . https://doi.org/ . /cccr. .
Graves, L., & Amazeen, M. ( ). Fact-checking as idea and practice in journalism. Oxford University Press. https://doi.org/ . /acrefore/ . . .
Hargittai, E., Fullerton, L., Menchen-Trevino, E., & Thomas, K. Y. ( ). Trust online: Young adults' evaluation of web content. International Journal of Communication, , – .
Head, A. J., & Eisenberg, M. B. ( ).
How today's college students use Wikipedia for course-related research. First Monday. https://doi.org/ . /fm.v i . .
Hobbs, R. ( ). Digital and media literacy: A plan of action. The Aspen Institute. Retrieved March , , from https://eric.ed.gov/?id=ed .
Hobbs, R. ( ). Measuring the digital and media literacy competencies of children and teens. In F. C. Blumberg & P. J. Brooks (Eds.), Cognitive development in digital contexts (pp. – ). Elsevier. https://doi.org/ . /b - - - - . - .
Hobbs, R., & Jensen, A. ( ). The past, present, and future of media literacy education. Journal of Media Literacy Education, ( ), – .
Jeong, S. H., Cho, H., & Hwang, Y. ( ). Media literacy interventions: A meta-analytic review. Journal of Communication, ( ), – . https://doi.org/ . /j. - . . .x.
Jones-Jang, S. M., Mortensen, T., & Liu, J. ( ). Does media literacy help identification of fake news? Information literacy helps, but other literacies don't. American Behavioral Scientist. https://doi.org/ . / .
Koltay, T. ( ). The media and the literacies: Media literacy, information literacy, digital literacy. Media, Culture & Society, ( ), – . https://doi.org/ . / .
Konieczny, P. ( ). Teaching with Wikipedia in a st-century classroom: Perceptions of Wikipedia and its educational benefits. Journal of the Association for Information Science and Technology, ( ), – . https://doi.org/ . /asi. .
Kuhn, D. ( ). A developmental model of critical thinking. Educational Researcher, ( ), – . https://doi.org/ . / x .
List, A., & Alexander, P. A. ( ). Corroborating students' self-reports of source evaluation. Behaviour & Information Technology, ( ), – . https://doi.org/ . / x. . .
List, A., Grossnickle, E. M., & Alexander, P. A. ( ).
Undergraduate students' justifications for source selection in a digital academic context. Journal of Educational Computing Research, ( ), – . https://doi.org/ . / .
Lucariello, J., & Naff, D. ( ). How do I get my students over their alternative conceptions (misconceptions) for learning? American Psychological Association. Retrieved March , , from http://www.apa.org/education/k /misconceptions.
Maksl, A., Craft, S., Ashley, S., & Miller, D. ( ). The usefulness of a news media literacy measure in evaluating a news literacy curriculum. Journalism & Mass Communication Educator, ( ), – . https://doi.org/ . / .
McGrew, S., Breakstone, J., Ortega, T., Smith, M., & Wineburg, S. ( ). Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory & Research in Social Education, ( ), – . https://doi.org/ . / . . .
McGrew, S., Ortega, T., Breakstone, S., & Wineburg, S. ( ). The challenge that's bigger than fake news: Civic reasoning in a social media environment. American Educator, ( ), – .
McGrew, S., Smith, M., Breakstone, J., Ortega, T., & Wineburg, S. ( ). Improving university students' web savvy: An intervention study. British Journal of Educational Psychology, ( ), – . https://doi.org/ . /bjep. .
Meola, M. ( ). Chucking the checklist: A contextual approach to teaching undergraduates web-site evaluation. portal: Libraries and the Academy, ( ), – . https://doi.org/ . /pla. . .
Metzger, M. J. ( ). Making sense of credibility on the web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, , – . https://doi.org/ . /asi. .
Metzger, M. J., & Flanagin, A. J. ( ). Psychological approaches to credibility assessment online. In S. S.
Sundar (Ed.), The handbook of the psychology of communication technology (pp. – ). Wiley. https://doi.org/ . / .ch .
Metzger, M. J., Flanagin, A. J., Markov, A., Grossman, R., & Bulger, M. ( ). Believing the unbelievable: Understanding young people's information literacy beliefs and practices in the United States. Journal of Children and Media, ( ), – . https://doi.org/ . / . . .
Musgrove, A. T., Powers, J. R., Rebar, L. C., & Musgrove, J. G. ( ). Real or fake? Resources for teaching college students how to identify fake news. College & Undergraduate Libraries, ( ), – . https://doi.org/ . / . . .
Pennycook, G., Cannon, T. D., & Rand, D. G. ( ). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, ( ), – . https://doi.org/ . /xge .
Pew Research Center. ( a). Internet/broadband fact sheet [Fact sheet]. Retrieved March , , from https://www.pewresearch.org/internet/fact-sheet/internet-broadband/#who-uses-the-internet.
Pew Research Center. ( b). Social media fact sheet [Fact sheet]. Retrieved March , , from https://www.pewresearch.org/internet/fact-sheet/social-media/.
Polk, T., Johnston, M. P., & Evers, S. ( ). Wikipedia use in research: Perceptions in secondary schools. TechTrends, , – . https://doi.org/ . /s - - - .
Powers, K. L., Brodsky, J. E., Blumberg, F. C., & Brooks, P. J. ( ). Creating developmentally-appropriate measures of media literacy for adolescents. In Proceedings of the Technology, Mind, and Society Conference (TechMindSociety ' ) (pp. – ). Association for Computing Machinery. https://doi.org/ . / .
R Core Team. ( ). R: A language and environment for statistical computing. R Foundation for Statistical Computing. Retrieved March , , from https://www.r-project.org.
RStudio Team. ( ). RStudio: Integrated development for R. Boston, MA: RStudio, Inc.
Retrieved March , , from http://www.rstudio.com/.
Stanford History Education Group (n.d.). Civic online reasoning. https://cor.stanford.edu/.
UCLA Statistical Consulting Group (n.d.). Choosing the correct statistical test in SAS, SPSS, and R. Retrieved December , , from https://stats.idre.ucla.edu/other/mult-pkg/whatstat/.
Vosoughi, S., Roy, D., & Aral, S. ( ). The spread of true and false news online. Science, ( ), – . https://doi.org/ . /science.aap .
Wiley, J., Goldman, S. R., Graesser, A. C., Sanchez, C. A., Ash, I. K., & Hemmerich, J. A. ( ). Source evaluation, comprehension, and learning in Internet science inquiry tasks. American Educational Research Journal, ( ), – . https://doi.org/ . / .
Wineburg, S., & McGrew, S. ( ). Lateral reading: Reading less and learning more when evaluating digital information (Stanford History Education Group Working Paper No. -A ). Retrieved March , , from https://ssrn.com/abstract= .
Wineburg, S., & McGrew, S. ( ). Lateral reading and the nature of expertise: Reading less and learning more when evaluating digital information (Stanford Graduate School of Education Open Archive). Retrieved March , , from https://searchworks.stanford.edu/view/yk ht .
Wineburg, S., Breakstone, J., Ziv, N., & Smith, M. ( ). Educating for misunderstanding: How approaches to teaching digital literacy make students susceptible to scammers, rogues, bad actors, and hate mongers (Stanford History Education Group Working Paper No. A- ). Retrieved March , , from https://purl.stanford.edu/mf bt .

Acknowledgements

We thank Jay Verkuilen of the Graduate Center, City University of New York, for statistical consultation. Preliminary results were presented at the APS-STP Teaching Institute at the Annual Convention of the Association for Psychological Science held in May and at the American Psychological Association's Technology, Mind, and Society conference held in October .
Funding

The authors have no sources of funding to declare.

Author information

Affiliations

The Graduate Center, CUNY, th Ave, New York, NY, USA: Jessica E. Brodsky & Patricia J. Brooks
The College of Staten Island, CUNY, Victory Blvd, Staten Island, NY, USA: Jessica E. Brodsky, Patricia J. Brooks, Donna Scimeca, Peter Galati, Michael Batson, Robert Grosso, Michael Matthews & Victor Miller
Lehman College, CUNY, Bedford Park Boulevard West, Bronx, NY, USA: Ralitsa Todorova
Washington State University Vancouver, NE Salmon Creek Ave, Vancouver, WA, USA: Michael Caulfield

Authors

Jessica E. Brodsky, Patricia J. Brooks, Donna Scimeca, Ralitsa Todorova, Peter Galati, Michael Batson, Robert Grosso, Michael Matthews, Victor Miller, and Michael Caulfield.

Contributions

JEB and PJB prepared the online homework assignments, analyzed the data, and prepared the manuscript. JEB and RT coded students' open responses for use of lateral reading. DS, PG, MB, RG, MM, and VM contributed to the design of the online homework assignments and implemented the DPI curriculum in their course sections.
MC developed the in-class instructional materials and the lateral reading problems. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jessica E. Brodsky.

Ethics declarations

Ethics approval and consent to participate

The research protocol was classified as exempt by the university's Institutional Review Board.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Percentage of students with accurate media literacy knowledge by item and condition at pretest (N = ; nDPI = , nControl = ). For each item, values are agreement M (SD) for the DPI and control conditions, followed by accuracy % (SD) for the DPI and control conditions.

A news story that has good pictures is less likely to get published. (reverse-scored) | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
People who advertise think very carefully about the people they want to buy their product. | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
When you see something on the Internet, the creator is trying to convince you to agree with their point of view. | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
People are influenced by news whether they realize it or not. | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
Two people might see the same news story and get different information from it. | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
Photographs your friends post on social media are an accurate representation of what is going on in their life. (reverse-scored) | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
People pay less attention to news that fits with their beliefs than news that doesn't. (reverse-scored) | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
Advertisements usually leave out a lot of important information. | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
News makers select images and music to influence what people think. | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
Sending a document or picture to one friend on the Internet means no one else will ever see it. (reverse-scored) | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
Individuals can find news sources that reflect their own political values. | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
A reporter's job is to tell the truth.(a) | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
News companies choose stories based on what will attract the biggest audience. | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
When you see something on the Internet you should always believe that it is true. (reverse-scored) | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
Two people may see the same movie or TV show and get very different ideas about it. | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
News coverage of a political candidate does not influence people's opinions. (reverse-scored) | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
People are influenced by advertisements, whether they realize it or not. | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
Movies and TV shows don't usually show life like it really is. | . ( . ) | . ( . ) | . % ( . ) | . % ( . )
Overall mean ( items) | . ( . ) | . ( . ) | . % ( . ) | . % ( . )

All agreement scores are on a scale of  = strongly disagree to  = strongly agree. Items were reverse-scored prior to calculating overall means and standard deviations. (a) Item removed due to low item-rest correlation.

Rights and permissions

Open Access. This article is licensed under a Creative Commons Attribution . International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/ . /.

Cite this article

Brodsky, J. E., Brooks, P. J., Scimeca, D., et al. Improving college students' fact-checking strategies through lateral reading instruction in a general education civics course. Cognitive Research: Principles and Implications ( ). https://doi.org/ . /s - - -

Received: June . Accepted: March . Published: March .

Keywords: fact-checking instruction, lateral reading, media literacy, Wikipedia, college students.

Etsu Inagaki Sugimoto

From Wikipedia, the free encyclopedia.
A photograph of Sugimoto from A Daughter of the Samurai.

Etsuko Sugimoto (杉本 鉞子, Sugimoto Etsuko; – June , ), also known as Etsu Inagaki Sugimoto, was a Japanese American autobiographer and novelist.[ ] She was born in Nagaoka in Echigo Province (which means "behind the mountains")[ ] in Japan, now part of Niigata Prefecture. Her father had once been a high-ranking samurai official in Nagaoka, but with the breakdown of the feudal system shortly before her birth, her family's economic situation took a turn for the worse. Although originally destined to be a priestess, she became engaged, through an arranged marriage, to a Japanese merchant living in Cincinnati, Ohio. Etsu attended a Methodist school in Tokyo in preparation for her life in the USA, and became a Christian. In  she journeyed to the USA, where she married her fiancé and became the mother of two daughters. After her husband's death she returned to Japan, but later went back to the United States to complete her daughters' education there. Later she lived in New York City, where she turned to literature and taught Japanese language, culture, and history at Columbia University. She also wrote for newspapers and magazines. She died in .

Works

A Daughter of the Samurai ( )
With Taro and Hana in Japan (in cooperation with Nancy Virginia Austen, – )
A Daughter of the Narikin ( )
In Memoriam: Florence Mills Wilson ( )
A Daughter of the Nohfu ( )
Grandmother O Kyo ( )
But the Ships Are Sailing ( , by Etsu's daughter Chiyono Sugimoto Kiyooka; the work contains biographical details of the last years of Etsu Sugimoto's life)

References

^ Huang, Guiyou ( ). Asian American autobiographers: A bio-bibliographical critical sourcebook. Greenwood. p. . ISBN .
^ Sugimoto, Etsu ( ). A Daughter of the Samurai. p. .
ISBN  – via Wikisource.
Tether criminally investigated by Justice Department — When the Music Stops podcast

Attack of the Foot Blockchain: blockchain and cryptocurrency news and analysis by David Gerard

th July, by David Gerard

Number go up! Because there's trouble at Tether. Specifically:

[Bloomberg] A U.S.
probe into Tether is homing in on whether executives behind the digital token committed bank fraud, a potential criminal case … The Justice Department investigation is focused on conduct that occurred years ago, when Tether was in its more nascent stages. Specifically, federal prosecutors are scrutinizing whether Tether concealed from banks that transactions were linked to crypto, said three people with direct knowledge of the matter who asked not to be named because the probe is confidential.

That's the entire new information in the story. We don't know precisely what "years ago" means here — but I'd be surprised if the New York Attorney General didn't helpfully supply a pile of information from their recently concluded investigation. Remember that Bitfinex/Tether were lying to their banks all through and , with banks kicking them off as soon as they found out their customer was iFinex.

This week's "number go up" happened several hours before the report broke — likely when the Bloomberg reporter contacted Tether for comment. BTC/USD futures on Binance spiked to $ , , and the BTC/USD price on Coinbase spiked at $ , shortly after. Here's the one-minute candles on Coinbase BTC/USD around : UTC ( am BST on this chart) on July — the price went up $ , in three minutes. You've never seen something this majestically organic.

Janet Yellen, the Secretary of the Treasury, met on July with the presidential oh-sh*t working group of regulators to talk about "stablecoins." The meeting was closed-door, but a report has leaked. They're not happy about Libra/Diem-style plans, or about Tether:

[Bloomberg] Acting Comptroller of the Currency Michael Hsu said regulators are scrutinizing Tether's stockpile of commercial paper to see whether it fulfills the company's pledge that each token is backed by the equivalent of one U.S. dollar.
Amy Castor wrote up the present saga for her blog [Amy Castor], and we both went on Aviv Milner's podcast When the Music Stops to talk about Tether's new round of troubles. [Anchor.fm]
Library Tech Talk: technology innovations and project updates from the U-M Library I.T. Division
Digital Collections Completed July–June
Digital Content & Collections (DCC) relies on content and subject experts to bring us new digital collections. From July to June, our digital collections received millions of views. During the pandemic, when there was an increased need for digital resources, usage of the digital collections jumped still higher. Thank you to the many people, too numerous to reasonably list here, who are involved not just in the...
By Lauren Havens

Library IT Services Portfolio
Academic library service portfolios are mostly a mix of big to small strategic initiatives and tactical projects. Systems developed in the past can become a durable bedrock of workflows and services around the library, remaining relevant and needed for five, ten, and sometimes as long as twenty years. There is, of course, never enough time and resources to do everything. The challenge faced by library IT divisions is to balance the tension of sustaining these legacy systems while continuing to...
By Nabeela Jaffer

Keys to a Dazzling Library Website Redesign
The U-M Library launched a completely new primary website in July after years of work. The redesign project team focused on building a strong team, internal communication, content strategy, and practicing needs-informed design and development to make the project a success.
By Heidi Steiner Burkhardt

Sweet Sixteen: Digital Collections Completed July–June
Digital Content & Collections (DCC) relies on content and subject experts to bring us new digital collections. This year, sixteen digital collections were created or significantly enhanced. Here you will find links to videos and articles by the subject experts, speaking in their own words about the digital collections they were involved in and why they found it so important to engage in this work with us. Thank you to all of the people involved in each of these digital collections!
By Lauren Havens

Adding Ordered Metadata Fields to Samvera Hyrax
How to add ordered metadata fields in Samvera Hyrax. Includes example code and links to actual code.
By Fritz Freiheit

Sinking Our Teeth into Metadata Improvement
Like many attempts at revisiting older materials, working with a couple dozen volumes of dental pamphlets started very simply but ended up being an interesting opportunity to explore the challenges of making the diverse range of materials held in libraries accessible to patrons in a digital environment. And while improving metadata may not sound glamorous, having sufficient metadata for users to be able to find what they are looking for is essential to the utility of digital libraries.
By Jackson Huang

Collaboration and Generosity Provide the Missing Issue of The American Jewess
What started with a bit of wondering and conversation within our unit of the library led to my reaching out to Princeton University with a request, but no expectations of having that request fulfilled. Individuals at Princeton, however, considered the request and agreed to provide us with the single issue of The American Jewess that we needed to complete the full run of the periodical within our digital collection. Especially in these stressful times, we are delighted to bring you a positive...
By Lauren Havens

Man and Maid
From Wikipedia, the free encyclopedia

[Film still: Suzette (Renée Adorée) makes the tedious hours of the wounded Sir Nicholas Thormonde (Lew Cody) seem less monotonous.]

Man and Maid is a lost[1] 1925 drama film directed by Victor Schertzinger, based on a novel by Elinor Glyn. The film stars Lew Cody, Renée Adorée and Harriet Hammond.[2][3] A silent film with English intertitles, it was distributed by Metro-Goldwyn-Mayer, with cinematography by Chester A. Lyons.

Plot
Boulevardier Sir Nicholas Thormonde (Lew Cody) has to choose between his mistress Suzette (Renée Adorée) and his virtuous secretary Alathea (Harriet Hammond) in wartime Paris.

Cast
Lew Cody as Sir Nicholas Thormonde
Renée Adorée as Suzette
Harriet Hammond as Alathea Bulteel
Paulette Duval as Coralie
Alec B. Francis as Burton
Crauford Kent as Col. George Harcourt
David Mir as Maurice
Jacqueline Gadsden as Lady Hilda Bulteel
Winston Miller as Little Bobby
Jane Mercer as Little Hilda
Irving Hartley as Atwood Chester

References
1. The Library of Congress American Silent Feature Film Survival Catalog: Man and Maid
2. Man and Maid at silentera.com
3. The AFI Catalog of Feature Films: Man and Maid
External links
Man and Maid at IMDb
Man and Maid review at AllMovie Guide
Man and Maid at the TCM Movie Database
The origin of 'number go up' in Bitcoin culture
By David Gerard

This post is ridiculously popular, for unclear reasons. Please check out the news blog, buy my book — it got good reviews! — or sponsor the site! You can also sign up (for free) at the "get blog posts by email" box at your right!

Bitcoin believers — often the same Bitcoin believers — will say whatever they think is good news for Bitcoin at that moment, from anarcho-capitalism and replacing the international bankers to stablecoins and institutional investors. Because "number go up" is the only consistent Bitcoin ideology.

The phrase itself — without a "the," for ungrammatical effect — has long been used wherever Goodhart's law applies: "when a measure becomes a target, it ceases to be a good measure." But who first applied "number go up" to cryptocurrency? This came up in a Twitter discussion today, when it was mentioned by Chris DeRose. So I went looking.
The earliest "number go up" I've found in crypto discussion is from SomethingAwful — not the Buttcoin thread, but one of the other SomethingAwful crypto threads at the time — in a comment by user Blade Runner in October 2017 (archive), as the crypto bubble was in full swing:

"It's legitimately amusing to see a ham shoe shine boy incessantly screeching with no knowledge of how buttcoin works beyond 'number go up, me could have had big number, goon bad' … there are many reasons that it's a terrible investment, and these have been explained over and over again, and yet the man continues to screech while pointing at an imaginary number inflated by ridiculous amounts of margin trading on an exchange that can't be withdrawn from"

It was promptly adopted both on that thread and on the main Buttcoin thread. Particularly by me.

The earliest crypto Twitter "number go up" without a "the" that I can find is from @vexingcrypto, talking about their Waltoncoins in November 2017. Though it wasn't in the present sense:

"I just locked in an order for $WTC on #Binance and I'm not changing it as long as $BTC is going up. Lock your number now! Change order when $BTC drops then your shares for same number go up. We will see what happens. #Waltonchain #Bitcoin" — Vexing Crypto 💦 (@vexingcrypto), November 2017

The first usage I can find on Twitter in the present sense seems to be from me, in December 2017, about proof-of-work crypto mining:

"PoW is a kludge, a terrible one. It was literally never an actually good idea. And Bitcoin had substantially recentralised after five years anyway. Now it's just a complete waste. And as other coins have shown, nobody actually cares about 51% as long as number go up." — David Gerard (@davidgerard), December 2017

@Buttcoin used it the next day, about IOTA:

"because number go up look,, i don't think you,,, really understand,,,, the blockchain,,,,, https://t.co/dsxfdengng" — Buttcoin (@buttcoin), December 2017

Since then, the number of "number go up" continue only go up.
Bullish! To the moon!!

Comments on "The origin of 'number go up' in Bitcoin culture"

Ranulph Flambard says: Numbers go up is good. Numbers go down is bad. Simples.

Satoshi Nakamoto says: Fly away goon, all the way back to GBS.

David Gerard says: As a cultured denizen of PYF, C-SPAM and YOSPOS, I, er, never mind.

Chris DeRose says: Love it. The unironic adoption by the Bitcoin community of this mantra is hilarious.

David Gerard says: Numeralis ascendus!

flibbertygibbet says: "This post is ridiculously popular, for unclear reasons." It's the top Google result for "number go up". You somehow stumbled into SEO.
Team America: World Police
From Wikipedia, the free encyclopedia

Team America: World Police is a 2004 black comedy action film directed by Trey Parker and written by Parker, Matt Stone and Pam Brady, all of whom are also known for the popular animated television series South Park. Produced by Scott Rudin, Parker and Stone, with cinematography by Bill Pope, editing by Thomas M. Vogt and music by Harry Gregson-Williams, it was distributed by Paramount Pictures as a German-American co-production. Starring Parker, Stone, Kristen Miller, Masasa, Daran Norris, Phil Hendrie, Maurice LaMarche, Chelsea Marguerite, Jeremy Shada and Fred Tatasciore, the film is a satire of big-budget action films and their associated clichés and stereotypes, with particular humorous emphasis on the global implications of the politics of the United States.
The title is derived from domestic and international political criticisms that the foreign policy of the United States frequently and unilaterally tries to "police the world". Team America follows the fictional titular paramilitary police force and their recruitment of a Broadway actor in an attempt to save the world from North Korean dictator Kim Jong-il, who is leading a conspiracy of Islamic terrorists and liberal Hollywood actors in a bid for global destruction.

Instead of live actors, the film uses a style of puppetry based on Supermarionation, known for its use in the British TV series Thunderbirds, although Stone and Parker were not fans of that show. The duo worked on the script with former South Park writer Brady for nearly two years. The film had a troubled time in production, with various problems regarding the marionettes, as well as the scheduling extremes of having the film come out in time. In addition, the filmmakers fought with the Motion Picture Association of America, which returned the film over nine times with an NC-17 rating due to an explicit sex scene. The film was cut by less than a minute and rated R for "graphic crude and sexual humor, violent images and strong language – all involving puppets".

The film premiered at the Denver Film Festival on October 14, 2004, and was released theatrically in the United States the following day, October 15, 2004, by Paramount Pictures. It has received mostly positive reviews from critics and grossed over $50 million worldwide.
Plot
Team America, a paramilitary counter-terrorist police force, eliminates a gang of terrorists in Paris, accidentally destroying the Eiffel Tower, Arc de Triomphe, and the Louvre in the process. The team includes Lisa, an idealistic psychologist; her love interest Carson; Sarah, a psychic; Joe, a jock who is in love with Sarah; and Chris, a martial arts expert who harbors a phobia towards actors. Carson proposes to Lisa, but a terrorist shoots him dead as he is doing it.

Team America leader Spottswoode brings Broadway actor Gary Johnston to Team America's base in Mount Rushmore and asks him to use his acting skills to infiltrate a terrorist cell. Unbeknownst to the team, North Korean dictator and gangster Kim Jong-il is supplying international terrorists with weapons of mass destruction. Gary infiltrates a terrorist group in Cairo. The team is discovered and a chase ensues, with the team killing the terrorists. However, the city is left in ruins, drawing criticism from the Film Actors Guild (often shown as "F.A.G." in the film), a union of liberal Hollywood actors led by Alec Baldwin.

At the base, Gary tells Lisa that, as a child, his acting talent caused his brother to be savagely killed by gorillas. While the two grow close and have sex, terrorists blow up the Panama Canal in retaliation for the Cairo operation, which the Film Actors Guild blames on Team America; Kim, meanwhile, chastises the terrorists for detonating one bomb too early. Gary, feeling his acting talents have again resulted in innocent people dying, resigns from Team America. The remaining members depart for the Middle East, but are defeated and captured by North Korean forces while Michael Moore blows up Team America's base in a suicide attack. In North Korea, Kim invites the Film Actors Guild and world leaders to a peace ceremony, planning to detonate a series of bombs around the globe while they are distracted.
Succumbing to depression, Gary is reminded of his responsibility by a speech from a drunken tramp. Returning to the team's base, he finds Spottswoode has survived the bomb attack. After regaining Spottswoode's trust by giving him a blowjob and undergoing one-day training, Gary goes to North Korea, where he uses his acting skills to infiltrate the base and free the team, although Lisa is held hostage by Kim. The team members are confronted by members of the Film Actors Guild and engage them in a fight in which most of the actors are killed. After Gary uses his acting skills to save Chris from Susan Sarandon, Chris confesses to Gary that the reason he dislikes actors is that he was gang-raped by the cast of the musical Cats when he was young. The team crashes the peace ceremony and Gary goes on stage, where he convinces the world's leaders to unite, using the tramp's speech. Alec Baldwin cannot counter Gary's arguments, so Kim betrays and kills Baldwin, but is kicked over a balcony by Lisa and impaled on a Pickelhaube, exposing his true form as an enormous extraterrestrial cockroach, which flees in a spaceship, promising to return. Gary and Lisa happily begin a relationship and the team reunites, preparing to fight the world's terrorists once again.

Cast
Trey Parker as Gary Johnston / Joe Smith / Carson / Kim Jong-il / Hans Blix / Matt Damon / Tim Robbins / Sean Penn / Michael Moore / Helen Hunt / Peter Jennings / Susan Sarandon / drunk in bar / Liv Tyler / Janeane Garofalo
Matt Stone as Chris Roth / George Clooney / Danny Glover / Ethan Hawke / Martin Sheen
Kristen Miller as Lisa Jones
Masasa as Sarah Wong
Daran Norris as Spottswoode
Phil Hendrie as I.N.T.E.L.L.I.G.E.N.C.E. / Chechen terrorist
Maurice LaMarche as Alec Baldwin
Chelsea Marguerite as French mother
Jeremy Shada as Jean François
Fred Tatasciore as Samuel L. Jackson
Scott Land, Tony Urbano, and Greg Ballora served as lead puppeteers.

The film also features a man dressed as a giant statue of Kim Il-sung, two black cats, two nurse sharks, and a cockroach, with the difference in size from the marionettes played for humorous effect. A poster of the Barbi twins was featured on the billboard in Times Square, making the twins the only non-marionette humans in the film.

Production
[Photo: Creators Trey Parker (left) and Matt Stone (right), who were exhausted by production on Team America and its scheduling extremes.]

Development
The film's origins involve Parker and Stone watching Gerry and Sylvia Anderson's Thunderbirds on television while bored. Parker found that the series was unable to hold his interest as a child because "the dialogue was so expository and slow, and it took itself really seriously". The duo inquired about the rights to the series and found out that Universal Studios was doing a Thunderbirds film directed by Jonathan Frakes. "We said, 'What? Jonathan Frakes is directing puppets?' And then we found out it was a live-action version, and we were disappointed," said Parker. The two then read that The Day After Tomorrow had been sold to 20th Century Fox on the strength of a one-line pitch regarding global warming, which Parker and Stone found "hilarious" and "insane". Parker recalled Stone running up to him during work on South Park holding the paper, then sitting down and reading the synopsis regarding "sudden global warming attacking the earth".
The two were in tears from laughing. They got a copy of the script and soon realized that The Day After Tomorrow was the "greatest puppet script ever written". Originally intending to do a shot-for-shot puppet parody of The Day After Tomorrow, Parker and Stone were advised by their lawyers that there could be legal repercussions. The spoof would have been called The Day After the Day After Tomorrow, and would have been released a day later than The Day After Tomorrow. News broke of the duo signing on to create the film in October, with Stone revealing that it would be a homage to Anderson. The news was confirmed the following June, with Variety quoting Stone as saying "What we wanted was to do a send-up of these super important huge action movies that Jerry Bruckheimer makes."

Before production began, Team America was championed at Paramount Pictures by Scott Rudin, who had been the executive producer for Parker and Stone's previous film, South Park: Bigger, Longer & Uncut. After the "hassle" of producing the South Park film, Parker and Stone had vowed never to create another movie. Other studio executives were initially unenthusiastic about the project: the studio was in favor of the film's lack of political correctness, but was confused by the use of puppets.
the executives explained that they could not make a profit from an r-rated puppet feature, and parker countered that similar things had been said about the south park film, an r-rated animated musical which had become a box-office hit.[ ] tom freston, who was co-president of viacom, paramount's parent company, also supported the film, feeling that paramount should make more lower-budget films that appeal to children and young adults after the studio's failures with adult-oriented films such as the stepford wives.[ ] according to parker and stone, executives were finally won over after they saw the dailies from the film's production.[ ] writing[edit] parker, stone, and longtime writing partner pam brady spent nearly two years perfecting the team america script. for influences, they studied scores of popular action and disaster films, such as alien, top gun, and s.w.a.t.[ ] the duo watched pearl harbor to get the nuances of the puppets staring at each other just right, and also used ben affleck as a model.[ ] to help shape the film's archetypal heroes (from the true believer to the reluctant hero to the guy who sells out his friends for greater glory), they read the books of joseph campbell. "on one level, it's a big send-up," brady said. "but on another, it's about foreign policy".[ ] the first draft of the script was turned in well before the iraq war.[ ] the film takes aim at various celebrities, many of whom came out in opposition to the iraq war in . brady explained that the film's treatment of celebrities was derived from her annoyance at the screen time given to celebrities, rather than foreign policy experts, at the beginning of the iraq war.[ ] filming[edit] the film's central concept was easier to conceive than to execute.[ ] team america was produced using a crew of about people, and sometimes required four people at a time to manipulate a single marionette. 
the duo were forced to constantly rewrite the film during production due to the limited nature of the puppets. the puppet characters were created by the chiodo brothers, who had previously designed puppets for films such as elf and dinosaur. the crew's costumers were responsible for making sure the over , costumes remained consistent and realistic. production began on may , .[ ] the project was interrupted multiple times early in production.[ ] as soon as filming began, parker and stone labored to find the right comic tone; the original script for the film contained many more jokes. after shooting the very first scene, the two realized the jokes were not working, and that the humor instead came from the marionettes.[ ] "puppets doing jokes is not funny," stone found. "but when you see puppets doing melodrama, spitting up blood and talking about how they were raped as children, that's funny."[ ] filming was done by three units shooting different parts at the same time. occasionally, the producers had up to five cameras set up to capture a scene.[ ] the film drew heavily on the cult classic action film megaforce, of which parker and stone had been fans; several ideas, such as the flying motorcycle sequence, were copied from it. the film was painstakingly made realistic, and various shots were re-done throughout the process due to parker and stone's obsession with detail and craftsmanship. for example, a tiny uzi cost $ , to construct, and kim jong-il's eyeglasses were made with hand-ground prescription lenses.[ ] although the filmmakers hired three dozen marionette operators, simple performances from the marionettes were nearly impossible; a simple shot such as a character drinking might take half a day to complete successfully.[ ] parker and stone agreed during production of team america that it was "the hardest thing [they'd] ever done." 
rather than rely on computer-generated special effects added in post-production, the filmmakers strove to capture every stunt live on film.[ ] parker likened each shot to a complicated math problem.[ ] the late september deadline for the film's completion[ ][ ] took a toll on both filmmakers, as did various difficulties in working with puppets; stone, who described the film as "the worst time of [my] life," resorted to coffee to work -hour days and to sleeping pills to go to bed.[ ] the film was barely completed in time for its october release date. at a press junket in los angeles on october , journalists were only shown a -minute reel of highlights because there was no finished print.[ ] many of the film's producers had not seen the entire film with the sound mix until the premiere.[ ] editing[edit] "it's a back-and-forth with the board. they said it can't be as many positions, so we cut out a couple of them. we love the golden shower, but i guess they said no to that. but i just love that they have to watch it. seriously, can you imagine getting a videotape with just a close-up of a puppet asshole, and you have to watch it?" —trey parker on the duo's clashes with the mpaa[ ] even before the scene's submission to the motion picture association of america, parker planned to "have fun" pushing the limits by throwing in the graphic sex scene.[ ] the duo knew the racy film would be met with some opposition, but were outraged when the film came back with the board's harshest rating, nc- . the original cut's minute-and-a-half sex scene between gary and lisa was cut down to seconds. the original scene also featured the two puppets urinating and defecating on one another,[ ] which was based on the way children humorously play with dolls such as barbie and ken. 
at least nine edits of the puppet love scene were shown to the mpaa before the board accepted that it had been toned down enough to qualify for an r rating.[ ] parker contrasted the mpaa's reluctance over the sex scene with its acceptance of the violence: "meanwhile, we're taking other puppets and, you know, blowing their heads off, they're covered with blood and stuff, and the mpaa didn't have a word to say about that."[ ] in addition to the sex scene, the mpaa also objected to the scene in which the hans blix puppet is eaten by sharks.[ ] stone and parker had faced a similar conflict with their previous film south park: bigger, longer & uncut in .[ ] music[edit] harry gregson-williams – "team america march", the main theme of the film. the film's score was composed by harry gregson-williams. the soundtrack also contains "magic carpet ride" performed by steppenwolf, "battle without honor or humanity" performed by tomoyasu hotei, "forbidden bitter-melon dance" performed by jeff faustman, "bu dünyada aşkından ölmek" performed by kubat, and songs by trey parker including "everyone has aids", "freedom isn't free", "america, fuck yeah", "america, fuck yeah (bummer remix)", "derka derk (terrorist theme)", "only a woman", "i'm so ronery", "the end of an act", "montage" and "north korean melody". individuals parodied in the film[edit] famous people depicted as puppets, and lampooned, in the film include michael moore, alec baldwin, sean penn, tim robbins, helen hunt, george clooney, liv tyler, martin sheen, susan sarandon, janeane garofalo, matt damon, samuel l. jackson, danny glover, ethan hawke, kim jong-il, muammar gaddafi, tony blair, queen elizabeth ii, sultan qaboos of oman, fidel castro, peter jennings and hans blix. almost all of them are killed in gory and violent ways. 
north korean leader kim jong-il was parodied in the film, and the democratic people's republic of korea asked the czech republic to ban it.[ ] reactions from those parodied were mixed; baldwin found the project "so funny",[ ] and expressed interest in lending his voice to his character.[ ] in a video interview with time, baldwin related how his daughter's classmates would recite kim jong-il's line to him, "you are useress to me, arec bardwin. [sic]"[ ] sean penn, who is portrayed making outlandish claims about how happy and utopian iraq was before team america showed up, sent parker and stone an angry letter inviting them to tour iraq with him, ending with the words "fuck you".[ ] both george clooney and matt damon are said to be friends with stone and parker, and clooney has stated that he would have been insulted had he not been included in the film.[ ] damon is portrayed as a simpleton who can only say his own name. when asked about the film in , damon stated that he was confused by the portrayal, given that he was already known as both "a screenwriter and an actor": i was always bewildered by that, and i never talked to trey and matt about that. and incidentally, i believe those two are geniuses, and i don't use that word lightly. 
i think they are absolute geniuses, and what they've done is awesome and i'm a big fan of theirs, but i never quite understood that one.[ ] stone and parker had earlier stated in an interview that they were inspired to give the damon character that personality only after seeing the puppet that was made for him, which "looked kind of mentally deficient".[ ] kim jong-il, a noted film buff,[ ] never commented publicly about his depiction in team america: world police, although shortly after its release north korea asked the czech republic to ban the film; the country refused, saying that north koreans had been rebuffed in their effort to undermine the czech republic's post-communist-era freedom.[ ] the filmmakers acknowledged this in a dvd extra and jokingly suggested he sing "i'm so ronery". michael moore is depicted as a fat, hot dog–eating glutton who carries out a suicide bombing and is referred to as a "giant socialist weasel" by i.n.t.e.l.l.i.g.e.n.c.e. stone explained the reason for this portrayal in an msnbc interview: "we have a very specific beef with michael moore. […] i did an interview, and he didn't mischaracterize me or anything i said [in bowling for columbine]. but what he did do was put this cartoon [titled "a brief history of the united states of america", written by moore, animated and directed by harold moss] right after me that made it look like we did that cartoon."[ ] a deleted scene also shows meryl streep and ben affleck (who is portrayed with a real-life hand replacing his head). release[edit] the world premiere of team america: world police took place on october , in hollywood, california. the united states premiere was on october , at the denver film festival. paramount pictures released the film in the united states on october , . home media[edit] the film was released on dvd and vhs in the united states on may , by paramount home entertainment, available in both r-rated and unrated versions. 
the film was released on blu-ray disc on august , in the united states.[ ] reception[edit] critical response[edit] on rotten tomatoes, the film has a % approval rating based on reviews and an average score of . / . the site's consensus states, "team america will either offend you or leave you in stitches. it'll probably do both."[ ] on metacritic the film has a score of out of based on reviews from critics, indicating "generally favorable reviews".[ ] audiences surveyed by cinemascore gave the film a "b" grade on an a+-to-f scale.[ ] peter travers of rolling stone praised the film, calling it "a ruthlessly clever musical, a punchy political parody and the hottest look ever at naked puppets."[ ] kirk honeycutt of the hollywood reporter wrote: "team america: world police is to political commentary what lap dancing is to ballet. there is no room for subtlety. aiming a rude, foul-mouthed political satire everywhere -- left, right and center -- trey parker and matt stone blow up a good deal of the world, not to mention the egos of many hollywood personalities."[ ] brian lowry of variety was positive about the satire, saying the film "goes the extra mile to piss off everybody — which includes gleefully destroying renowned hollywood liberals, literally and figuratively", but less positive about other aspects of the film: "all told, the clever visual bits and hilarious songs don't entirely compensate for the many flat or beyond-over-the-top spells." 
lowry praised the songs, saying they "deliver the movie's biggest highlights", and he also praised the production design, calling it "a true technical achievement, recreating a dizzying array of sets and costumes at one-third scale and clearly having plenty of fun doing so — down to using housecats as stand-ins for terrifying panthers."[ ] richard corliss of time also highlighted the production: "the real kick, however, is in the grandeur and detail of the production design, by jim dultz and david rockwell."[ ] kim newman of empire magazine called it "a patchy comedy that's stronger as a genre-mocker than a political satire."[ ] roger ebert gave the film out of stars and wrote: "i wasn't offended by the movie's content so much as by its nihilism", and was critical of the film's "sneer at both sides" approach, comparing it to "a cocky teenager who's had a couple of drinks before the party, they don't have a plan for who they want to offend, only an intention to be as offensive as possible."[ ] national review online named the film # in its list of "the best conservative movies". brian c. anderson wrote, "the film's utter disgust with air-headed, left-wing celebrity activism remains unmatched in popular culture."[ ][better source needed] political and social commentator andrew sullivan considers the film brilliant in its skewering of both the left's and the right's approaches to terrorism.[citation needed] sullivan (also a fan of stone and parker's other work) popularized the term "south park republican" to describe himself and other like-minded fiscal conservatives/social libertarians. parker himself is a registered libertarian.[ ] before the film's release, it was criticized by matt drudge and the conservative group move america forward for mocking the war on terror.[ ] before team america was released, statements were released by a "senior bush administration official" condemning the film. 
upon receiving the news, the duo called around and found it was instead a "junior staffer," causing stone to quip "what is it – junior or senior? what are we talking about here? who knows? it might have been the janitor." the two eventually decided it was free publicity, with which they were fine.[ ] some media outlets interpreted the film's october release as timed to put it in theaters before the november elections. parker said the release date had nothing to do with the elections; the date was pushed back as far as possible due to production delays, but they had to return to south park by october .[ ] thunderbirds creator gerry anderson was supposed to have met parker before production, but the duo cancelled the meeting, acknowledging that anderson would not like the film's expletives. anderson saw the completed film and felt "there are good, fun parts [in the film] but the language wasn't to my liking."[ ] box office[edit] team america earned $ , , in its opening u.s. weekend, ranking number three behind shark tale and friday night lights. the film eventually grossed a total of $ , , , with $ , , in u.s. domestic receipts and $ , , in international proceeds.[ ] filmmakers' response[edit] in an interview with matt stone following the film's release,[ ] anwar brett of the bbc asked: "for all the targets you choose to take pot-shots at, george w. bush isn't one of them. how come?" matt stone replied, "if you want to see bush-bashing in america you only have to walk about feet to find it. trey and i are always attracted to what other people aren't doing. frankly that wasn't the movie we wanted to make." in another interview, parker and stone further clarified the end of the film, which seems to justify the role of the united states as the "world police":[ ] because that's the thing that we realized when we were making the movie. it was always the hardest thing. we wanted to deal with this emotion of being hated as an american. 
that was the thing that was intriguing to us, and having gary the main character deal with that emotion. and so, him becoming ashamed to be a part of team america and being ashamed of himself, he comes to realize that, just as he got his brother killed by gorillas—he didn't kill his brother; he was a dick, he wasn't an asshole—so too does america have this role in the world as a dick. cops are dicks, you fucking hate cops, but you need 'em.

awards[edit]

in a worldwide survey of comedians by the guardian, the film was ranked seventh on a list of the fifty funniest films.[ ] in , the guardian listed the film as the th greatest comedy film of all time.[ ] in , the guardian ranked the film tenth on its list of best films of the 21st century.[ ] director quentin tarantino counted team america: world police in his list of top films released since , when his career as a filmmaker began, and director edgar wright named it as one of his , favorite films.[ ][ ]

empire award: best comedy, team america: world police (won)
golden schmoes: best comedy of the year; best music in a movie; most memorable scene in a movie, team america: world police (nominated)
golden trailer: best comedy, team america: world police (nominated)
ifmca award: best original score for a comedy film, harry gregson-williams (nominated)
golden reel award: best sound editing in feature film – animated, for bruce howell and beth sterner (supervising sound editors), thomas w. small (supervising foley editor), lydia quidilla (supervising dialogue editor), robert ulrich (supervising adr editor), chuck michael, peter zinda, jon title, michael kamper, doug jackson and cary butler (sound effects editors; butler also served as dialogue editor), fred burke and scott curtis (foley editors), and nic ratner (music editor) (nominated)
mtv movie award: best action sequence, team america: world police (nominated)
ofcs award: best animated feature, team america: world police (nominated)
people's choice award: favorite animated movie, team america: world police (nominated)
golden satellite award: best motion picture, animated or mixed media, team america: world police (nominated)
teen choice award: choice movie: animated/computer generated, team america: world police (nominated)

soundtrack[edit]

team america: world police - music from the motion picture, the film's soundtrack album, was released on october , and on cd on january , by atlantic records. the album was produced by trey parker and matt stone, with scott rudin, scott aversano, anne garefino and harry gregson-williams serving as executive producers. all songs are written and performed by trey parker, except where indicated.

1. "everyone has aids" (written by trey parker and marc shaiman)
2. "freedom isn't free"
3. "america, fuck yeah"
4. "derka derk (terrorist theme)"
5. "only a woman"
6. "i'm so ronery"
7. "america, fuck yeah (bummer remix)"
8. "the end of an act"
9. "montage"
10. "north korea melody"
11. "the team america march" (written by stephen barton, harry gregson-williams and james mckee smith; performed by harry gregson-williams)
12. "lisa & gary" (written by gregson-williams and smith; performed by gregson-williams)
13. "f.a.g." (written by gregson-williams and steve jablonsky; performed by gregson-williams)
14. "putting a jihad on you" (written by gregson-williams and jablonsky; performed by gregson-williams)
15. "kim jong-il" (written by gregson-williams, jablonsky and smith; performed by gregson-williams)
16. "mount, rush, more" (written by barton and gregson-williams; performed by gregson-williams)

featured songs not included in the soundtrack: "magic carpet ride" (steppenwolf), "battle without honor or humanity" (tomoyasu hotei), "forbidden bitter-melon dance" (jeff faustman), and "bu dünyada aşkından ölmek" (kubat).

legacy[edit]

in the aftermath of the december terrorism threats by the guardians of peace against showings of the film the interview, which resulted in sony pictures pulling that film from release,[ ] several theatres, including the alamo drafthouse cinema in austin, texas, protested the loss by scheduling free showings of team america: world police.[ ] however, paramount pulled distribution of team america from theaters, including those in cleveland, atlanta, and new orleans.[ ][ ][ ] this action was seen by president barack obama as an attack on freedom of speech by hollywood studios,[ ] and by others as an act of pure cowardice.[ ] snippets of the film mocking kim jong-il are reportedly set to be included, alongside copies of the interview, in helium-filled balloons launched by north korean defectors into their home country in an effort to expose north koreans to the western world's view of their country.[ ]

see also[edit]

list of films set in or about north korea
list of live-action puppet films

references[edit]

^ a b "catalog – team america: world police". american film institute. retrieved december , . duration (in mins): […] countries: germany, united states ^ "team america – world police". british board of film classification. november , . retrieved june , . approved running time m s ^ "team america world police ( )". british film institute. retrieved june , . countries: germany, usa ^ a b c d "team america: world police". 
box office mojo. retrieved december , . ^ a b c d e f "puppetry of the meanest". in focus. october , . archived from the original on july , . retrieved june , . ^ a b c d "trey and matt string together team america". zap it. august , . archived from the original on july , . retrieved june , . ^ a b parker, trey (march ). south park: the complete ninth season: "two days before the day after tomorrow" (dvd audio commentary). paramount home entertainment. ^ a b c d e "interview: matt stone/trey parker". mediasharx. october , . archived from the original on march , . retrieved june , . ^ a b c bowles, scott (october , ). "parker, stone pull team strings, yank a few chains". usa today. retrieved june , . ^ friedman, roger (october , ). "south park creators pull the strings". fox news channel. retrieved june , . ^ sauriol, patrick (june , ). "'south park creators prepare team america'". mania.com (source: variety). archived from the original on october , . retrieved june , . ^ a b c d e f g h i john horn (september , ). "launching a small-scale offensive". los angeles times. retrieved june , . ^ a b havrilesky, heather (october , ). "puppet masters". salon. retrieved june , . ^ a b c "puppetmasters". rolling stone. october , . archived from the original on july , . retrieved june , . ^ "stone says team america was 'lowest point'". the guardian. december , . retrieved june , . ^ friedman, roger (october , ). "team america: sex, puppets & controversy". fox news channel. retrieved june , . ^ "r sex for team america puppets". e!. october , . archived from the original on july , . retrieved june , . ^ "hollywood's new puppetmasters". columbia chronicle. october , . archived from the original on july , . retrieved june , . ^ "south park stars upset over puppet sex censorship". contactmusic. october , . retrieved june , . ^ weinraub, bernard (june , ). "loosening a strict film rating for south park". the new york times. retrieved march , . 
^ a b "'team america' unsettles team kim in pyongyang". worldtribune.com. retrieved november , . ^ a b "alec baldwin on tracy morgan and kim jong-il". live questions event. time magazine official youtube channel. retrieved june , . ^ "team america speaks!". movieweb. october , . retrieved july , . ^ "letter by sean penn". drudgereport archives. october , . retrieved september , . ^ "clooney supports team america duo". january , . […] the hollywood big-hitters all insist they would have been offended to be left out of the film. ^ i am matt damon, ask me anything!, reddit, july , ^ shepherd, jack (july , ). "matt damon reveals why he's cool with team america". the independent. ^ "north korean leader loves hennessey, bond movies", cnn, january , ^ "team america takes on moviegoers". today.com. october , . ^ "team america: world police blu-ray". blu-ray.com. archived from the original on august , . retrieved may , . ^ "team america: world police ( )". rotten tomatoes. retrieved may , . ^ "team america: world police". metacritic. retrieved may , . ^ "team america: world police ( ) b". cinemascore. archived from the original on december , . ^ travers, peter (october , ). "team america: world police". rolling stone. ^ "'team america: world police': thr's review". the hollywood reporter. ^ lowry, brian (october , ). "team america: world police". variety. ^ "when puppets get political - time". june , . archived from the original on june , . retrieved december , . ^ kim newman (january , ). "team america: world police". empire (film magazine). ^ ebert, roger (october , ). "team america: world police movie review ( )". chicago sun-times. ^ miller, john (february , ). "the best conservative movies". national review. retrieved august , . ^ winter, bill. "trey parker – libertarian". advocates for self-government. archived from the original on january , . retrieved december , . when asked to describe his politics, parker said he was "a registered libertarian." 
^ hassan, genevieve (october , ). "talking shop: gerry anderson". bbc news. retrieved december , . ^ "interview with matt stone". bbc. ^ "puppet masters – interview with matt stone and trey parker". salon. archived from the original on december , . ^ guardian staff (december , ). "the funniest films… chosen by comedians". the guardian. retrieved december , . ^ "team america: world police: no best comedy film of all time". the guardian. october , . retrieved june , . ^ "the best films of the st century". the guardian. retrieved september , . ^ brown, lane. "team america, anything else among the best movies of the past seventeen years, claims quentin tarantino". vulture. new york media llc. retrieved september , . ^ sam disalle. "edgar wright's favorite movies". mubi. retrieved july , . ^ humphries, rusty (december , ). "sony needs 'team america: world police'". washington times. ^ burlingame, russ (december , ). "the interview to be replaced by team america: world police at alamo drafthouse". comicbook.com. ^ lieberman, david (december , ). "paramount cancels team america showings, theaters say". deadline hollywood. ^ "paramount pulls screenings of 'team america: world police,' theaters say." los angeles times. retrieved december , . ^ "team america: world police screenings canceled in wake of controversy over the interview." the times-picayune (nola.com). retrieved december , . ^ weigel, david (december , ). "first 'the interview,' now theaters cancel protest screenings of 'team america'". bloomberg politics. ^ "cave! 'team america' screenings to replace 'the interview' pulled by paramount". the inquisitor. december , . ^ bond, paul (april , ). "'the interview' sequel: inside the frightening battle raging on the north korean border". the hollywood reporter. moriarty visits matt & trey on the team america set! – set report from aintitcool.com team america: world police – synopsis, clips and images from latinoreview.com (october ). play: south park's puppet regime. 
wired . . retrieved october , . bbc interview with matt stone team america – guy in bar philosophy by composer m.regtien further reading[edit] dubowsky, jack curtis ( ). "team america: world police: duplicitous voices of the socio-political spy musical". in donnelly, kevin j.; carroll, beth (eds.). contemporary musical film. edinburgh: edinburgh university press. pp.  – . isbn  - - - - . external links[edit] wikiquote has quotations related to: team america: world police team america: world police at imdb team america: world police at the big cartoon database team america: world police at box office mojo 
this page was last edited on august , at :  (utc). 
fedora migration paths and tools project update: july - duraspace.org

posted on july , by david wilcox

this is the latest in a series of monthly updates on the fedora migration paths and tools project – please see the previous post for a summary of the work completed up to that point. this project has been generously funded by the imls.

we completed some final performance tests and optimizations for the university of virginia pilot. both the migration to their aws server and the fedora . indexing operation were much slower than anticipated, so the project team tested a number of optimizations, including:

adding more processing threads
increasing the size of the server instance
using a separate and larger database server
using locally attached flash storage

fortunately, these improvements made a big difference; for example, ingest speed was increased from . 
resources per second to . resources per second. in general, this means that institutions with specific performance targets can meet them with a combination of parallel processing and increased computational resources. feedback from this pilot has been incorporated into the migration guide, along with updates to migration-utils to improve performance, updates to the aws-deployer tool to provide additional options, and improvements to the migration-validator to handle errors.

the whitman college team has begun their production migration using islandora workbench. initial benchmarking has shown that running workbench on the production server, rather than locally on a laptop, achieves much better performance, so this is the recommended approach. the team is working collection-by-collection, using csv files and a tracking spreadsheet to record the status of each collection as it is ingested and tested. they have also developed a quality control checklist to make sure everything is working as intended – we anticipate doing detailed checks on the first few collections and spot checks on subsequent collections.

as we near the end of the pilot phase of the grant work, we are focused on documentation for the migration toolkit. we plan to complete a draft of this documentation over the summer, after which it will be shared with the broader community for feedback. we will organize meetings in the fall to give community members further opportunities to comment on the toolkit and suggest improvements.
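the throughput fix described above (more threads plus bigger hardware) comes down to parallelizing ingest. a minimal sketch, assuming a hypothetical `ingest_resource` stand-in for a repository client call — this is not the real migration-utils api:

```python
# Sketch: parallel ingest with a worker pool, illustrating the "more
# processing threads" optimization discussed in the post. The ingest
# function is a hypothetical stand-in for a repository HTTP call.
from concurrent.futures import ThreadPoolExecutor

def ingest_resource(resource_id: str) -> str:
    """Placeholder: a real version would POST the resource to the repository."""
    return f"ingested:{resource_id}"

def migrate(resource_ids, threads: int = 8):
    """Ingest resources concurrently; pool.map() preserves input order."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(ingest_resource, resource_ids))

results = migrate([f"res-{i}" for i in range(100)], threads=16)
print(len(results))  # 100
```

because real ingest is i/o-bound against the repository's api, throughput scales with the thread count only until the server or database becomes the bottleneck — which is why the pilot also grew the server instance and the database server.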
tags: blog, fedora, fedora repository, news, open source

go west ( film)
from wikipedia, the free encyclopedia

go west
directed by: buster keaton
written by: buster keaton, lex neal
produced by: buster keaton, joseph m.
schenck
starring: buster keaton, kathleen myers, howard truesdale, ray thompson
cinematography: bert haines, elgin lessley
music by: konrad elfers
distributed by: metro-goldwyn-mayer
release date: november ,
running time: minutes
country: united states
language: silent (english intertitles)

go west is an american silent comedy film directed by and starring buster keaton.[ ][ ][ ] keaton portrays friendless, who travels west to try to make his fortune. once there, he tries his hand at bronco-busting, cattle wrangling and dairy farming, eventually forming a bond with a cow named "brown eyes." he ultimately finds himself leading a herd of cattle through los angeles. seventy years after the film's release, guitarist bill frisell recorded a soundtrack accompaniment, go west: music for the films of buster keaton ( ). the rats & people motion picture orchestra premiered its new score for the film in .

plot

a drifter identified only as "friendless" (keaton) sells the last of his possessions, keeping only a few trinkets and a picture of his mother. the money buys him only some bread and a sausage, and then is gone. unable to find a job in the city, he stows away on a train. he thinks it is going to new york, but it is heading west. he sleeps in a barrel, but the barrel rolls off the train. he manages to get a job at a cattle ranch despite having no experience. meanwhile, a neglected cow named brown eyes fails to give milk and is sent out to the field along with the other cattle intended for slaughter. as friendless tries to figure out how to milk a cow, he's told to go out and help the other ranch hands bring in the cattle. unsuccessful in riding a horse, he falls off and sees brown eyes. noticing her limp, friendless examines her hoof and removes the rock that had been hurting her. brown eyes proceeds to follow friendless around, saving him from a bull attack.
realizing that he's finally found a companion, friendless strikes up a friendship with the cow, giving her his blanket at night and attempting to protect her from wild dogs. the next day, brown eyes follows friendless everywhere, much to the chagrin of the other ranch hands. friendless accidentally sets two steers loose after they'd been corralled in, but on the joking suggestion of the other hands, brings them back in by waving his red bandanna. the ranch owner (truesdale) and his daughter (myers) are preparing to sell the cattle to a stockyard, though another rancher wants to hold out for a higher price. the owner, no longer wanting to wait, prepares to ship the whole herd out. friendless, shocked to hear that brown eyes will go to a slaughterhouse, refuses to let her go. the ranch owner fires him and gives him his wages. friendless tries to buy his friend back with his earnings, but is told that it's not enough. after failing to get more money from a card game, he joins brown eyes in the cattle car and tries to find a way to free her. the train is ambushed by the other rancher and his men. friendless and the ranch owner's other hands manage to drive off the attackers, but only friendless makes it back to the train as the others chase away the rancher. arriving in los angeles, friendless frees brown eyes and leads her away, using his red bandana once more to guide the other thousand steers to the stockyard. the townspeople are terrified of the cattle as some of the cows break away and begin entering the stores, but friendless manages to corral them together. friendless ties brown eyes up before going back to retrieve the other cattle, leaving his red bandana with her in order to keep her cool. realizing his mistake, he enters a masquerade store to find something red to attract the cows. deciding on a red devil's outfit, he exits the store and the cattle begin to chase him. 
the police attempt to arrest him, but are mistakenly sprayed with hoses by the fire department, who flee once they see the cattle coming. the ranch owner, realizing his ruin if the cattle are not sold, drives with his daughter to the stockyard. the stockyard owner tells him that no cattle have arrived yet. defeated, the ranch owner prepares to leave when he sees friendless leading the herd into the stockyard. overjoyed, the ranch owner tells friendless that his house and anything he owns is friendless's for the asking. friendless says that he only wants "her," gesturing behind him to where the ranch owner's daughter is. the owner is surprised and the daughter flattered, but they quickly realize that it's brown eyes that he's referring to. the three drive back to the ranch, with brown eyes beside friendless in the back seat.

cast

- buster keaton as friendless
- howard truesdale as ranch owner
- kathleen myers as ranch owner's daughter
- ray thompson as ranch foreman
- brown eyes as the cow 'brown eyes'
- roscoe 'fatty' arbuckle as woman in department store (uncredited)
- joe keaton as man in barber shop (uncredited)
- gus leonard as general store owner (uncredited)
- babe london as woman in department store (uncredited)

references

^ "go west! buster keaton in good form". the register (adelaide). xci ( , ). south australia. december , . p. . retrieved january , – via national library of australia.
^ keaton, buster; shephard, david; kino international corporation ( ), the art of buster keaton, kino international corp, retrieved january ,
^ prod co: metro goldwyn productions; keaton, buster (actor); schenck, joseph m. (producer); lessley, elgin (photographer); haines, bert (photographer); cannon, raymond (writer of accompanying material); neal, lex.
(director); metro goldwyn productions ( ), go west, a metro goldwyn production, retrieved january ,

external links

wikimedia commons has media related to go west ( film).
- "go west" full movie at archive.org
- go west at imdb
- go west at allmovie
- go west at the international buster keaton society

terry's worklog

on my work (programming, digital libraries, cataloging) and other stuff that perks my interest (family, cycling, etc.)

marcedit update round-up

a handful of updates have been posted related to marcedit .
since the program came out of beta. these have been mostly bug fixes and small enhancements. here's the full list:

bug fix: oclc search – multiple terms would result in an error if 'or' was used with specific search indexes. fixed: /
enhancement: oclc… continue reading marcedit update round-up

marcedit . .x/marcedit mac . .x: coming out of beta

marcedit . /marcedit mac . is officially out of beta. it has been my primary version of marcedit for about months and is where all new development has taken place since dec. . because there are significant changes (including framework support) – marcedit . / . are not in-place upgrades. previous versions of marcedit can be installed… continue reading marcedit . .x/marcedit mac . .x: coming out of beta

exploring bibframe workflows in marcedit

update: / / : i uploaded a video with sound that demonstrates the process. you can find it here:

during this past year while working on marcedit . .x/ . .x, i've been giving some thought to how i might be able to facilitate some workflows to allow users to move data to and from bibframe. while the tool has… continue reading exploring bibframe workflows in marcedit

thoughts on naco's proposed process on updating cjk records

i would like to take a few minutes and share my thoughts about an updated best practice recently posted by the pcc and naco related to an update on cjk records. the update is found here: https://www.loc.gov/aba/pcc/naco/cjk/cjk-best-practice-ncr.docx. i'm not certain if this is active or simply a proposal, but i've been having a number… continue reading thoughts on naco's proposed process on updating cjk records

marcedit . update

changelog: https://marcedit.reeset.net/software/update .txt

highlights: preview changes. one of the most requested features over the years has been the ability to preview changes prior to running them. as of . .
– a new preview option has been added to many of the global editing tools in the marceditor. currently, you will find the preview option attached to… continue reading marcedit . update

how do i generate marc authority records from the homosaurus vocabulary?

step by step instructions here: https://youtu.be/fjsdqi pzpq

ok, so last week, i got an interesting question on the listserv where a user asked specifically about generating marc records for use in one's ils system from a jsonld vocabulary. in this case, the vocabulary in question was homosaurus (homosaurus vocabulary site) – and the questioner was specifically… continue reading how do i generate marc authority records from the homosaurus vocabulary?

marcedit: state of the community * - *

sigh … original title said - . obviously, this is for this past year (jan. -dec. , ). per usual, i wanted to take a couple minutes and look at the state of the marcedit project. this is something that i try to do once a year to gauge the current health of the community… continue reading marcedit: state of the community * -

marcedit . .x/ . .x (beta) updates

versions are available at: https://marcedit.reeset.net/downloads

information about the changes:
. . change log: https://marcedit.reeset.net/software/update .txt
. . change log: https://marcedit.reeset.net/software/update .txt

if you are using .x – this will prompt as normal for update. . .x is the beta build; please be aware i expect to be releasing updates to this build weekly and also expect to find some issues… continue reading marcedit . .x/ . .x (beta) updates

marcedit . .x/macos . .x timelines

i sent this to the marcedit listserv to provide info about my thoughts around timelines related to the beta and release. here's the info.

dear all, as we are getting close to feb. (when i'll make the . beta build available for testing) – i wanted to provide information about the update process going… continue reading marcedit . .x/macos .
.x timelines

marcedit . change/bug fix list

* updated /
change: allow os to manage supported security protocol types
change: remove com.sun dependency related to dns and httpserver
change: changed appdata path
change: first install automatically imports settings from marcedit . - .x
change: field count – simplify ui (consolidate elements)
change: windows – update help urls to oclc
change: generate fast… continue reading marcedit . change/bug fix list

dshr's blog: library of congress storage architecture meeting

i'm david rosenthal, and this is a place to discuss the work i'm doing in digital preservation.

thursday, january ,

library of congress storage architecture meeting

the library of congress has finally posted the presentations from the designing storage architectures for digital collections workshop that took place in early september. i've greatly enjoyed the earlier editions of this meeting, so i was sorry i couldn't make it this time. below the fold, i look at some of the presentations.

robert fontana & gary decad

as usual, fontana and decad provided their invaluable overview of the storage landscape. their key points include:

[slide ] the total amount of storage manufactured each year continues its exponential growth at around %/yr. the vast majority ( %) of it is hdd, but the proportion of flash ( %) is increasing. tape remains a very small proportion ( %).

[slide ] they contrast this % growth in supply with the traditionally ludicrous % growth in "demand". their analysis assumes one byte of storage manufactured in a year represents one byte of data stored in that year, which is not the case (see my post where did all those bits go? for a comprehensive debunking). so their supposed "storage gap" is actually a huge, if irrelevant, underestimate. but they hit the nail on the head with: "key point: hdd % of bits and % of revenue, nand % of bits and % of revenue".
[slide ] the kryder rates for nand flash, hdd and tape are comparable; $/gb decreases are competitive across all technologies. but, as i've been writing since at least 's storage will be a lot less free than it used to be, the kryder rate has decreased significantly from the good old days:

"$/gb decreases are in the %/yr range and not the classical moore's law projection of %/yr associated with areal density doubling every years"

as my economic model shows, this makes long-term data storage a significantly greater investment.

[slide ] in , flash was . times as expensive as hdd. in , the ratio was times. thus, despite recovering from 's supply shortages, flash has not made significant progress in eroding hdd's $/gb advantage. continuing current trends, they project that by flash will ship more bytes than hdd, but that it will still be times as expensive per byte. so they ask a good question:

"in is there demand for x more manufactured storage annually and is there sufficient value for this storage to spend $ b more annually ( . x) for this storage?"

jon trantham

jon trantham of seagate confirmed that, as it has been for a decade, the date for volume shipments of hamr drives is still slipping in real time; "seagate is now shipping hamr drives in limited quantities to lead customers". his presentation is interesting in that he provides some details of the extraordinary challenges involved in manufacturing hamr drives, with pictures showing how small everything is:

- the height from the bottom of the slider to the top of the laser module is less than um
- the slider will fly over the disk with an air-gap of only - nm

as usual, i will predict that the industry is far more likely to achieve the % cagr in areal density line on the graph than the % line. note the flatness of the "hdd product" curve for the last five years or so.

tape

the topic of tape provided a point-counterpoint balance.
gary decad and robert fontana from ibm made the point that tape's roadmap is highly credible by showing that:

- tape, unlike hdd, has consistently achieved published capacity roadmaps
- for the last years, the ratio of manufactured eb of tape to manufactured eb of hdd has remained constant in the . % range
- unlike hdd, tape magnetic physics is not the limiting issue, since tape bit cells are x larger than hdd bit cells ... the projected tape areal density in ( gbit/in ) is x smaller than today's hdd areal density and has already been demonstrated in laboratory environments

carl watts' issues in tape industry needed only a few bullets to make his counterpoint that the risk in tape is not technological:

- ibm is the last of the hardware manufacturers: ibm is the only builder of lto, and the only vendor left with enterprise-class tape drives. if you only have one manufacturer, how do you mitigate risk?
- these cloud archival solutions all use tape: amazon aws glacier and glacier deep ($ /tb/month), azure general purpose v storage archive ($ /tb/month), google gcp coldline ($ /tb/month). if it's all the same tape, how do we mitigate risk?

if, as decad and fontana claim, tape storage is strategic in public, hybrid, and private "clouds", then ibm has achieved a monopoly, which could have implications for tape's cost advantage.

jon trantham's presentation described seagate's work on robots, similar to tape robots and the blu-ray robots developed by facebook, but containing hard disk cartridges descended from those we studied in 's predicting the archival life of removable hard disk drives. we showed that the bits on the platters had similar life to bits on tape. of course, tape has the advantage of being effectively a d medium where disk is effectively a d medium.

cloud storage

amazon, wasabi and ceph gave useful marketing presentations. julian morley reported on stanford's transition from in-house tape to cloud storage, with important cost data.
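economic models of long-term storage like the ones referenced above reduce to compounding arithmetic: the endowment needed to keep data stored indefinitely is the sum of future years' costs, with the unit cost shrinking at the kryder rate and future spending discounted at the interest rate. a toy sketch — all rates and dollar figures here are hypothetical illustrations, not the actual model:

```python
# Toy endowment model for long-term storage (inspired by the economic-model
# discussion above; parameters are hypothetical, not DSHR's published model).
# endowment = sum over years of (cost falling at Kryder rate k, discounted
# at interest rate r).
def endowment(annual_cost_now: float, kryder_rate: float,
              interest_rate: float, years: int) -> float:
    total = 0.0
    for t in range(years):
        cost_t = annual_cost_now * (1 - kryder_rate) ** t  # cheaper each year
        total += cost_t / (1 + interest_rate) ** t          # discounted back
    return total

# Storing 1TB at a hypothetical $100/yr today, 5% interest, 100-year horizon:
fast = endowment(100, 0.25, 0.05, 100)  # 25%/yr Kryder rate (the good old days)
slow = endowment(100, 0.10, 0.05, 100)  # 10%/yr Kryder rate (the new normal)
print(round(fast), round(slow))  # 350 700
```

with these toy numbers, dropping the kryder rate from 25%/yr to 10%/yr doubles the required endowment — the sense in which a slowing kryder rate makes long-term storage "a significantly greater investment".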
i reported previously on the economic modeling morley used to support this decision.

cold storage, us$/month:
- aws: gb . ; , write operations . ; , read operations . ; gb retrieval . ; early deletion charge: days
- azure: gb . ; , write operations . ; , read operations ; gb retrieval . ; early deletion charge: days
- google: gb . ; , operations . ; gb retrieval . ; early deletion charge: days

at the register, tim anderson's archive storage comes to google cloud: will it give aws and azure the cold shoulder? provides a handy comparison of the leading cloud providers' pricing options for archival storage, and concludes:

"this table, note, is an over-simplification. the pricing is complex; operations are broken down more precisely than read and write; the exact features vary; and there may be discounts for reserved storage. costs for data transfer within your cloud infrastructure may be less. the only way to get a true comparison is to specify your exact requirements (and whether the cloud provider can meet them), and work out the price for your particular case."

dna

i've been writing enthusiastically about the long-term potential, but skeptically about the medium-term future, of dna as an archival storage medium for more than seven years. i've always been impressed by the work of the microsoft/uw team in this field, and karin strauss and luis ceze's dna data storage and computation is no exception. it includes details of their demonstration of a complete write-to-read automated system (see also video), and discussion of techniques for performing "big data" computations on data stored in dna.

anne fischer reported on darpa's research program in molecular informatics. one of its antecedents was a darpa workshop in . her presentation stressed the diverse range of small molecules that can be used as storage media. i wrote about one non-dna approach from harvard last year. in cost-reducing writing dna data i wrote about catalog's approach, assembling a strand from a library of short sequences of bases.
it is a good idea, addressing one of the big deficiencies of dna as a storage medium, its write bandwidth. but devin leake's slides are short on detail, more of an elevator pitch for investment. they start by repeating the ludicrous idc projection of "bytes generated" and equating it to demand for storage, and in particular archival storage. if you're doing a startup you need a much better idea than this about the market you're addressing.

henry newman

the good dr. pangloss loved henry newman's enthusiasm for g networking, but i'm a lot more skeptical. it is true that early g phones can demo nearly gb/s in very restricted coverage areas in some us cities. but g phones are going to be more expensive to buy, more expensive to use, have less battery life, overheat, have less consistent bandwidth and almost non-existent coverage. in return, you get better peak bandwidth, which most people don't use. customers are already discovering that their existing phone is "good enough". g is such a deal!

the reason the carriers are building out g networks isn't phones; it is because they see a goldmine in the internet of things. but combine gb/s bandwidth with the iot's notoriously non-existent security, and you have a disaster the carriers simply cannot allow to happen. the iot has proliferated for two reasons: the things are very cheap, and connecting them to the internet is unregulated, so isps cannot impose hassles. but connecting a thing to the g internet will require a data plan from the carrier, so they will be able to impose requirements, and thus costs. among the requirements will have to be that the things have ul certification, adequate security and support, including timely software updates for their presumably long connected life. it is precisely the lack of these expensive attributes that has made the iot so ubiquitous and such a security dumpster-fire!

fixity

two presentations discussed fixity checks.
mark cooper reported on an effort to validate both the inventory and the checksums of part of lc's digital collection. the conclusion was that the automated parts were reliable, the human parts not so much:

- content on storage is correct; the inventory is not
- content custodians working around system limitations resulted in broken inventory records
- content in the digital storage system needs to be understood as potentially dynamic, in particular for presentation and access
- the system needs to facilitate required actions in ways that are logged and versioned

buzz hayes from google explained their recommended technique for performing fixity checks on data in google's cloud. they provide scripts for the two traditional approaches:

- read the data back and hash it, which at scale gets expensive in access and bandwidth charges.
- hash the data in the cloud that stores it, which involves trusting the cloud to actually perform the hash rather than simply remember the hash computed at ingest.

i have yet to see a cloud api that implements the technique published by mehul shah et al twelve years ago, allowing the data owner to challenge the cloud provider with a nonce, thus forcing it to compute the hash of the nonce and the data at check time. see also my auditing the integrity of multiple replicas.

blockchain

sharmila bhatia reported on an initiative by nara to investigate the potential for blockchain to assist government records management, which concluded:

"authenticity and integrity: blockchain distributed ledger functionality presents a new way to ensure electronic systems provide electronic record authenticity / integrity. may not help with preservation or long term access and may make these issues more complicated."

it is important to note that what nara means by "government records" is quite different from what is typically meant by "records", and the legislative framework under which they operate may make applying blockchain technology tricky.
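the challenge-response idea from shah et al mentioned above is simple to sketch. in this toy version the "provider" is just a local function, and the owner keeps a full copy of the data (the real scheme avoids that by precomputing a stock of nonce/response pairs at ingest); all names are illustrative, not any real cloud api:

```python
# Sketch of the nonce-challenge fixity check described above (after Shah et
# al.): the owner sends a fresh random nonce; the provider must return
# H(nonce || data), which it can only compute by reading the data at check
# time. A provider that merely remembered an ingest-time hash cannot answer.
import hashlib
import os

def provider_respond(stored_data: bytes, nonce: bytes) -> str:
    """What an honest provider computes on request."""
    return hashlib.sha256(nonce + stored_data).hexdigest()

def owner_verify(local_copy: bytes, respond) -> bool:
    """Owner challenges with a fresh nonce and checks the response."""
    nonce = os.urandom(16)                                   # fresh per challenge
    expected = hashlib.sha256(nonce + local_copy).hexdigest()
    return respond(nonce) == expected

data = b"archival object bytes"
honest = lambda n: provider_respond(data, n)
cheater = lambda n: hashlib.sha256(data).hexdigest()  # replays ingest-time hash

print(owner_verify(data, honest))   # True
print(owner_verify(data, cheater))  # False
```

the point is that a fresh nonce forces the provider to read the data at check time; remembering a hash computed at ingest is not enough to pass the audit.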
ben fino-radin and michelle lee pitched starling, a startup claiming: simplified & coordinated decentralized storage on the filecoin network their slides describe how the technology works, but give no idea of how much it would cost to use. just as with dna and other exotic media, the real issue is economic not technical. i wrote skeptically about the economics of the filecoin network in the four most expensive words in the english language and triumph of greed over arithmetic, comparing its possible pricing to amazon's s and s rrs. of course, the numbers would have looked much worse for filecoin had i compared it with wasabi's pricing. a final request to the organizers this is always a fascinating meeting. but, please, on the call for participation next year make it clear that anyone using projections for "data generated" in their slides as somehow relevant to "data storage" and archival data storage in particular will be hauled off stage by the hook. posted by david. at : am labels: big data, bitcoin, digital preservation, government information, iot, library of congress, long-lived media, storage media comments: david. said... in g security, bruce schneier points out that, even if the telcos were to enforce strict security for g-connected things, we are still screwed: "security vulnerabilities in the standards ­the protocols and software for g ­ensure that vulnerabilities will remain, regardless of who provides the hardware and software. these insecurities are a result of market forces that prioritize costs over security and of governments, including the united states, that want to preserve the option of surveillance in g networks. if the united states is serious about tackling the national security threats related to an insecure g network, it needs to rethink the extent to which it values corporate profits and government espionage over security." go read the whole post and weep. january , at : pm david. said... 
chris mellor reports that hard disk drive shipments fell % between and as ssd cannibalized everything except nearline. but note from fontana's graph above that capacity per drive increased faster than unit shipments decreased, so total bytes shipped still increased. january , at : am

david. said...
maybe volume shipments of hamr drives will happen this year. jim salter expresses optimism despite the long history in hamr don't hurt 'em—laser-assisted hard drives are coming in : "seagate has been trialing tb hamr drives with select customers for more than a year and claims that the trials have proved that its hamr drives are "plug and play replacements" for traditional cmr drives, requiring no special care and having no particular poor use cases compared to the drives we're all used to." february , at : am

david. said...
kevin werbach takes the g hype to the woodshed in the 'race to g' is a myth: "telecommunications providers relentlessly extol the power of fifth-generation ( g) wireless technology. government officials and policy advocates fret that the winner of the " g race" will dominate the internet of the future, so america cannot afford to lose out. pundits declare that g will revolutionize the digital world. it all sounds very thrilling. unfortunately, the hype has gone too far. g systems will, over time, replace today's g, just as next year's iphone will improve on this year's . g networks offer significantly greater transmission capacity. however, despite all the hype, they won't represent a radical break from the current mobile experience." february , at : am

david. said...
as an illustration of how broken the security of the things on the internet is, wang wei's a dozen vulnerabilities affect millions of bluetooth le powered devices reports that: "a team of cybersecurity researchers late last week disclosed the existence of potentially severe security vulnerabilities, collectively named 'sweyntooth,' affecting millions of bluetooth-enabled wireless smart devices worldwide—and worryingly, a few of which haven't yet been patched. all sweyntooth flaws basically reside in the way software development kits (sdks) used by multiple system-on-a-chip (soc) have implemented bluetooth low energy (ble) wireless communication technology—powering at least distinct products from several vendors including samsung, fitbit and xiaomi." a lot of the vulnerable products are medical devices ... march , at : pm

david. said...
karl bode piles on the g debunking with study shows us g is an over-hyped disappointment. he reports on g download speed is now faster than wifi in seven leading g countries, a new study that would be great if the country out of wasn't the us. bode writes: "the study, in one swipe, puts to bed claims that g is a "race" that the us is somehow winning through sheer ingenuity and industry coddling deregulation, and that g will be some sort of competitive panacea (high prices also hamstring it in this area). opensignal has a whole separate study on why g won't be supplanting wifi anytime soon. all of this runs, again, in pretty stark contrast to claims by companies like verizon that g is some widely available, near mystical technology that will revolutionize everything from smart vehicles to modern medicine." may , at : pm

david. said...
karl bode points out that even verizon tries to temper g enthusiasm after report clearly shows us g is slow, lame: "verizon's problem is that while the company will be deploying a lot of millimeter wave (mmwave) spectrum in key urban markets, that flavor of g lacks range and can't penetrate walls particularly well (for g conspiracy theorists, that means the technology is less likely to penetrate your skin and harm you, as well). for most users, what you see now with g is what you'll get for several years to come ... u.s. consumers, who already pay some of the highest prices in the world for verizon g service, will also need to pay $ extra a month (you know, just because), and shell out significant cash for early-adoption g devices that are fatter, more expensive, and have worse battery life than their current gear." may , at : pm 
news: stopping ransomware, china hates miners, ecuador cbdc history, nfts still too hard to buy – attack of the foot blockchain. blockchain and cryptocurrency news and analysis by david gerard. th june st june - by david gerard - comments. libra shrugged is in the smashwords father’s day promotion until july — and you can get it cheap with a coupon. tell your friends! tell your father! you can support my work by signing up for the patreon — $ or $ a month is like a few drinks down the pub while we rant about cryptos once a month. it really does help. 
[patreon] the patreon also has a $ /month corporate tier — the number is bigger on this tier, and will look more impressive on your analyst newsletter expense account. [patreon] and tell your friends and colleagues to sign up for this newsletter by email! [scroll down, or click here] i consult, and take freelance writing commissions — i have a huge one i need to get to once i can get out from under all this el salvador news … rats! cassie and cygnus, sisters and littermates.     bitcoin in the enterprise is ransomware a plague yet? insurance companies are already recoiling as ransomware attacks “skyrocket.” [ft, paywalled] but the most important thing in dealing with ransomware is to work on every part of the problem except the payment channels — because bitcoins are too precious to be hampered in any way. also, it’s not possible to do more than one thing at the same time, if it’s about a problem related to crypto. fix all the corporate networks in the us — then look at the payment channels. if ever. as the largest actual-dollar exchange in the us, coinbase directly makes money when ransomware victims buy the coins to pay their ransom. davide de cillo, coinbase’s head of product, is helpfully weighing in on twitter threads and explaining why you certainly shouldn’t do anything about the payment channels for ransomware. “thank god they didn’t ask for truly untraceable cash. can you imagine if we had to ban usd bills?” nice to see that the response “what about this other thing that isn’t the topic, huh” is fully industry-endorsed. [twitter, archive] others don’t go along with the crypto view of the problem. nicholas weaver: the ransomware problem is a bitcoin problem. [lawfare] jacob silverman: want to stop ransomware attacks? ban bitcoin and other cryptocurrencies. [new republic] j. p. koning has a more creative solution: shoot the hostage. instead of banning crypto to stop ransoms … punish victims who pay up! 
[aier] although koning does point out why ransomware gangs like bitcoin — it’s the censorship resistance. this doesn’t really make the case against taking out the payment channels. [moneyness] governments aren’t taking the crypto industry line either. the us will be giving ransomware attacks similar priority to terrorism. jake sullivan, president biden’s national security adviser, said that ransomware needs to be a “priority” for nato and the g nations. and gchq says that ransomware attacks are now a bigger cyber threat to the uk than hostile states. [reuters; independent; ft, paywalled] the future of ransomware may be cyber warfare rather than obtaining bitcoins. notpetya is already thought to have been primarily a russian attack on ukrainian interests, and only secondarily about acquiring bitcoins. [war on the rocks]   wait for ransomware, and then sell assets as beyond economical recovery. structural equivalent of writing off a car and all you have to do is park it somewhere you know it'll get phished. it's better than burning the place down for the insurance because you can sell the building. — crossestman (@crossestman) june ,   baby’s on fire, but not so much in china imagine bitcoin after the apocalypse, when we’re finally free of computers: “i guess … three” “oh, well done! you win the . bitcoins!” china is thoroughly sick of the crypto miners. the province of qinghai is not permitting new mining projects, and will be closing down existing mining operations. [coindesk] yunnan province was reported to be kicking the miners out. but what it’s actually doing is forcing the miners to connect to the official grid, and not just strike deals for cheaper electricity with power plants directly, who then don’t pay the state their cut. the yunnan energy bureau is enforcing this with inspections of mining operations. paying official prices may, of course, effectively push the miners out. [fortune; the block] hashcow will no longer sell mining rigs in china. 
sichuan duo technology put its machines up for sale on wechat. btc.top, which does % of all bitcoin mining, is suspending operations in china, and plans to mine mainly in north america. [time] mining rigs are for sale at – % off. chinese miners are looking to set up elsewhere. some are looking to kazakhstan. [wired] some have an eye on texas — a state not entirely famous for its robust grid and ability to keep the lights on in bad weather. [cnbc] crypto miners still mean software developers can’t have nice things — cloud computing service docker is discontinuing their free tier specifically because of abusive miners, as of june. “in the last few months we have seen a massive growth in the number of bad actors who are taking advantage of this service with the goal of abusing it for crypto mining … in april we saw the number of build hours spike x our usual load and by the end of the month we had already deactivated ~ , accounts due to mining abuse. the following week we had another ~ miners spin up.” [docker] are crypto miners money transmission businesses? well, fincen explicitly says that creating fresh coins and distributing them to your pool is not. but miners also process transactions — and the trouble is that they pick and choose which ones they process. one mining pool, marathon, was blocking transactions that ofac didn’t like — but stopped because of …  complaints from pool participants. nicholas weaver points out that this completely gives the game away: miners have always been able to comply with money transmission rules, they just got away with not doing it. [the block; lawfare] why you should care about bitcoin if you care about the planet: “bitcoin is bringing dirty power plants out of retirement. 
earthjustice is fighting this new trend in order to put an end to fossil fuels once and for all.” [earthjustice] shocked to see that the timeline for ethereum moving to eth and getting off proof-of-work mining has been put back to late … about months from now. this is mostly from delays in getting sharding to work properly. vitalik buterin says that this is because the ethereum team isn’t working well together. [tokenist] detect the malware — or become the malware? norton antivirus will soon have a function to mine ethereum on the user’s graphics card. nobody knows why they thought this was in any way a good idea. [press release] chia farming malware may have been spotted in the wild, infecting qnap file servers. [qnap forums]   cops accidentally shut down operation actually harmful to society https://t.co/ac bgvswzz — tom hatfield (@wordmercenary) may ,   regulatory clarity bitcoin was invented as a new form of money, free of government coercion! the trouble there is when it interacts with the government-regulated world of actual money. governments are stupid and incompetent in a great many ways — i mean, i live in the uk. my natural inclination is that governments can bugger off out of my face. which is why it annoys me about bitcoin that it keeps making the case for statism. i had someone call me out on this: my books are way too much like advocacy of the existing system. and it’s true — because both bitcoin and facebook came up with ideas that were even worse than what we have now. getting your money out past awful governments is absolutely a crypto use case. i remain sceptical of many of the people advocating it, because a lot are just scammy crooks — or at best, craven number-go-up guys pretending to care about human rights. the biden administration is proposing to collect data on foreign crypto investors active in the us, to “bolster international cooperation” and crack down on tax evasion. 
[bloomberg] uk legacy fiat banksters can’t cope with the demands of the burgeoning crypto economy, and they’re blocking payments to exchanges, claiming “high levels of suspected financial crime” or some such nonsense. this time it’s barclays, but also the “challenger” fintech banks monzo and starling. reddit also reports revolut blocking transactions. [telegraph; reddit] why are crypto businesses not getting registered with the fca in the uk? “a significantly high number of businesses are not meeting the required standards under the money laundering regulations.” the deadline is now march . [fca] cftc commissioner dan berkovitz is unhappy with defi — specifically, that it’s an unregulated market for the purpose of trading derivatives of commodities. [cftc] thailand has banned crypto exchanges from trading “gimmick” meme tokens, nfts, and tokens issued by “digital asset exchanges or related persons.” [bangkok post] the sec has updated its list of unregistered soliciting entities, who use questionable information to solicit investment. lots of crypto firms here. [sec press release; unregistered soliciting entities, pdf]   i engage in complex trading strategies; in real terms i’m consistently down but my imaginary gains are always limitless. — josh cincinnati (@acityinohio) may ,   central banking, not on the blockchain the bank of england discussion paper “new forms of digital money” is about libra/diem-style coins and central bank digital currencies (cbdcs). it’s not at all about crypto trading coins like tether, though that’s what the press has latched onto, all citing jemima kelly in the financial times laughing the tether reserves pie charts out of the room. anyway, the bank looks forward to your comments — get them in by september. [bank of england; ft, paywalled] in , ecuador tried to do a cbdc, based on us dollars: sistema de dinero electrónico. it failed pretty hard. i blogged about it here, and wrote it up in chapter of libra shrugged. 
now there’s a detailed history in the latin american journal of central banking, by andrés arauza, rodney garratt and diego f. ramos. [science direct] frances coppola: “i’ve written up my analysis of the bis’s proposed capital regulations for cryptocurrencies and stablecoins. with a primer on bank capital and reserves, since people still don’t seem to know what these are and how they differ from each other.” this is about banks’ own capital, not liabilities to customers. [blog post]   i have previously simplified that down to: "if someone says 'blockchain solves x', substitute database for blockchain. if it still makes sense, use a database. if it doesn't make sense, blockchain doesn't solve it." — myz lilith (@myzlilith) may ,   sales receipt fan fiction protos: “the nft market has imploded over the past month, with sales in every single category almost entirely drying up.” this was based on data from nonfungible.com — which they say protos misinterpreted. amy castor, in artnet, concurs with nonfungible.com — the nft market is down, but not to the extent protos painted it.  [protos; nonfungible.com; artnet, paywalled] bbc: buying a pink nft cat was a crypto nightmare — a normal person tries to buy an nft, and discovers that, after years, crypto is still basically unusable garbage for normal humans. with quotes from me. crypto still has the usability of a bunch of wires on a lab bench. crypto pumpers really hate having that pointed out, and will always blame the victim. in thirty years, the crypto bros will be saying “it’s early days, give it thirty years, time will tell.” [bbc] techmonitor: ‘the apotheosis of ownership’: what is the future of nfts? with quotes from me. [techmonitor]   i have an idea for a data structure, hear me out. a linked list where every node contains a hash of all the data in the nodes behind it, and every time you want to add a new node, you need about . 
other computers to say ok and consume the power equivalent of a small nation — Ólafur waage (@olafurw) may ,   things happen bbc on crypto day trading: “this is the crack cocaine of gambling because it is so fast. it’s / . it’s on your phone, your laptop, it’s in your bedroom.” [bbc] someone just told me what ibm blockchain was charging for “managed blockchain”: on the order of $ per node per month. “i was speechless after receiving the quote.” ibm didn’t quite make the case against postgresql. this is a hilarious story from the s, as told in : how the government set up a fake bank to launder drug money. (podcast plus transcript.) [public radio east] beginning august, google will no longer accept ads for defi protocols, decentralised exchanges, icos, crypto loans or similar financial products. [google support] wealth manager ruffer handles large investments for “institutions, wealthy individuals and charities.” ruffer got into bitcoin in november , and assured the crypto world that they were in this for the long haul. it turns out ruffer sold up in april , for a tidy profit. the “speculative frenzy” was making them nervous. [ft, paywalled]   i wish people would stop saying “crypto” when they mean “cybercrypto”. words have meaning, people. — matt blaze (@mattblaze) june ,   hot takes new bitfinex’ed blog post just dropped: tether is setting a new standard for transparency, that is untethered from facts. a catch-up on the tether situation, as the rest of the world finally starts noticing there might be a problem here. [medium] there’s another podcast series about the quadriga crypto exchange and its allegedly-deceased founder, gerry cotten. this one is “exit scam”. for once, i’m not in this one. [exit scam] cas piancey: michael saylor of microstrategy has always been like this — what actually went down at microstrategy, and then at the sec, from to . 
[medium] the marshall islands sov deconstructed, by imf researcher sonja davidson — sonja was one of the people who helped look over the cbdc chapter in libra shrugged. [global fintech intelligencer] requiem for a bright idea ( ) — a post-mortem on david chaum’s digicash, a predecessor of bitcoin. [forbes, ] at last, a worthy successor to brazil.txt, but this time with grown-ups — the founder of zebi, the “indian eth,” pumps, and dumps. [reddit]   help i'm in an abusive relationship pic.twitter.com/vie rliewr — cryptofungus (@crypt fungus) june ,   living on video i’ll talk to almost any podcast or media in good faith, at the drop of a hat. (email me!) but i keep being asked to talk about crypto on clubhouse. unfortunately, clubhouse isn’t on android yet — and my a is apparently too boomer to run the clubhouse for android preview. so if you want me to debate cryptos on clubhouse, you’ll need to buy me an unlocked iphone or galaxy s . demonstrate your proof-of-stake on this one. (i am being silenced by bad faith cultural marxist wokeists not buying me a new top-end phone.) when the music stops is aviv milner’s new skeptical podcast about cryptocurrency. i went on to talk about chia and the various disk-space coins. [anchor.fm] ntd: china’s robinhoods eye us market with cryptos — with me, : on. not very crypto news, this is more about the daytrading, i.e., gambling market, with chinese stock daytrading companies looking to get into the us and offer cryptos — which they can’t do in china. because china hates cryptos. [youtube] i also did a pile of press on el salvador — it’s disconcerting to discover that my blog posts and foreign policy article seem to be the primary sources of information on the scheme. but those will be in my next el salvador post! as well as bukele, there’s the weird strike and bitcoin beach factions. the el salvador bitcoin caper would make a hilarious slapstick comedy about crook vs. crook vs. 
crook — if it wasn’t real life, with six and a half million victims.   already loving this book “attack of the foot blockchain” by @davidgerard pic.twitter.com/ j kcneqb — bichael (@mikelewisatx) june ,   your subscriptions keep this site going. sign up today! comments on “news: stopping ransomware, china hates miners, ecuador cbdc history, nfts still too hard to buy” ingvar says: th june at : am fuel for future write-ups: https://irony- .medium.com/the-melting-of-iron- b e allan says: st june at : am the link to frances coppola’s blog post does not work. need to add www subdomain, here is working link: https://www.coppolacomment.com/ / /bank-capital-and-cryptocurrencies.html david gerard says: st june at : am well, it worked before! fixed 🙂 
terry's worklog – on my work (programming, digital libraries, cataloging) and other stuff that perks my interest (family, cycling, etc) marcedit update round-up a handful of updates have been posted related to marcedit . since the program came out of beta.  these have been mostly bug fixes and small enhancements.  here’s the full list: bug fix: oclc search – multiple terms would result in an error if ‘or’ was used with specific search indexes. fixed: / enhancement: oclc… continue reading marcedit update round-up published july , categorized as marcedit marcedit . .x/marcedit mac . .x: coming out of beta marcedit . /marcedit mac . is officially out of beta.  it has been my primary version of marcedit for about months and is where all new development has taken place since dec. .  because there are significant changes (including framework support) – marcedit . / . are not in-place upgrades.  previous versions of marcedit can be installed… continue reading marcedit . .x/marcedit mac . .x: coming out of beta published june , categorized as marcedit exploring bibframe workflows in marcedit update: / / : i uploaded a video with sound that demonstrates the process.  you can find it here: during this past year while working on marcedit . .x/ . .x, i’ve been giving some thought to how i might be able to facilitate some workflows to allow users to move data to and from bibframe.  while the tools has… continue reading exploring bibframe workflows in marcedit published june , categorized as bibframe, marcedit thoughts on naco’s proposed process on updating cjk records i would like to take a few minutes and share my thoughts about an updated best practice recently posted by the pcc and naco related to an update on cjk records. the update is found here: https://www.loc.gov/aba/pcc/naco/cjk/cjk-best-practice-ncr.docx. i’m not certain if this is active or simply a proposal, but i’ve been having a number… continue reading thoughts on naco’s proposed process on updating cjk records published april , categorized as cataloging, marcedit marcedit . update changelog: https://marcedit.reeset.net/software/update .txt highlights preview changes one of the most requested features over the years has been the ability to preview changes prior to running them.  as of . . – a new preview option has been added to many of the global editing tools in the marceditor.  currently, you will find the preview option attached to… continue reading marcedit . update published april , categorized as marcedit how do i generate marc authority records from the homosaurus vocabulary? step by step instructions here: https://youtu.be/fjsdqi pzpq ok, so last week, i got an interesting question on the listserv where a user asked specifically about generating marc records for use in one’s ils system from a jsonld vocabulary. 
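the post above is about generating marc authority records from a jsonld vocabulary. as a rough illustration of the general idea (this is not marcedit's actual workflow; the sample concept, its field names, and the chosen marc fields are all hypothetical), a skos-style jsonld concept can be mapped to marcedit-style mnemonic (.mrk) authority fields — 150 for the preferred term, 450 for variant terms, 024 for the concept uri:

```python
import json

# hypothetical jsonld concept, shaped loosely like a skos vocabulary entry;
# the property names and uri here are illustrative only.
concept_jsonld = json.dumps({
    "@id": "https://example.org/terms/demoTerm",
    "skos:prefLabel": "Example heading",
    "skos:altLabel": ["Variant heading"],
})

def concept_to_mrk(doc: str) -> list[str]:
    """render a concept as marc authority fields in mnemonic (.mrk) style:
    150 (topical term) for the preferred label, 450 (see-from tracing) for
    each variant label, 024 (other standard identifier, $2 source 'uri')
    for the concept uri. '\\' stands for a blank indicator."""
    c = json.loads(doc)
    lines = [f"=150  \\\\$a{c['skos:prefLabel']}"]
    for alt in c.get("skos:altLabel", []):
        lines.append(f"=450  \\\\$a{alt}")
    lines.append(f"=024  7\\$a{c['@id']}$2uri")
    return lines

for line in concept_to_mrk(concept_jsonld):
    print(line)
```

a real conversion would also need the leader, 008, and a 1xx choice driven by the vocabulary's term types; this sketch only shows the label-to-field mapping.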
in this case, the vocabulary in question as homosaurus (homosaurus vocabulary site) – and the questioner was specifically… continue reading how do i generate marc authority records from the homosaurus vocabulary? published april , categorized as marcedit marcedit: state of the community * - * sigh – original title said - .  obviously, this is for this past year (jan. -dec. , ).   per usual, i wanted to take a couple minutes and look at the state of the marcedit project. this is something that i try to do once a year to gauge the current health of the community,… continue reading marcedit: state of the community * - published march , categorized as marcedit, uncategorized marcedit . .x/ . .x (beta) updates versions are available at: https://marcedit.reeset.net/downloads information about the changes: . . change log: https://marcedit.reeset.net/software/update .txt . . change log: https://marcedit.reeset.net/software/update .txt if you are using .x – this will prompt as normal for update. . .x is the beta build, please be aware i expect to be releasing updates to this build weekly and also expect to find some issues.… continue reading marcedit . .x/ . .x (beta) updates published february , categorized as marcedit marcedit . .x/macos . .x timelines i sent this to the marcedit listserv to provide info about my thoughts around timelines related to the beta and release.  here’s the info. dear all, as we are getting close to feb. (when i’ll make the . beta build available for testing) – i wanted to provide information about the update process going… continue reading marcedit . .x/macos . .x timelines published january , categorized as marcedit marcedit . change/bug fix list * updated; / change: allow os to manage supported supported security protocol types. change: remove com.sun dependency related to dns and httpserver change: changed appdata path change: first install automatically imports settings from marcedit . 
- .x change: field count – simplify ui (consolidate elements) change: windows — update help urls to oclc change: generate fast… continue reading marcedit . change/bug fix list published january , categorized as marcedit posts navigation page … page older posts search… terry's worklog proudly powered by wordpress. dark mode: jacques maritain - wikipedia jacques maritain from wikipedia, the free encyclopedia jump to navigation jump to search french philosopher this article includes a list of general references, but it remains largely unverified because it lacks sufficient corresponding inline citations. please help to improve this article by introducing more precise citations. (november ) (learn how and when to remove this template message) jacques maritain maritain in the s born ( - - ) november paris, france died april ( - - ) (aged  ) toulouse, france alma mater university of paris spouse(s) raïssa maritain ​ ​ (m.  ; died  )​ era th-century philosophy region western philosophy school existential thomism main interests philosophy of religion political theory philosophy of science metaphysics influences thomas aquinas aristotle charles baudelaire Étienne gilson charles maurras john of st. thomas influenced mortimer j. adler yves congar jean daujat john haddox ivan illich john f. x. 
knasas emmanuel mounier[ ] yves simon part of a series on christian democracy organizations list of christian democratic parties centrist democrat international christian democrat organization of america european people's party european christian political movement konrad adenauer foundation conservative christian fellowship centre for european studies center for public justice ideas catholic social teaching christian corporatism christian ethics communitarianism consistent life ethic cultural mandate culture of life dignity of labor distributism gremialismo just war theory liberation theology neo-calvinism neo-scholasticism popolarismo progressive conservatism social conservatism social democracy social gospel social market economy solidarity sphere sovereignty stewardship subsidiarity welfare / welfare state documents rerum novarum kuyper's stone lectures on calvinism graves de communi re quadragesimo anno laborem exercens sollicitudo rei socialis centesimus annus laudato si' people guillaume groen van prinsterer herman dooyeweerd konrad adenauer giulio andreotti alcide de gasperi eduardo frei montalva keith joseph wilhelm emmanuel von ketteler helmut kohl abraham kuyper pope leo xiii jacques maritain pope pius xi robert schuman luigi sturzo josé maría arizmendiarrieta related topics christian left christian anarchism christian communism christian socialism christian right christian libertarianism christian nationalism theoconservatism liberal democracy religious democracy buddhist islamic jewish mormon social democracy ethical socialism politics portal  christianity portal v t e part of a series on catholic philosophy   aquinas, scotus, and ockham ethics cardinal virtues just price just war probabilism natural law personalism social teaching virtue ethics schools augustinianism cartesianism molinism occamism salamanca scholasticism neo-scholasticism scotism thomism philosophers ancient ambrose athanasius the great augustine of hippo clement of alexandria cyprian 
Jacques Maritain (French: [maʁitɛ̃]; November – April ) was a French Catholic philosopher. Raised Protestant, he was agnostic before converting to Catholicism in . An author of more than  books, he helped to revive Thomas Aquinas for modern times, and was influential in the development and drafting of the Universal Declaration of Human Rights.
Pope Paul VI presented his "Message to Men of Thought and of Science" at the close of Vatican II to Maritain, his long-time friend and mentor. The same pope had seriously considered making him a lay cardinal, but Maritain rejected the idea.[ ] Maritain's interests and works spanned many aspects of philosophy, including aesthetics, political theory, philosophy of science, metaphysics, the nature of education, liturgy, and ecclesiology.

Life

Maritain was born in Paris, the son of Paul Maritain, a lawyer, and his wife Geneviève Favre, daughter of Jules Favre, and was reared in a liberal Protestant milieu. He was sent to the Lycée Henri-IV and later attended the Sorbonne, where he studied the natural sciences: chemistry, biology, and physics. At the Sorbonne he met Raïssa Oumançoff, a Russian Jewish émigré. They married in . A noted poet and mystic, she participated as his intellectual partner in his search for truth. Raïssa's sister, Vera Oumançoff, lived with Jacques and Raïssa for almost all their married life.

At the Sorbonne, Jacques and Raïssa soon became disenchanted with scientism, which could not, in their view, address the larger existential questions of life. In , in light of this disillusionment, they made a pact to commit suicide together if they could not discover some deeper meaning to life within a year. They were spared from following through on this because, at the urging of Charles Péguy, they attended the lectures of Henri Bergson at the Collège de France. Bergson's critique of scientism dissolved their intellectual despair and instilled in them "the sense of the absolute."
Then, through the influence of Léon Bloy, they converted to the Catholic faith in .[ ]

In the fall of  the Maritains moved to Heidelberg, where Jacques studied biology under Hans Driesch. Driesch's theory of neo-vitalism attracted Jacques because of its affinity with Henri Bergson. During this time Raïssa fell ill, and during her convalescence their spiritual advisor, a Dominican friar named Humbert Clérissac, introduced her to the writings of Thomas Aquinas. She read them with enthusiasm and, in turn, exhorted her husband to examine the saint's writings. In Thomas, Maritain found a number of insights and ideas that he had believed all along. He wrote:

Thenceforth, in affirming to myself, without chicanery or diminution, the authentic value of the reality of our human instruments of knowledge, I was already a Thomist without knowing it ... When several months later I came to the Summa Theologiae, I would construct no impediment to its luminous flood.

From the Angelic Doctor (the honorary title of Aquinas), he was led to "the Philosopher," as Aquinas called Aristotle. Still later, to further his intellectual development, he read the neo-Thomists.

Beginning in , Maritain taught at the Collège Stanislas. He later moved to the Institut Catholique de Paris. For the – academic year he taught at the Petit Séminaire de Versailles. In  Maritain and Étienne Gilson received honorary doctorates in philosophy from the Pontifical University of Saint Thomas Aquinas, Angelicum.[ ] In , he gave his first lectures in North America, in Toronto at the Pontifical Institute of Mediaeval Studies. He also taught at Columbia University; at the Committee on Social Thought, University of Chicago; at the University of Notre Dame; and at Princeton University.

From  to , he was the French ambassador to the Holy See. Afterwards, he returned to Princeton University, where he achieved the "Elysian status" (as he put it) of a professor emeritus in . Raïssa Maritain died in .
After her death, Jacques published her journal under the title Raïssa's Journal.

For several years Maritain was an honorary chairman of the Congress for Cultural Freedom, appearing as a keynote speaker at its conference in Berlin.[ ]

From , Maritain lived with the Little Brothers of Jesus in Toulouse, France. He had influenced the order since its foundation in , and he became a Little Brother in .[ ]

In an interview published by Commonweal magazine, he was asked whether he was a Freemason. Maritain replied: "That question offends me, for I should have a horror of belonging to Freemasonry. So much the worse for well-intentioned people whose anxiety and need for explanations would have been satisfied by believing me to be one."[ ]

Jacques and Raïssa Maritain are buried in the cemetery of Kolbsheim, a little French village in Alsace where he had spent many summers at the estate of his friends Antoinette and Alexander Grunelius.[ ]

Sainthood cause

A cause for the beatification of Maritain and his wife Raïssa is being planned.[ ]

Work

The foundation of Maritain's thought is Aristotle, Aquinas, and the Thomistic commentators, especially John of St. Thomas. He is eclectic in his use of these sources. Maritain's philosophy is based on evidence accrued by the senses and acquired by an understanding of first principles. He defended philosophy as a science against those who would degrade it, and promoted it as the "queen of sciences."

In , Jacques Maritain completed his first contribution to modern philosophy, a -page article titled "Reason and Modern Science," published in the Revue de Philosophie (June issue). In it, he warned that science was becoming a divinity, its methodology usurping the role of reason and philosophy; science was supplanting the humanities in importance.[ ]

In , a committee of French bishops commissioned Jacques to write a series of textbooks to be used in Catholic colleges and seminaries.
He wrote and completed only one of these projects, Éléments de philosophie (Introduction to Philosophy), in . It has been a standard text ever since in many Catholic seminaries. He wrote in its introduction:

If the philosophy of Aristotle, as revived and enriched by Thomas Aquinas and his school, may rightly be called the Christian philosophy, both because the Church is never weary of putting it forward as the only true philosophy and because it harmonizes perfectly with the truths of faith, nevertheless it is proposed here for the reader's acceptance not because it is Christian, but because it is demonstrably true. This agreement between a philosophic system founded by a pagan and the dogmas of revelation is no doubt an external sign, an extra-philosophic guarantee of its truth; but it is from its own rational evidence that it derives its authority as a philosophy.

During the Second World War, Jacques Maritain protested the policies of the Vichy government while teaching at the Pontifical Institute for Medieval Studies in Canada. "Moving to New York, Maritain became deeply involved in rescue activities, seeking to bring persecuted and threatened academics, many of them Jews, to America. He was instrumental in founding the École Libre des Hautes Études, a kind of university in exile that was, at the same time, the center of Gaullist resistance in the United States." After the war, in a papal audience on July , he tried unsuccessfully to have Pope Pius XII officially denounce anti-semitism.[ ]

Many of his American papers are held by the University of Notre Dame, which established the Jacques Maritain Center in . The Cercle d'Études Jacques & Raïssa Maritain is an association founded by the philosopher himself in  in Kolbsheim (near Strasbourg, France), where the couple is also buried. The purpose of these centers is to encourage the study and research of Maritain's thought and to expand upon it; they are also engaged in translating and editing his writings.
Metaphysics and epistemology

Maritain's philosophy is based on the view that metaphysics is prior to epistemology. Being is first apprehended implicitly in sense experience, and is known in two ways. First, being is known reflexively, by abstraction from sense experience: one experiences a particular being, e.g. a cup, a dog, etc., and through reflexion ("bending back") on the judgement, e.g. "this is a dog," one recognizes that the object in question is an existent. Second, in light of attaining being reflexively through apprehension of sense experience, one may arrive at what Maritain calls "an intuition of being." For Maritain this is the point of departure for metaphysics; without the intuition of being one cannot be a metaphysician at all. The intuition of being involves rising to the apprehension of ens secundum quod est ens (being insofar as it is a being). In Existence and the Existent he explains:

"It is being, attained or perceived at the summit of an abstractive intellection, of an eidetic or intensive visualization which owes its purity and power of illumination only to the fact that the intellect, one day, was stirred to its depths and trans-illuminated by the impact of the act of existing apprehended in things, and because it was quickened to the point of receiving this act, or hearkening to it, within itself, in the intelligible and super-intelligible integrity of the tone particular to it." (p. )

In view of this priority given to metaphysics, Maritain advocates an epistemology he calls "critical realism." Maritain's epistemology is not "critical" in Kant's sense, which held that one could only know anything after undertaking a thorough critique of one's cognitive abilities. Rather, it is critical in the sense that it is not a naive or non-philosophical realism, but one that is defended by way of reason.
Against Kant's critical project, Maritain argues that epistemology is reflexive: you can only defend a theory of knowledge in light of knowledge you have already attained. Consequently, the critical question is not the question of modern philosophy (how do we pass from what is perceived to what is?). Rather, "since the mind, from the very start, reveals itself as warranted in its certitude by things and measured by an esse independent of itself, how are we to judge if, how, on what conditions, and to what extent it is so both in principle and in the various moments of knowledge?"

In contrast, idealism inevitably ends up in contradiction, since it does not recognize the universal scope of the first principles of identity, contradiction, and finality. These become merely laws of thought or language, but not of being, which opens the way to contradictions being instantiated in reality.

Maritain's metaphysics ascends from this account of being to a critique of the philosophical aspects of modern science, and through analogy to an account of the existence and nature of God as he is known philosophically and through mystical experience.

Ethics

Maritain was a strong defender of natural law ethics. He viewed ethical norms as rooted in human nature. For Maritain, the natural law is known primarily not through philosophical argument and demonstration, but rather through "connaturality." Connatural knowledge is a kind of knowledge by acquaintance: we know the natural law through our direct acquaintance with it in our human experience. Of central importance is Maritain's argument that natural rights are rooted in the natural law. This was key to his involvement in the drafting of the UN's Universal Declaration of Human Rights.

Another important aspect of his ethics was his insistence upon the need for moral philosophy to be conducted in a theological context.
While a Christian could engage in speculative thought about nature or metaphysics in a purely rational manner and develop an adequate philosophy of nature or metaphysics, this is not possible with ethics. Moral philosophy must address the actual state of the human person, and this is a person in a state of grace. Thus "moral philosophy adequately considered" must take into account properly theological truths. It would be impossible, for instance, to develop an adequate moral philosophy without giving consideration to properly theological facts such as original sin and the supernatural end of the human person in beatitude. Any moral philosophy that does not take into account these realities, which are known only through faith, would be fundamentally incomplete.[ ]

Political theory

Maritain corresponded with, and was a friend of,[ ] the American radical community organizer Saul Alinsky[ ] and French prime minister Robert Schuman.[ ] In the study The Radical Vision of Saul Alinsky, author P. David Finks noted that "for years Jacques Maritain had spoken approvingly to Montini of the democratic community organizations built by Saul Alinsky." Accordingly, in  Maritain arranged for a series of meetings between Alinsky and Archbishop Montini in Milan.
Before the meetings, Maritain had written to Alinsky that "the new cardinal was reading Saul's books and would contact him soon."[ ] In an interview from , Pope Francis praised Maritain among a small list of French liberal thinkers.[ ]

Integral humanism

Maritain advocated what he called "integral humanism" (or "integral Christian humanism").[ ] He argued that secular forms of humanism were inevitably anti-human in that
they refused to recognize the whole person. Once the spiritual dimension of human nature is rejected, we no longer have an integral but merely a partial humanism, one which rejects a fundamental aspect of the human person. Accordingly, in Integral Humanism he explores the prospects for a new Christendom, rooted in his philosophical pluralism, in order to find ways Christianity could inform political discourse and policy in a pluralistic age. In this account he develops a theory of cooperation, to show how people of different intellectual positions can nevertheless cooperate to achieve common practical aims. Maritain's political theory was extremely influential and was a primary source behind the Christian democratic movement.

Criticism

Major criticisms of Maritain have included:

Santiago Ramírez argued that Maritain's "moral philosophy adequately considered" could not be distinguished in any meaningful way from moral theology as such.[ ]

Tracey Rowland, a theologian at the University of Notre Dame (Australia), has argued that the lack of a fully developed philosophy of culture in Maritain and others (notably Rahner) was responsible for an inadequate notion of culture in the documents of Vatican II, and thereby for much of the misapplication of the conciliar texts in the life of the Church following the Council.[ ]

Maritain's political theory has been criticized for a democratic pluralism that appeals to something very similar to the later liberal philosopher John Rawls's conception of an overlapping consensus of reasonable views. It is argued that such a view illegitimately presupposes the necessity of pluralistic conceptions of the human good.[ ]

Sayings

"Vae mihi si non thomistizavero" [Woe to me if I do not Thomisticize].[ ]

"Je n'adore que Dieu" [I adore only God].

"The artist pours out his creative spirit into a work; the philosopher measures his knowing spirit by the real."

"I do not know if Saul Alinsky knows God.
But I assure you that God knows Saul Alinsky."

"We do not need a truth to serve us, we need a truth that we can serve."

Writings

Significant works in English

Introduction to Philosophy, Christian Classics, Inc., Westminster, MD, st. , .
The Degrees of Knowledge, orig.
Integral Humanism, orig.
An Introduction to Logic ( )
A Preface to Metaphysics ( ) ( )
Education at the Crossroads, Engl.
The Person and the Common Good, Fr.
Art and Scholasticism with Other Essays, Sheed and Ward, London,
Existence and the Existent (Fr. ), trans. Lewis Galantiere and Gerald B. Phelan, Image Books division of Doubleday & Co., Inc., Garden City, NY, , Image Book, . ISBN - - - -
Philosophy of Nature ( )
The Range of Reason, Engl.
Approaches to God, Engl.
Creative Intuition in Art and Poetry, Engl.
Man and the State (orig.), University of Chicago Press, Chicago, IL, .
A Preface to Metaphysics, Engl.
God and the Permission of Evil, trans. Joseph W. Evans, The Bruce Publishing Company, Milwaukee, WI, (orig. ).
Moral Philosophy,
The Peasant of the Garonne: An Old Layman Questions Himself about the Present Time, trans. Michael Cuddihy and Elizabeth Hughes, Holt, Rinehart and Winston, NY, ; orig. .
The Education of Man: The Educational Philosophy of Jacques Maritain, ed. D./I. Gallagher, Notre Dame/Ind.

Other works in English

Religion and Culture ( )
The Things That Are Not Caesar's ( )
Theonas; Conversations of a Sage ( )
Freedom in the Modern World ( )
True Humanism ( ) (Integral Humanism, )
A Christian Looks at the Jewish Question ( )
The Twilight of Civilization ( )
Scholasticism and Politics, New York
Science and Wisdom ( )
Religion and the Modern World ( )
France, My Country Through the Disaster ( )
The Living Thoughts of St.
Paul ( )
France, My Country, Through the Disaster ( )
Ransoming the Time ( )
Christian Humanism ( )
Saint Thomas and the Problem of Evil, Milwaukee ;
Essays in Thomism, New York ;
The Rights of Man and Natural Law ( )
Prayer and Intelligence ( )
Give John a Sword ( )
The Dream of Descartes ( )
Christianity and Democracy ( )
Messages – , New York ;
A Faith to Live By ( )
The Person and the Common Good ( )
Art & Faith (with Jean Cocteau, )
The Pluralist Principle in Democracy ( )
Creative Intuition in Art and History ( )
An Essay on Christian Philosophy ( )
The Situation of Poetry (with Raïssa Maritain, )
Bergsonian Philosophy ( )
Reflections on America ( )
St. Thomas Aquinas ( )
The Degrees of Knowledge ( )
The Sin of the Angel: An Essay on a Re-interpretation of Some Thomistic Positions ( )
Liturgy and Contemplation ( )
The Responsibility of the Artist ( )
On the Use of Philosophy ( )
God and the Permission of Evil ( )
Challenges and Renewals, ed. J. W. Evans / L. R. Ward, Notre Dame/Ind.
On the Grace and Humanity of Jesus ( )
On the Church of Christ: The Person of the Church and Her Personnel ( )
Notebooks ( )
Natural Law: Reflections on Theory and Practice (ed., with introductions and notes, by William Sweet), St. Augustine's Press [distributed by University of Chicago Press], ; second printing, corrected, .

Original works in French

La philosophie bergsonienne ( )
Eléments de philosophie,  volumes, Paris /
Art et scolastique,
Théonas ou les entretiens d'un sage et de deux philosophes sur diverses matières inégalement actuelles, Paris, Nouvelle Librairie Nationale,
Antimoderne, Paris, Édition de la Revue des Jeunes,
Réflexions sur l'intelligence et sur sa vie propre, Paris, Nouvelle Librairie Nationale, .
Trois réformateurs : Luther, Descartes, Rousseau, avec six portraits, Paris [Plon],
Réponse à Jean Cocteau,
Une opinion sur Charles Maurras et le devoir des catholiques, Paris [Plon],
Primauté du spirituel,
Pourquoi Rome a parlé (coll.), Paris, Spes,
Quelques pages sur Léon Bloy, Paris
Clairvoyance de Rome (coll.), Paris, Spes,
Le docteur angélique, Paris, Paul Hartmann,
Religion et culture, Paris, Desclée de Brouwer, ( )
Le thomisme et la civilisation,
Distinguer pour unir ou les degrés du savoir, Paris
Le songe de Descartes, suivi de quelques essais, Paris
De la philosophie chrétienne, Paris, Desclée de Brouwer,
Du régime temporel et de la liberté, Paris, DDB,
Sept leçons sur l'être et les premiers principes de la raison spéculative, Paris
Frontières de la poésie et autres essais, Paris
La philosophie de la nature, essai critique sur ses frontières et son objet, Paris ( )
Lettre sur l'indépendance, Paris, Desclée de Brouwer, .
Science et sagesse, Paris
Humanisme intégral. Problèmes temporels et spirituels d'une nouvelle chrétienté (first published in Spanish, ), Paris (Fernand Aubier), ( )
Les juifs parmi les nations, Paris, Cerf,
Situation de la poésie,
Questions de conscience : essais et allocutions, Paris, Desclée de Brouwer,
La personne humaine et la société, Paris
Le crépuscule de la civilisation, Paris, Éd.
Les Nouvelles Lettres,
Quatre essais sur l'esprit dans sa condition charnelle, Paris ( )
De la justice politique, notes sur la présente guerre, Paris
A travers le désastre, New York ( )
Confession de foi, New York
La pensée de St. Paul, New York (Paris )
Les droits de l'homme et la loi naturelle, New York (Paris )
Christianisme et démocratie, New York (Paris )
Principes d'une politique humaniste, New York (Paris );
De Bergson à Thomas d'Aquin, essais de métaphysique et de morale, New York (Paris )
A travers la victoire, Paris ;
Pour la justice, articles et discours – , New York ;
Le sort de l'homme, Neuchâtel ;
Court traité de l'existence et de l'existant, Paris ;
La personne et le bien commun, Paris ;
Raison et raisons, essais détachés, Paris
La signification de l'athéisme contemporain, Paris
Neuf leçons sur les notions premières de la philosophie morale, Paris
Approches de Dieu, Paris .
L'homme et l'état (Engl.: Man and the State, ), Paris, PUF,
Pour une philosophie de l'éducation, Paris
Le philosophe dans la cité, Paris
La philosophie morale, vol. I : Examen historique et critique des grands systèmes, Paris
Dieu et la permission du mal,
Carnet de notes, Paris, DDB,
L'intuition créatrice dans l'art et dans la poésie, Paris, Desclée de Brouwer, (Engl. )
Le paysan de la Garonne. Un vieux laïc s'interroge à propos du temps présent, Paris, DDB,
De la grâce et de l'humanité de Jésus,
De l'Église du Christ. La personne de l'église et son personnel, Paris
Approches sans entraves, posthumous .
La loi naturelle ou loi non écrite, texte inédit, établi par Georges Brazzola. Fribourg, Suisse: Éditions Universitaires, . [Lectures on Natural Law, tr. William Sweet, in The Collected Works of Jacques Maritain, vol. VI, Notre Dame, IN: University of Notre Dame Press (forthcoming).]
Oeuvres complètes de Jacques et Raïssa Maritain,  vols., – .

See also

Personalism

Notes

^ Deweer, Dries ( ).
"The Political Theory of Personalism: Maritain and Mounier on Personhood and Citizenship" (PDF). International Journal of Philosophy and Theology. ( ): . doi: . / . . . ISSN - . S2CID .
^ Donald DeMarco. "The Christian Personalism of Jacques Maritain". EWTN. Archived from the original on December .
^ Hanna , p. 
^ Piero Viotto, Grandi amicizie: i Maritain e i loro contemporanei, , https://books.google.com/books?id=aonog klodic&pg=pa accessed February . Jean Leclercq, Di grazia in grazia: memorie, . https://books.google.com/books?id=jxknmftj ac&pg=pa accessed February .
^ "What Was the Congress for Cultural Freedom?". www.newcriterion.com.
^ Picón, María Laura ( ). "Jacques Maritain y los pequeños hermanos de Jesús" ["Jacques Maritain and the Little Brothers of Jesus"]. Studium Filosofía y Teología (in Spanish). T. , fasc. : – . ISSN - .
^ "An Interview with Jacques Maritain | Commonweal Magazine". www.commonwealmagazine.org. Retrieved November .
^ The most comprehensive biography of the Maritains is Jean-Luc Barré, Jacques and Raïssa Maritain: Beggars for Heaven, University of Notre Dame Press.
^ Beatification process for Jacques and Raïssa Maritain could begin, on YouTube ( February ).
^ Hanna , p. 
^ Richard Francis Crane ( ). "Heart-Rending Ambivalence: Jacques Maritain and the Complexity of Postwar Catholic Philosemitism". Studies in Christian-Jewish Relations. : – .
^ Maritain, An Essay on Christian Philosophy (NY: Philosophical Library, ), pp. ff.
^ Wolfe, C. J. "Lessons from the Friendship of Jacques Maritain with Saul Alinsky". The Catholic Social Science Review ( ): – (PDF). Archived from the original (PDF) on April .
^ Doering, Bernard E. ( ). "Jacques Maritain and His Two Authentic Revolutionaries" (PDF). In Kennedy, Leonard A. (ed.). Thomistic Papers. Houston, Tex.: Center for Thomistic Studies. pp. – . ISBN - - - . OCLC .
^ Fimister, Alan Paul ( ). Robert Schuman: Neo-Scholastic Humanism and the Reunification of Europe. p. . ISBN - - - - . OCLC .
^ Ferrara, Christopher A.
"Saul Alinsky and 'Saint' Pope Paul VI: Genesis of the Conciliar Surrender to the World". The Remnant Newspaper. Retrieved November .
^ "Interview Pope Francis". La Croix (in French). May . ISSN - . Retrieved November .
^ Sweet, William ( ). "Jacques Maritain". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.
^ Denis J. M. Bradley. Aquinas on the Twofold Human Good: Reason and Human Happiness in Aquinas's Moral Science. Washington: The Catholic University of America Press, .
^ Tracey Rowland, Culture and the Thomist Tradition: After Vatican II (Routledge Radical Orthodoxy).
^ Thaddeus J. Kozinski, The Political Problem of Religious Pluralism: And Why Philosophers Can't Solve It (Lexington Books, ).
^ Maritain, Jacques ( ). St. Thomas Aquinas: Angel of the Schools. J. F. Scanlan (trans.). London: Sheed & Ward. p. viii.

References

G. B. Phelan, Jacques Maritain, NY, .
J. W. Evans, in Catholic Encyclopaedia, vol. XVI supplement, – .
Michael R. Marrus, "The Ambassador & the Pope: Pius XII, Jacques Maritain & the Jews", Commonweal, October .
H. Bars, Maritain en notre temps, Paris, .
D. and I. Gallagher, The Achievement of Jacques and Raïssa Maritain: A Bibliography, – , NY, .
J. W. Evans, ed., Jacques Maritain: The Man and His Achievement, NY, .
C. A. Fecher, The Philosophy of Jacques Maritain, Westminster, MD, .
Jude P. Dougherty, Jacques Maritain: An Intellectual Profile, Catholic University of America Press.
Ralph McInerny, The Very Rich Hours of Jacques Maritain: A Spiritual Life, University of Notre Dame Press.
Hanna, Martha ( ). The Mobilization of Intellect: French Scholars and Writers during the Great War. Harvard University Press. ISBN .

Further reading

The Social and Political Philosophy of Jacques Maritain ( )
W.
Herberg (ed.), Four Existentialist Theologians ( )
The Philosophy of Jacques Maritain ( )
Jacques Maritain, Antimodern or Ultramodern?: An Historical Analysis of His Critics, His Thought, and His Life ( )

External links

Quotations related to Jacques Maritain at Wikiquote
Études maritainiennes – Maritain Studies
Maritain Center, Kolbsheim (in French)
Cercle d'Études J. & R. Maritain at Kolbsheim (France)
Jacques Maritain Center at the University of Notre Dame
Stanford Encyclopedia of Philosophy: "Jacques Maritain" by William Sweet
International Jacques Maritain Institute
Bibliography of the primary and secondary literatures on Jacques Maritain
Works by or about Jacques Maritain in libraries (WorldCat catalog)
Jacques Maritain, Man and the State ( )
revolution tommaso campanella pierre de bérulle pierre gassendi rené descartes mary of jesus of Ágreda antónio vieira jean-jacques olier louis thomassin jacques-bénigne bossuet françois fénelon cornelius jansen (jansenism) blaise pascal nicolas malebranche giambattista vico alphonsus liguori louis de montfort maria gaetana agnesi alfonso muzzarelli johann michael sailer clement mary hofbauer bruno lanteri th century joseph görres félicité de la mennais luigi taparelli antonio rosmini ignaz von döllinger john henry newman henri lacordaire jaime balmes gaetano sanseverino giovanni maria cornoldi wilhelm emmanuel freiherr von ketteler giuseppe pecci joseph hergenröther tommaso maria zigliara matthias joseph scheeben Émile boutroux modernism neo-scholasticism léon bloy désiré-joseph mercier friedrich von hügel vladimir solovyov marie-joseph lagrange george tyrrell maurice blondel thérèse of lisieux th century g. k. chesterton reginald garrigou-lagrange joseph maréchal pierre teilhard de chardin jacques maritain Étienne gilson ronald knox dietrich von hildebrand gabriel marcel marie-dominique chenu romano guardini edith stein fulton sheen henri de lubac daniel-rops jean guitton josemaría escrivá nouvelle théologie karl rahner yves congar bernard lonergan emmanuel mounier jean daniélou hans urs von balthasar alfred delp thomas merton rené girard johann baptist metz jean vanier henri nouwen st century carlo maria martini pope benedict xvi walter kasper raniero cantalamessa michał heller peter kreeft jean-luc marion tomáš halík scott hahn  catholicism portal authority control general integrated authority file (germany) isni viaf worldcat national libraries norway spain france (data) catalonia italy united states latvia japan czech republic australia greece israel korea croatia netherlands poland vatican art research institutes artist names (getty) scientific databases cinii (japan) other faceted application of subject terminology musicbrainz artist rero (switzerland) 
social networks and archival context sudoc (france) trove (australia) retrieved from "https://en.wikipedia.org/w/index.php?title=jacques_maritain&oldid= " categories: births deaths writers from paris lycée henri-iv alumni french political philosophers french roman catholic writers french roman catholics catholic philosophers epistemologists metaphysicians thomist philosophers virtue ethicists th-century french philosophers benedictine oblates christian humanists ambassadors of france to the holy see converts to roman catholicism from atheism or agnosticism committee on social thought french male writers critics of atheism corresponding fellows of the medieval academy of america hidden categories: cs : long volume value cs spanish-language sources (es) cs french-language sources (fr) articles with short description short description matches wikidata articles lacking in-text citations from november all articles lacking in-text citations use dmy dates from december articles with hcards wikipedia articles needing clarification from april all articles with specifically marked weasel-worded phrases articles with specifically marked weasel-worded phrases from june wikipedia articles with gnd identifiers wikipedia articles with isni identifiers wikipedia articles with viaf identifiers wikipedia articles with bibsys identifiers wikipedia articles with bne identifiers wikipedia articles with bnf identifiers wikipedia articles with cantic identifiers wikipedia articles with iccu identifiers wikipedia articles with lccn identifiers wikipedia articles with lnb identifiers wikipedia articles with ndl identifiers wikipedia articles with nkc identifiers wikipedia articles with nla identifiers wikipedia articles with nlg identifiers wikipedia articles with nli identifiers wikipedia articles with nlk identifiers wikipedia articles with nsk identifiers wikipedia articles with nta identifiers wikipedia articles with plwabn identifiers wikipedia articles with vcba identifiers wikipedia 
articles with ulan identifiers wikipedia articles with cinii identifiers wikipedia articles with fast identifiers wikipedia articles with musicbrainz identifiers wikipedia articles with rero identifiers wikipedia articles with snac-id identifiers wikipedia articles with sudoc identifiers wikipedia articles with trove identifiers wikipedia articles with worldcatid identifiers ac with elements wikipedia articles with multiple identifiers navigation menu personal tools not logged in talk contributions create account log in namespaces article talk variants views read edit view history more search navigation main page contents current events random article about wikipedia contact us donate contribute help learn to edit community portal recent changes upload file tools what links here related changes upload file special pages permanent link page information cite this page wikidata item print/export download as pdf printable version in other projects wikimedia commons wikiquote languages العربية azərbaycanca Беларуская català Čeština cymraeg deutsch eesti español euskara فارسی français galego 한국어 Հայերեն bahasa indonesia italiano Қазақша kiswahili latina magyar മലയാളം مصرى nederlands 日本語 norsk bokmål piemontèis polski português română Русский slovenčina slovenščina suomi svenska தமிழ் türkçe Українська yorùbá 中文 edit links this page was last edited on june , at :  (utc). text is available under the creative commons attribution-sharealike license; additional terms may apply. by using this site, you agree to the terms of use and privacy policy. wikipedia® is a registered trademark of the wikimedia foundation, inc., a non-profit organization. privacy policy about wikipedia disclaimers contact wikipedia mobile view developers statistics cookie statement epadd discovery module collection contributor guide - google docs javascript isn't enabled in your browser, so this file can't be opened. enable and reload. 
epadd discovery module collection contributor guide

computer professionals for social responsibility

computer professionals for social responsibility (cpsr) was a global organization promoting the responsible use of computer technology, headquartered in seattle, washington (cpsr.org). cpsr was incorporated following discussions and organizing that had begun a few years earlier. it educated policymakers and the public on a wide range of issues, and it incubated numerous projects such as privaterra, the public sphere project, the electronic privacy information center, the 21st century project, the civil society project, and the computers, freedom and privacy conference. founded by u.s. computer scientists at stanford university and xerox parc, cpsr had members in many countries on six continents, and was a non-profit organization registered in california. when cpsr was established, it was concerned solely with the use of computers in warfare. it focused on the strategic computing initiative, a u.s. defense project to use artificial intelligence in military systems, but added opposition to the strategic defense initiative (sdi) shortly after that program was announced. the boston chapter helped organize a debate on the software reliability of sdi systems that drew national attention to these issues ("software seen as obstacle in developing 'star wars'," philip m. boffey, the new york times). later, workplace issues, privacy, and community networks were added to cpsr's agenda.
cpsr began as a chapter-based organization, with chapters in palo alto, boston, seattle, austin, washington, d.c., portland (oregon) and other u.s. locations, as well as international chapters in countries including peru and spain. the chapters often developed innovative projects, including a slide show about the dangers of launch on warning (boston chapter) and the seattle community network (seattle chapter). cpsr sponsored two conference series: the participatory design conferences, held biennially, and the directions and implications of advanced computing (diac) symposia, launched in seattle. the diac symposia have been convened roughly every other year since then in conjunction with the community information research network (cirn) annual conference. several books (directions and implications of advanced computing; reinventing technology, rediscovering community; community practice in the network society; shaping the network society; liberating voices: a pattern language for communication revolution) and two special sections in the communications of the acm ("social responsibility" and "social computing") resulted from the diac symposia. cpsr awarded the norbert wiener award for social and professional responsibility; notable recipients include david parnas, joseph weizenbaum, kristen nygaard, barbara simons, antonia stone, peter g. neumann, marc rotenberg, mitch kapor, and douglas engelbart. the final award went posthumously to the organisation's first executive director, gary chapman. the organisation was dissolved in may.
a line in the sand – mayo clinic's role in insulin research | discovery's edge, research at mayo clinic

millions of patients with diabetes lead normal lives because they take insulin daily to control their condition. before insulin, their lives would have been far more tenuous. mayo clinic was at the forefront of the first clinical trials of insulin, ensuring that the new drug was safe and determining the proper doses for patients. bessie bakke had been perfectly healthy for years until she suddenly became fatigued, lost weight and developed fainting spells. misdiagnosed with anemia, she deteriorated until she was nearly comatose. in october her parents brought her to rochester, minnesota, where mayo clinic endocrinologist and researcher russell m. wilder, m.d., diagnosed her with juvenile diabetes, as type 1 diabetes was then known. (photo: russell wilder, m.d. mayo's modest diabetes pioneer was considered by his peers to be "the ultimate gentleman.") it was before the discovery of insulin, when patients with diabetes often were subjected to extreme treatments and had little hope for the future. "bessie b.," as she became known in the scientific literature, stayed at st. marys hospital, where she was placed on a ketogenic diet similar to the one pioneered by frederick allen, m.d., of the rockefeller institute.
the diet consisted of a precise proportion of carbohydrates, proteins and fats determined by her weight and her blood glucose levels, and it included a series of adjustments to customize it for her specific metabolism. dr. wilder and his colleague, physiologist walter m. boothby, m.d., tried bessie b. on different diets during her -day stay in the hospital. it was one of the first attempts at individualized medicine. every element of her diet was prepared in a special kitchen under the direction of trained dietitians. dr. wilder and his staff accounted for and analyzed everything that went into and was eliminated from her body. laboratory tests determined how quickly she was metabolizing food so that doctors could properly balance and time her feedings. the amount of food she received was small, just to calories a day. in order to survive, patients with diabetes during that time had to nearly starve. insulin, normally produced by the pancreas, helps control glucose, or sugar, levels in the blood. when pancreatic islets or cells slow or stop producing insulin, diabetes mellitus results. the blood is flooded with glucose, potentially causing a coma and death. hired by william j. mayo, m.d., in to run mayo clinic's diabetes unit, dr. wilder had few options at his disposal to treat bessie b. putting her on an extreme diet was crucial. dr. wilder was not alone in attempting to control diabetes through diet. a handful of doctors in new york, chicago and boston also were having mixed success with these specialized diets. some patients were living for three, four or five years after the onset of diabetes, provided they adhered strictly to the feeding regimen. one of those patients was elizabeth hughes, daughter of charles evans hughes, the former u.s. chief justice and, in , secretary of state. the "hughes child," as dr. wilder would later refer to her, was kept alive by dr. allen's special ketogenic diet. 
after several years on the treatment, she weighed pounds and was barely able to walk. another patient was a -year-old boy from chicago named randall sprague, who was kept alive by rollin woodyatt, m.d., dr. wilder's mentor at the university of chicago. what happened next was to change medicine forever. a new era (photo: frederick banting, m.d., and charles best with one of their test dogs on the roof of the medical building at the university of toronto, august. photo courtesy of the thomas fisher rare book library, university of toronto.) as they made their hospital rounds in the fall, dr. wilder and his colleagues at mayo clinic heard the news. university of toronto researcher dr. frederick banting and medical student charles best had been experimenting with dogs in the university laboratory of physiology professor dr. j.j.r. macleod. they had isolated, refined and proved the effectiveness of insulin in an arduous series of animal studies. the formal presentation of their discovery was made at a research conference that december in new haven, connecticut. "what a christmas gift that was — an extract of the pancreas developed at toronto, which effectively controls the symptoms of diabetes!" writes dr. wilder in his memoirs. "we learned still more about it at the meeting of the association of american physicians in the spring of . excitement prevailed." in early january , using a modified formula for the dog-derived insulin developed by biochemist james collip, dr. banting injected a -year-old boy named leonard thompson. daily doses of insulin allowed him to live another years. drs. banting and macleod received the nobel prize for physiology or medicine in for the discovery of insulin. correct insulin dosage critical photo of the same patient on dec.
, , and again on feb. , , after receiving insulin treatment. photo courtesy of eli lilly and company archives. the canadian doctors and researchers were instrumental in the first use of insulin, but no one yet knew the best strategy for administering insulin to the wide variety of patients who were hanging on to life, some with multiple complications. getting the dosage right was critical to keeping patients alive. too much insulin would cause a person to go hypoglycemic, with abnormally low blood glucose levels. with too little insulin, the body could no longer move glucose from the blood into the cells, causing high blood glucose levels. either condition could lead to diabetic coma and, in severe cases, death. the toronto group turned to a handful of top clinical researchers for help in determining optimal insulin dosing through experimental studies on patients. one of those was dr. russell wilder at mayo clinic. "samples of insulin were first received at mayo clinic in the early spring of ," reports dr. wilder. "they were for experimental trials ... but an adequate amount of insulin to insure everyone getting it who needed it was not available until the autumn of , and october of that year is the date which divides for us the insulin era from the pre-insulin era." it was as if someone had drawn an arbitrary line in the sands of time. once clinical studies and mass production made insulin widely available, the line was crossed. that was when the vast majority of patients with childhood diabetes had their first chance at a relatively normal life. most people today have no idea how dire a diagnosis of diabetes was at that time.
"we had children with diabetes in the mayo clinic between october , , and october , , a three-year period," dr. wilder writes. "one was moribund on arrival, received satisfactory training and a dietary regimen. nine survived long enough to benefit from insulin. the others died before it came." meeting of the minds in november , dr. wilder traveled to ontario to attend a meeting of north america's foremost clinical experts in diabetes at the time. in addition to the toronto hosts, also present were experts from new york city, chicago, boston, rochester, new york, and indianapolis — and from eli lilly and company, the international pharmaceutical manufacturer that would play a major role in the first mass production of insulin. dr. wilder's enthusiasm about the meeting leaps from the page of his notes: "never again was i to experience a thrill equal to that of being invited to attend the meeting in toronto of a small committee of experts, called together by professor j.j.r. macleod to undertake an extensive clinical evaluation of the product, insulin." over a cold weekend, the experts compared experiences on how to treat patients with the newly discovered insulin, what worked best to revive them from diabetic comas, and what symptoms might indicate potentially fatal complications. dr. wilder's notes show that the doctors were trying to establish dosage standards for insulin: "allen has given one dose every six hours. campbell gives his dosage a half-hour before meals." "woodyatt — reported a death contributed to by overdosage — no post mortem. four other patients receiving same dosages had symptoms but no ill effects." "gilchrist — has been testing potency of preparations on himself. reaction — fatigue — preparation increased pulse rate, tremor sensation." "banting — four children tell that they feel shaky." the physicians also discussed hyperglycemia, how to treat it, and any unusual cases. if one had observed something, the others had to know.
it was their only guide at this point. and they discussed the handful of patient deaths — from heart failure, sepsis and tuberculosis. autopsies were important, so they would know if insulin fatally interacted with other conditions. at the time, eli lilly and his managers had been urging drs. banting and macleod to patent their insulin formula so that it could be standardized and safely manufactured. patenting discoveries was not routine in academic circles at the time. in fact, it was frowned on, and dr. macleod was reluctant. yet colleagues told him it was the only way dosages could be mass-produced for further study. without a patented formula, others might develop weak or ineffective versions. in the middle of dr. wilder's notes is a stand-alone statement, apparently written as part of this effort and shared at the meeting: "in my opinion, the course being pursued by the university of toronto in offering a patent to control the manufacture of insulin is wise and commendable. without such control it will be impossible to protect humanity from dangerous preparations. dr. w. j. mayo concurs in the above. — russell m. wilder" the last three pages of notes are devoted to a clinical description of the hughes child. any doubt of her identity is erased by wilder's margin note: "this is the daughter of the chief justice of the supreme ct." the notes continue: "in august, at age , weight lbs., height inches. in november, the weight is lbs., height inches." the notes reveal that she was now on a different and more substantial diet, tolerating the insulin well, and thriving. after the discussion, the doctors were invited into her rooms, where they formed a semicircle around her as she was about to eat lunch. it was one of the most remarkable medical rounds in history. beyond the line (photo: randall sprague, m.d., one patient who survived long enough to benefit from the discovery of insulin and become a physician at mayo clinic.) dr.
wilder returned to minnesota after that november meeting but kept in close correspondence with his colleagues, continuing his clinical studies in an attempt to refine insulin dosing. after patients, they published a seminal paper in the journal of metabolism research, indicating that a range of to insulin units was needed to transition a patient to a normal diet. they dosed before breakfast and kept patients to a strict eating schedule, with meals at a.m., noon and : p.m. they presented detailed charts on patients of varying ages, offered first aid (epinephrine and orange juice) for patients slipping into lethargy, and concluded that physicians needed to treat patients qualitatively, watching them closely to establish the right dose for the individual. soon, case studies confirmed their early findings, along with two more major papers. after the arrival of insulin, children were admitted to the hospital at mayo clinic in those first six years of the new era. only were known to have died later of complications, and most of those dr. wilder attributed to care issues outside his influence. for bessie bakke, however, insulin came too late. after spending two and a half months at mayo clinic, she was doing well and had gone home, only to die a month later in october. dr. wilder, dr. walter boothby and chemist carol beeler had fully documented bessie b.'s complete metabolic activity for the consecutive days, the first detailed clinical study of its kind. they presented their findings that december and published them in january in the journal of biological chemistry. in contrast, elizabeth hughes, who was treated with insulin, lived long enough to become one of dr. banting's first patients. when she died, she had received , insulin injections in years. young randall sprague made it across the line into the insulin era, as well.
in fact, he thrived, attended medical school and eventually became a mayo clinic physician and a world-class endocrinologist. one of dr. wilder's closest colleagues in later years, dr. sprague wrote his friend's obituary. the mayo clinic team treated diabetic patients, saving most of them thanks to insulin, while developing one of the best-tested metabolic diets for patients with diabetes in the nation and an excellent diabetes education program. they added papers on insulin use for pregnant women and patients with comorbidities. today, mayo clinic still trains patients in the proper care of their diabetes using many of the methods established by dr. wilder. the diabetic handbook that bore dr. wilder's name — the primer for diabetic patients — went through nine editions. he also launched the mayo clinic diet manual, which had six editions, the last authors being drs. michael jensen and cliff gastineau and nutritionist jennifer nelson. the american diabetes association (ada) now awards the banting medal for scientific achievement as its highest scientific honor. three mayo clinic physicians have received the award: drs. wilder, leonard rowntree and robert rizza. and four have headed the ada, including dr. wilder. dr. wilder, who treated and saved thousands of patients and aided thousands more through his pioneering clinical research, remained the modest gentleman. in one of his last presentations before his death he said, "i can lay no claim to any great discovery, but i was a member of the crew and several of the ships engaged in exploration … and i must admit to a degree of pleasure in recalling these adventures." by robert nellis
© Mayo Foundation for Medical Education and Research (MFMER). All rights reserved.

#DLFteach Toolkits: Teaching with Digital Primary Sources

Welcome to the #DLFteach Toolkits: lesson plans for digital library instruction! This series of openly available, peer-reviewed lesson plans and concrete instructional strategies is the result of a project led by the Professional Development and Resource Sharing subgroup. This publication emerged from #DLFteach workshops, office hours, Twitter chats, and open meetings. Community members and digital pedagogy practitioners expressed interest in lesson plans and session outlines which they could use as a jumping-off point for their own instruction and adapt for local contexts. #DLFteach Toolkit . was published in and contains twenty-one lesson plans on a variety of topics related to digital library instruction. In , the organizers of Immersive Pedagogy: A Symposium on Teaching and Learning with D, Augmented and Virtual Reality, along with #DLFteach members, initiated volume of the toolkit with a focus on immersive technologies. To come in is a third edition of the toolkit series focused on the intersection of literacies in digital library instruction. All lessons include learning goals, preparation, and a session outline. Additional materials, including slides, handouts, assessments, and datasets, are hosted in the DLF OSF repository as well as linked from each lesson. Download slides to see notes for presenters, and data that is too large to render in preview. There you will also find Markdown versions of each lesson plan for you to use.
Google's constant product shutdowns are damaging its brand (Ars Technica)

Please just stop closing things: Google's product support has become a joke, and the company should be very concerned.

Ron Amadeo, Apr , : am UTC

[Image: An artist's rendering of Google's current reputation. Aurich Lawson]

It's only April, and has already been an absolutely brutal year for Google's product portfolio. The Chromecast Audio was discontinued January . YouTube annotations were removed and deleted January . Google Fiber packed up and left a Fiber city on February . Android Things dropped IoT support on February . Google's laptop and tablet division was reportedly slashed on March . Google Allo shut down on March . The "Spotlight Stories" VR studio closed its doors on March . The goo.gl URL shortener was cut off from new users on March . Gmail's IFTTT support stopped working March . And today, April , we're having a Google funeral double-header: both Google+ (for consumers) and Google Inbox are being laid to rest.
Later this year, Google Hangouts "classic" will start to wind down, and somehow also scheduled for is Google Music's "migration" to YouTube Music, with the Google service being put on death row sometime afterward. We are days into the year, and so far, Google is racking up an unprecedented body count. If we just take the official shutdown dates that have already occurred in , a Google-branded product, feature, or service has died, on average, about every nine days. Some of these product shutdowns have transition plans, and some of them (like Google+) represent Google completely abandoning a user base. The specifics aren't crucial, though. What matters is that every single one of these actions has a negative consequence for Google's brand, and the near-constant stream of shutdown announcements makes Google seem more unstable and untrustworthy than it has ever been. Yes, there was the one time Google killed Google Wave nine years ago or when it took Google Reader away six years ago, but things were never this bad. For a while there has been a subset of people concerned about Google's privacy and antitrust issues, but now Google is eroding the trust that its existing customers have in the company. That's a huge problem. Google has significantly harmed its brand over the last few months, and I'm not even sure the company realizes it.

Google products require trust and investment

[Image: The latest batch of dead and dying Google apps.]

Google is a platform company. Be it cloud compute, app and extension ecosystems, developer APIs, advertising solutions, operating-system pre-installs, or the storage of user data, Google constantly asks for investment from consumers, developers, and partner companies in the things it builds. Any successful platform will pretty much require trust and buy-in from these groups. These groups need to feel the platform they invest in today will be there tomorrow, or they'll move on to something else.
If any of these groups loses faith in Google, it could have disastrous effects for the company. Consumers want to know the photos, videos, and emails they upload to Google will stick around. If you buy a Chromecast or Google Home, you need to know the servers and ecosystems they depend on will continue to work, so they don't turn into fancy paperweights tomorrow. If you take the time to move yourself, your friends, and your family to a new messaging service, you need to know it won't be shut down two years later. If you begrudgingly join a new social network that was forced down your throat, you need to know it won't leak your data everywhere, shut down, and delete all your posts a few years later. There are also enterprise customers, who, above all, like safe bets with established companies. The old adage of "nobody ever got fired for buying IBM" is partly a reference to the enterprise's desire for a stable, steady, reliable tech partner. Google is trying to tackle this same market with its paid G Suite program, but the most it can do in terms of stability is post a calendar detailing the rollercoaster of consumer-oriented changes coming down the pipeline. There's a slower "scheduled release track" that delays the rollout of some features, but things like a complete revamp of Gmail eventually all still arrive. G Suite has a "core services" list meant to show confidence in certain products sticking around, but some of the entries there, like Hangouts and Google Talk, still get shut down. Developers gamble on a platform's stability even more than consumers do.
Consumers might trust a service with their data or spend money on hardware, but developers can spend months building an app for a platform. They need to read documentation, set up SDKs, figure out how APIs work, possibly pay developer startup fees, and maybe even learn a new language. They won't do any of this if they don't have faith in the long-term stability of the platform. Developers can literally build their products around paid-access Google APIs like the Google Maps API, and when Google does things like raise the price of the Maps API by x for some use cases, it is incredibly disruptive for those businesses and harmful to Google's brand. When apps like Reddit clients are flagged by Google Play "every other month" for the crime of displaying user-generated content, and when it's impossible to talk to a human at Google about anything, developers are less likely to invest in your schizophrenic ecosystem. Hardware manufacturers and other company partners need to be able to trust a company, too. Google constantly asks hardware developers to build devices dependent on its services. These are things like Google Assistant-compatible speakers and smart displays, devices with Chromecast built in, and Android and Chrome OS devices. Manufacturers need to know a certain product or feature they are planning to integrate will be around for years, since they need to both commit to a potentially multi-year planning and development cycle, and then it needs to survive long enough for customers to be supported for a few years. Watching Android Things chop off a major segment of its market nine months after launch would certainly make me nervous to develop anything based on Android Things. Imagine the risk Volvo is taking by integrating the new Android Auto OS into its upcoming Polestar : vehicles need around five years of development time and still need to be supported for several years after launch.
Google's shutdowns cast a shadow over the entire company

With so many shutdowns, tracking Google's body count has become a competitive industry on the internet. Over on Wikipedia, the list of discontinued Google products and services is starting to approach the size of the active products and services listed. There are entire sites dedicated to discontinued Google products, like killedbygoogle.com, the Google Cemetery, and didgoogleshutdown.com. I think we're seeing a lot of the consequences of Google's damaged brand in the recent Google Stadia launch. A game streaming platform from one of the world's largest internet companies should be grounds for excitement, but instead, the baggage of the Google brand has people asking if they can trust the service to stay running. In addition to the endless memes and jokes you'll see in every related comments section, you're starting to see Google skepticism in mainstream reporting, too. Over at The Guardian, this line makes the pullquote: "A potentially sticky fact about Google is that the company does have a habit of losing interest in its less successful projects." IGN has a whole section of a report questioning "Google's commitment." From a Digital Foundry video: "Google has this reputation for discontinuing services that are often good, out of nowhere." One of SlashGear's "Stadia questions that need answers" is "Can I trust you, Google?"

[Image: Google's Phil Harrison talks about the new Google Stadia controller. Google]

One of my favorite examples came from a Kotaku interview with Phil Harrison, the leader of Google Stadia. In an audio interview, the site lays this whopper of a question on him: "One of the sentiments we saw in our comments section a lot is that Google has a long history of starting projects and then abandoning them.
There's a worry, I think, from users who might think that Google Stadia is a cool platform, but if I'm connecting to this and spending money on this platform, how do I know for sure that Google is still sticking with it for two, three, five years? How can you guys make a commitment that Google will be sticking with this in a way that they haven't stuck with Google+, or Google Hangouts, or Google Fiber, Reader, or all the other things Google has abandoned over the years?" Yikes. Kotaku is totally justified to ask a question like this, but to have one of your new executives face questions of "When will your new product shut down?" must be embarrassing for Google. Harrison's response to this question started with a surprisingly honest acknowledgement: "I understand the concern." Harrison, seemingly, gets it. He seemingly understands that it's hard to trust Google after so many product shutdowns, and he knows the Stadia team now faces an uphill battle. For the record, Harrison went on to cite Google's sizable investment in the project, saying Stadia was "not a trivial product" and was a "significant cross-company effort." (Also for the record: you could say all the same things about Google+ a few years ago, when literally every Google employee was paid to work on it. Now it is dead.) Harrison and the rest of the Stadia team had nothing to do with the closing of Google Inbox, or the shutdown of Hangouts, or the removal of any other popular Google product. They are still forced to deal with the consequences of being associated with "Google the product killer," though. If Stadia was an Amazon product, I don't think we would see these questions of when it would shut down. Microsoft's game streaming service, Project xCloud, only faces questions about feasibility and appeal, not if Microsoft will get bored in two years and dump the project.
Ron Amadeo is the reviews editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work. Email ron@arstechnica.com // Twitter @ronamadeo

© Condé Nast. All rights reserved.

In Our Time (short story collection)

From Wikipedia, the free encyclopedia

[Image: Three Mountains Press Paris edition of in our time; Boni & Liveright New York edition of In Our Time]

In Our Time is Ernest Hemingway's first collection of short stories, published in by Boni & Liveright, New York. Its title is derived from the English Book of Common Prayer, "Give peace in our time, O Lord". The collection's publication history was complex.
It began with six prose vignettes commissioned by Ezra Pound for a edition of The Little Review; Hemingway added twelve more and in compiled the in our time edition (with a lower-case title), which was printed in Paris. To these were added fourteen short stories for the edition, including "Indian Camp" and "Big Two-Hearted River", two of his best-known Nick Adams stories. He composed "On the Quai at Smyrna" for the edition. The stories' themes, of alienation, loss, grief, separation, continue the work Hemingway began with the vignettes, which include descriptions of acts of war, bullfighting and current events. The collection is known for its spare language and oblique depiction of emotion, through a style known as Hemingway's "theory of omission" (iceberg theory). According to his biographer Michael Reynolds, among Hemingway's canon, "none is more confusing ... for its several parts – biographical, literary, editorial, and bibliographical – contain so many contradictions that any analysis will be flawed."[ ] Hemingway's writing style attracted attention, with literary critic Edmund Wilson saying it was "of the first distinction";[ ] the edition of In Our Time is considered one of Hemingway's early masterpieces.[ ]

Background and publication history

[Image: E. O. Hoppé's photograph of Ezra Pound, taken two years before the poet taught Hemingway to write in the Imagist style]

Hemingway was years old when in , shortly after he was posted to the Italian front as a Red Cross ambulance driver, he sustained a severe wound from mortar fire. For the next six months, he recuperated in a Milan hospital, where he fell in love with nurse Agnes von Kurowsky. Shortly after his return to the US, she informed him that she was engaged to an Italian officer.
Soon after, he turned to journalism.[ ] A few months after marrying Hadley Richardson in , he was posted to Paris as international correspondent for the Toronto Star, reporting on the Greco-Turkish War and sporting events in Spain and Germany.[ ] In Paris he befriended Gertrude Stein, Ezra Pound, F. Scott Fitzgerald, James Joyce, Ford Madox Ford, and John Dos Passos,[ ] establishing a particularly strong friendship with Pound.[ ] Pound's influence extended to promoting the young author, placing six of Hemingway's poems in the magazine Poetry.[ ] In August he asked Hemingway to contribute a small volume to the modernist series he was editing, and Bill Bird was publishing for his Three Mountains Press, which Pound envisioned as the "inquest into the state of the modern English language".[ ] Pound's commission turned Hemingway's attention toward fiction, and had profound consequences on his development as a writer.[ ]

[Image: Hemingway and his wife Hadley on winter holiday in Chamby (Montreux)]

On December , , nearly all of Hemingway's early writing, his juvenilia and apprentice fiction, including the duplicates, was lost.[ ][ ] He had been sent on assignment to cover the Conference of Lausanne, leaving Hadley, who was sick with a cold, behind in Paris.
In Lausanne he spent days covering the conference, and the evenings drinking with Lincoln Steffens.[ ] Before setting off to meet him in Switzerland, thinking he would want to show his work to Steffens, Hadley packed all his manuscripts into a valise, which was subsequently stolen at Gare de Lyon train station.[ ] Although angry and upset, Hemingway went with Hadley to Chamby (Montreux) to ski, and apparently did not post a reward for the recovery of the valise.[ ] An early story, "Up in Michigan", survived the loss because Gertrude Stein had told him it was unprintable (in part because of a seduction scene), and he had stuffed it in a drawer.[ ] A month later, in a letter to Pound, he mentioned that "you, naturally, would say, 'good' etc. but don't say it to me. i ain't reached that mood."[ ] In his reply, Pound pointed out that Hemingway had only lost "the time it will ... take you to rewrite the parts you can remember ... if the middle, i.e., form, of the story is right then one ought to be able to reassemble it from memory ... if the thing wobbles and won't reform ... then it never wd.
have been right."[ ] Critics are uncertain whether he took Pound's advice and re-created existing stories or whether everything he wrote after the loss of the suitcase was new.[ ]

The Little Review

In February , Hemingway and Hadley visited Italy; in Rapallo they met Pound, who almost certainly commissioned the prose pieces for the literary magazine The Little Review during their visit.[ ] Still upset at the loss of his work, Hemingway had not written since the previous December,[ ] but he slowly wrote six new paragraphs, submitting them for the March deadline.[ ][ ] Hemingway scholar Milton Cohen says at that point Hemingway knew the pieces for The Little Review would "govern the remainder of the book that Pound had commissioned."[ ] The six prose pieces ranged from to words and were about war and bullfighting.[ ] The battle scenes came from the experiences of Hemingway's friend Chink Dorman-Smith, who was at the Battle of Mons; the matador story originated from another friend, Mike Strater. Hemingway himself witnessed the events which inspired the story about the Greco-Turkish War. The last of the series was taken from news of the execution of six Greek cabinet ministers during the Trial of the Six.[ ] The Little Review's "Exiles" edition, scheduled to be published in the spring, was finally released in October , leading with Hemingway's work. It featured pieces from modernists such as Gertrude Stein, George Antheil, E. E. Cummings and Jean Cocteau.
Hemingway's vignettes were titled "in our time", suggesting a cohesive set.[ ]

in our time

In June , Hemingway took Hadley, with Robert McAlmon and Bird, to Spain, where he found a new passion with his first visits to the bullfights.[ ] During the summer he wrote five new vignettes (chapters – ), all about bullfighting,[ ] finishing the last two on his return to Paris in August.[ ] That summer he also honed new narrative techniques in chapters – .[ ] In August he reported to Pound that he was about to begin the last two pieces (chapters and ),[ ] implemented revisions that Pound suggested, and sent the manuscript to Bill Bird. Then he left Paris with Hadley (who was pregnant with their first child) for Toronto,[ ] where he was living when Bird finished producing the book.[ ]

[Image: Hemingway's passport photograph]

The pieces he submitted to Bird were at first untitled (Pound called the submission Blank);[ ] later the title in our time – from the Book of Common Prayer – was chosen.[ ] Bird printed the volume on a hand-press with handmade paper, telling Hemingway, "I'm going to pull something really fancy with your book".[ ] The book contained eighteen vignettes[ ] and only thirty-one pages; each one was laid out with plenty of white space, highlighting the brevity of the prose.
According to Cohen, the "visual suddenness intensifies its narrative abruptness, heightens the shock of violence, and the chillingly matter-of-fact tone".[ ] The book's presentation was intended as unconventional, with its use of lowercase throughout and lack of quotation marks.[ ] When challenged by American editors over the use of lowercase in the titles, Hemingway admitted that it could be seen as "silly and affected".[ ] Bird designed the distinctive dust jacket, a collage of newspaper articles in four languages,[ ] to highlight that the vignettes carried a sense of journalism or news.[ ] The frontispiece is a woodcut portrait of the author, which during the printing process bled through to the next page, ruining more than half the print run, so that only of the copies printed were deemed suitable to sell. The rest were sent to reviewers and friends.[ ]

In Our Time

A year later Hemingway was back in Paris, where he wrote some of his best short stories and told Scott Fitzgerald that, of the new material, "Indian Camp" and "Big Two-Hearted River" were superior.[ ] Over the next six months, one of his most productive periods according to critic Jackson Benson, he wrote eight short stories.[ ] The stories were combined with the earlier vignettes and sent to Boni & Liveright in New York toward the end of the year.[ ] In March he was in Schruns, Austria, when the acceptance cable and $ advance arrived, with a request to option his next two books. Directly afterward, he received a letter from Max Perkins of Scribner's, who had read Bird's Paris edition and thought it lacked commercial appeal, and queried whether the young writer had stories to offer to bolster the collection.
In his reply, Hemingway explained that he had already entered a contract with Boni & Liveright.[ ] When he received the contract for the book, Boni & Liveright requested that "Up in Michigan" be dropped for fear it might be censored; in response Hemingway wrote "The Battler" to replace the earlier story.[ ] The New York edition contained the fourteen short stories with the vignettes interwoven as "interchapters".[ ] Boni & Liveright published the book on October , ,[ ] with a print run of copies, costing $ each,[ ] which saw four reprints.[ ] The firm designed a "modish" dust jacket, similar to the Paris edition, and elicited endorsements from Ford Madox Ford, Gilbert Seldes, John Dos Passos, and Donald Ogden Stewart. Boni & Liveright claimed American copyright for the works published in France.[ ] Hemingway was disappointed with the publisher's marketing efforts,[ ] and that December he complained to Boni & Liveright about their handling of the book, citing a lack of advertising, claiming they could have had " , in sales" and that he should have requested a $ advance.[ ] He later broke his contract with the firm, signing with Max Perkins at Scribner's the following year.[ ] Scribner's bought the rights from Boni & Liveright,[ ] releasing the second American edition on October , , which saw one reprint.[ ] The Scribner's edition included an introduction by Edmund Wilson and Hemingway's "Introduction by the Author", which was renamed as "On the Quai at Smyrna" in the publication of The Fifth Column and the First Forty-Nine Stories.[ ] When In Our Time was re-issued in , "On the Quai at Smyrna" replaced "Indian Camp" as the first story.[ ]

Contents

Edition

Nick sat against the wall of the church where they had dragged him to be clear of machine-gun fire in the street. Both legs stuck out awkwardly. He had been hit in the spine. The day was very hot. Rinaldi, big-backed, his equipment sprawling, lay face downward against the wall ...
The pink wall of the house opposite had fallen out from the roof ... Two Austrian dead lay in the rubble in the shade of the house. Up the street were other dead. Things were getting forward in the town.

—Ernest Hemingway, "Chapter ", in our time[ ]

The in our time collection consists of eighteen vignettes.[ ] Five center on World War I (chapters , , , , ), and six on bullfighting (chapters and to ); the others center around news stories.[ ] Chapter is the longest; it details a soldier's affair with a Red Cross nurse,[ ] and is based on Hemingway's relationship with Agnes von Kurowsky.[ ] The piece about a robbery and murder in Kansas City originated in a newspaper story Hemingway covered as a cub reporter at the Kansas City Star;[ ] it is followed by the story of the public hanging of the Chicago mobster Sam Cardinelli. The last, "L'Envoi", is about the King of Greece and Sophia of Prussia giving an interview in the palace garden during the revolution.[ ]

Edition

The New York edition begins with the short stories "Indian Camp" and "The Doctor and the Doctor's Wife". The two are linked thematically; they are set in Michigan and introduce Nick Adams. Nick witnesses an emergency caesarean section and a suicide in the first, and the tension between his parents in the second. The next story, "The End of Something", is also set in Michigan, and details Nick's break-up with his girlfriend; "The Three-Day Blow" follows, where Nick and a friend get drunk. "The Battler" is about Nick's chance encounter with a prize-fighter. "A Very Short Story", which was the longest vignette in the previous edition, comes next and is followed by "Soldier's Home", set in Oklahoma, and "The Revolutionist", set in Italy. The next three are set in Europe and detail unhappy marriages: "Mr. and Mrs. Elliot", "Cat in the Rain" and "Out of Season". They are placed before Nick's reappearance in "Cross Country Snow", which takes place in Switzerland.
the penultimate "my old man" concerns horse-racing in italy and paris, and the volume ends with the two-part nick adams story "big two-hearted river", set in michigan. the vignettes were re-ordered and placed between the short stories as interchapters.[ ] structure[edit] whether the collection has a unified structure has been a source of debate among hemingway critics. according to reynolds the collection should be "read as a predictable step in any young author's career" and the pieces considered as "discrete units".[ ] yet he admits that hemingway's remarks, and the complexity of the structure, suggests the stories and vignettes were meant to be an interconnected whole.[ ] in a letter to pound in august , hemingway told him he had finished the full set of eighteen vignettes, saying of them, "when they are read together, they all hook up ... the bulls start, then reappear, then finish off. the war starts clear and noble just like it did ... gets close and blurred and finished with the feller who goes home and gets clap."[ ] he went on to say of "in our time" that "it has form all right".[ ] on october , , he wrote to edmund wilson, "finished the book of stories with a chapter of 'in our time' between each story – that is the way they are meant to go – to give the picture of the whole before examining it in detail".[ ] benson notes that all the fiction hemingway had produced was included in the collection, that the connection between stories and vignettes is tenuous at best, and that pound had an influence in editing the final product.[ ] benson calls the work a "prose poem of terror", where looking for connections is meaningless.[ ] conversely, linda wagner-martin suggests the unrelenting tone of horror and somber mood unify the separate pieces.[ ] one of its early reviewers, d. h. lawrence, referred to it as a "fragmentary novel".[ ] ernest hemingway in a milan hospital, . the -year-old author is recovering from world war i shrapnel wounds. 
Hemingway scholar Wendolyn Tetlow says that from its inception the collection was written with a rhythmic and lyrical unity reminiscent of Pound's "Hugh Selwyn Mauberley" and T. S. Eliot's The Waste Land.[ ] The carefully crafted sequence continues in the edition, beginning with the first five Nick Adams stories, which are about violence and doom, empty relationships and characters lacking self-awareness.[ ] The first two stories, "Indian Camp" and "The Doctor and the Doctor's Wife", can be read as an exercise in counterpoint, where feelings of loss, anger, and evil are ignored and repressed. "The End of Something" and "The Three-Day Blow" also form a pair; in the first Nick breaks up with his girlfriend, in the second he gets drunk and denies the relationship has ended, convincing himself that it will all work out. This state of denial continues in "The Battler", the fifth story; when faced with violence, Nick will not recognize that he is in danger.[ ] "A Very Short Story", about betrayal and wounding, ends the sequence according to Tetlow, who suggests these are stories in which Hemingway writes about the "most bitter feelings of loss and disillusionment". The characters face loss with inner strength, stoicism and a sense of acceptance; they build strength in the stories that come after,[ ] gaining self-awareness as they accept the futility and pain of life.[ ] The collection ends with "Big Two-Hearted River", in which Nick finds tranquility, perhaps even happiness, in solitude. Wagner-Martin notes that "it is this essential tranquility that in retrospect heightens the tension and sorrow of the preceding pieces."[ ] Another Hemingway scholar, Jim Berloon, disagrees with Tetlow,[ ] writing that its only unity consists of similarities in tone and style and the recurrence of the Nick Adams character.
Although the first vignettes share a common thread about the war, each is distinctly framed, weakening any structural unity that might have existed.[ ] He blames the war, saying that it was too hard for Hemingway to write about it cohesively, that it was "too large, terrible, and mentally overwhelming to grasp in its entirety".[ ] Instead, he says, Hemingway wrote fragments, "discrete glimpses into hell ... like the wreckage of battle lit up from a shell burst at night."[ ] The structure apparent in the collection of vignettes is lost in the later edition, because the short stories seem to bear little if any relationship to the interchapters, shattering the carefully constructed order.[ ] The sense of discordance is intensified because the action is about unnamed men and soldiers, only referred to with pronouns, and unspecified woundings.[ ] The characters are transformed through circumstances and settings, where danger exists overtly, on the battlefield, or, in one case, by a chance sexual encounter in a Chicago taxi.[ ] Critic E. R. Hageman notes the in our time vignettes are linked chronologically, spanning ten years from to , and the choices were deliberate. World War I and the aftermath were "the experience of his generation, the experience that dumped his peers and his elders into graves, shell-holes, hospitals, and onto gallows. These were 'in our time', Hemingway is saying, and he remarks the significant and the insignificant."[ ]

Themes

[Image: T. S. Eliot, shown in a photograph, influenced Hemingway's style.]

The stories contain themes Hemingway was to revisit over the course of his career.
he wrote about initiation rites, early love, marriage problems, disappointment in family life and the importance of male comradeship.[ ] the collection conjures a world of violence and war, suffering, executions; it is a world stripped of romance, where even "the hero of the bullfight chapter pukes".[ ] hemingway's early-20th century is a time "out of season", where war, death, and tangled, unfulfilling relationships reign.[ ] alienation in the modern world is particularly evident in "out of season", which bears similarities to eliot's the waste land.[ ] eliot's waste land motif exists throughout much of hemingway's early fiction, but is most notable in this collection, the sun also rises ( ), and a farewell to arms ( ). he borrowed eliot's device of using imagery to evoke feeling.[ ] benson attributes similarities between hemingway and eliot to pound, who edited both.[ ] motifs and themes reappear, the most obvious being the juxtaposition of life and death. there are some recurring images such as water and darkness – places of safety.[ ] benson notes how, after reading the first few vignettes and stories, readers "realize we are in hell." hemingway conjures a world where "the weak are pitilessly exploited by the strong, and ... all functions of life ...
promise only pain."[ ] hemingway's semi-autobiographical character nick adams is "vital to hemingway's career", writes mellow,[ ] and generally his character reflects hemingway's experiences.[ ] nick, who features in eight of the stories,[ ] is an alter ego, a means for hemingway to express his own experiences, from the first story, "indian camp", which features nick as a child.[ ] according to critic howard hannum, the trauma of birth and suicide hemingway paints in "indian camp" rendered a leitmotif that gave hemingway a unified framework for the nick adams stories.[ ] it is followed by "the doctor and the doctor's wife", which mellow says is written with a sense of "hostility and resignation", and sheds a rare light on hemingway's childhood. in the story, -year-old nick hides from his angry and violent father; the mother, a christian scientist, is distanced, withdrawn in her bedroom, reading science and health.[ ] "big two-hearted river", the concluding and climactic piece, details nick's return from war.[ ] in it nick knows he has left his needs behind; debra moddelmog highlights how all of the nick storylines, and most of the others in the collection, are about a "flight from pain".[ ] she believes that gertrude stein's definition of the lost generation applies to in our time as much as, if not more than, to the sun also rises; that "nick seems to believe that the things most worth having and caring about – life, love, ideals, companions, peace, freedom – will be lost sooner or later, and he is not sure how to cope with this assurance, except through irony, bitterness, and, sometimes, wishful thinking."[ ] in the last story he learns to come to terms with the loss of his friends, and acknowledges "all the loss he has experienced in the last few years and, equally important, the loss he has come to expect."[ ]

style

biographer mellow believes that in our time is hemingway's most experimental book, particularly with its unusual narrative form.[ ] the
vignettes have no traditional sense of narrative; they begin in the middle.[ ] shifting points-of-view and narrative perspectives disguise autobiographical details.[ ] pound taught hemingway to write sparingly.[ ] pound wrote to him that "anything put on top of the subject is bad ... the subject is always interesting enough without blankets."[ ] hemingway would write in a moveable feast (published posthumously in ), "if i started to write elaborately, like someone presenting or introducing something, i found that i could cut that scrollwork or ornament out and throw it away and start with the first true simple declarative sentence i had written."[ ] in our time was written during the author's experimentation phase, his first attempts towards a minimalist style.[ ] the prose in "indian camp" and "big two-hearted river" is sharper and more abstract than in other stories; by employing simple sentences and diction – techniques he learned writing for newspapers – hemingway makes the prose timeless, with an almost mythic quality, explains benson.[ ] the tightly compressed sentence structure emulates and reflects pound's imagist style, bringing to prose narrative the stripped-down style pound famously established in with poems such as "in a station of the metro". thomas strychacz compares hemingway's prose to pound's poetry, writing, "hemingway's terse, tight-lipped, tightly wound fragments are equally extraordinary in their dramatic intensity."[ ] the taut style is apparent from the first vignette,[ ] in which a brigade of drunken soldiers marches to champagne. with supreme understatement he alludes to the second battle of champagne, an offensive lasting from september to december , , in which , french troops were killed in the first three weeks:[ ]

everybody was drunk. the whole battery was drunk going along the road in the dark. we were going to champagne. the lieutenant kept riding his horse out into the fields and saying to him, "i'm drunk, i tell you, mon vieux.
oh i am so soused." we went along the road all night in the dark and the adjutant kept riding up alongside my kitchen and saying, "you must put it out. it is dangerous. it will be observed." we were fifty kilometers from the front but the adjutant worried about the fire in my kitchen. it was funny going along that road. that was when i was a kitchen corporal. — ernest hemingway, "chapter ", in our time.[ ]

the vignette opening with the words "we were in a garden at mons" is equally understated; the narrator writes, "the first german i saw climbed up over the garden wall. we waited till he got one leg over then potted him. he ... looked awfully surprised".[ ] the description repeats images, is dispassionate and warps logic, according to strychacz.[ ] these set the model for flash fiction – fiction that is condensed without unnecessary descriptive detail.[ ] in a moveable feast hemingway wrote that "out of season", written in , was the first story where he applied the theory of omission, known as his iceberg theory. he explained that the stories in which he left out the most important parts, such as not writing about the war in "big two-hearted river", are the best of his early fiction.[ ] as carlos baker describes the technique, the hard facts float above water while the supporting structure, including the symbolism, operates out of sight.[ ] hemingway wrote in the preface to death in the afternoon that a writer may choose what to include and what to omit from a story.[ ]

if a writer of prose knows enough of what he is writing about he may omit things that he knows and the reader, if the writer is writing truly enough, will have a feeling of those things as strongly as though the writer had stated them. the dignity of movement of an ice-berg is due to only one-eighth of it being above water. a writer who omits things because he does not know them only makes hollow places in his writing.
— ernest hemingway, death in the afternoon[ ]

reception and legacy

[image: hemingway's bullfighting scenes were compared to francisco goya's art.[ ]]

hemingway's writing style attracted attention after the release of the parisian edition of in our time in . edmund wilson described the writing as "of the first distinction",[ ] writing that the bullfight scenes were like francisco goya paintings, that the author "had almost invented a form of his own", and it had "more artistic dignity than any written by an american about the period of the war."[ ] the edition of in our time is considered one of hemingway's masterpieces.[ ] reviewers and critics noticed, and the collection received positive reviews on its publication.[ ] the new york times described the language as "fibrous and athletic, colloquial and fresh, hard and clean, his very prose seems to have an organic being of its own". a reviewer for time wrote, "ernest hemingway is somebody; a new honest un-'literary' transcriber of life – a writer."[ ] reviewing it for the bookman, f. scott fitzgerald wrote hemingway was an "augury" of the age and that the nick adams stories were "temperamentally new" in american fiction.[ ] his parents, however, described the book as "filth", disturbed by the passage in "a very short story" which tells of a soldier contracting gonorrhea after a sexual encounter with a sales girl in a taxicab.[ ] bird sent them five copies, which were promptly returned, eliciting a letter from hemingway, who complained, "i wonder what was the matter, whether the pictures were too accurate and the attitude toward life not sufficiently distorted to please who ever bought the book or what?"[ ] in our time was ignored and forgotten by literary critics for decades. benson attributes the neglect to various factors.
the sun also rises, published the next year, is considered the more important book, followed fairly rapidly by the popular a farewell to arms two years after, in ; critics' general assumption seemed to be that hemingway's talent lay in writing prose rather than "sophisticated, complex design";[ ] and the in our time stories were combined with subsequent collections in the publication of the fifth column and the first forty-nine stories in , drawing the critics' attention away from the book as an entity, toward the individual stories. in , when scribner's released the paperback edition of in our time, it began to be taught in american universities, and by the end of the decade the first critical study of the collection appeared. benson describes the collection as the author's first "major achievement";[ ] wagner-martin as "his most striking work, both in terms of personal involvement and technical innovation."[ ]

sources

baker, carlos ( ). hemingway: the writer as artist ( th edition). princeton: princeton university press.
barloon, jim ( ). "very short stories: the miniaturization of war in hemingway's 'in our time'". the hemingway review, vol. , no. .
benson, jackson ( ). "patterns of connections and their development in hemingway's in our time". in reynolds, michael (ed.), critical essays on ernest hemingway's in our time. boston: g. k. hall.
benson, jackson ( ). "ernest hemingway as short story writer". in benson, jackson (ed.), the short stories of ernest hemingway: critical essays. durham, nc: duke university press.
bickford, sylvester ( ). "hemingway's italian waste land: the complex unity of 'out of season'". in beegel, susan f. (ed.), hemingway's neglected short fiction. tuscaloosa: alabama university press.
cohen, milton ( ). hemingway's laboratory: the paris 'in our time'. tuscaloosa: alabama university press.
cohen, milton ( ). "who commissioned the little review's 'in our time'?". the hemingway review, vol. , no. .
desnoyers, megan floyd. "ernest hemingway: a storyteller's legacy". jfk library. retrieved september .
hagemann, e. r. ( ). "'only let the story end as soon as possible': time-and-history in ernest hemingway's in our time". in reynolds, michael (ed.), critical essays on ernest hemingway's in our time. boston: g. k. hall.
hannum, howard ( ). "'scared sick looking at it': a reading of nick adams in the published stories". twentieth century literature, vol. , no. .
hemingway, ernest ( / ). in our time ( ed.). new york: scribner.
hemingway, ernest ( ). ernest hemingway selected letters – . baker, carlos (ed.). new york: charles scribner's sons.
hlinak, matt ( ). "hemingway's very short experiment: from 'a very short story' to a farewell to arms". the journal of the midwest modern language association, vol. , no. .
leff, leonard ( ). hemingway and his conspirators: hollywood, scribner's and the making of american celebrity culture. lanham, md: rowman & littlefield.
mellow, james ( ). hemingway: a life without consequences. new york: houghton mifflin.
meyers, jeffrey ( ). hemingway: a biography. new york: macmillan.
moddelmog, debra ( ). "the unifying consciousness of a divided conscience: nick adams as author of 'in our time'". american literature, vol. , no. .
oliver, charles ( ). ernest hemingway a to z: the essential reference to the life and work. new york: checkmark publishing.
putnam, thomas ( ). "hemingway on war and its aftermath". prologue magazine, vol. , no. . retrieved november .
reynolds, michael ( ). "ernest hemingway, – : a brief biography". in wagner-martin, linda (ed.), a historical guide to ernest hemingway. new york: oxford university press.
reynolds, michael ( ). "hemingway's 'in our time': the biography of a book". in kennedy, gerald j. (ed.), modern american short story sequences. cambridge: cambridge university press.
reynolds, michael ( ). hemingway: the paris years. new york: norton.
smith, paul ( ). " : hemingway's luggage and the miraculous year". in donaldson, scott (ed.), the cambridge companion to ernest hemingway. new york: cambridge university press.
smith, paul ( ). "hemingway's early manuscripts: the theory and practice of omission". journal of modern literature, vol. , no. .
strychacz, thomas ( ). "in our time, out of season". in donaldson, scott (ed.), the cambridge companion to ernest hemingway. new york: cambridge university press.
tetlow, wendolyn e. ( ). hemingway's "in our time": lyrical dimensions. cranbury, nj: associated university presses.
wagner-martin, linda ( ). "introduction". in wagner-martin, linda (ed.), ernest hemingway's the sun also rises: a casebook. new york: oxford university press.
wagner-martin, linda ( ). "juxtaposition in hemingway's in our time". in reynolds, michael (ed.), critical essays on ernest hemingway's in our time. boston: g. k. hall.
waldhorn, arthur ( edition). a reader's guide to ernest hemingway. syracuse, ny: syracuse university press.
external links

in our time ( edition) at project gutenberg
ernest hemingway collection, jfk library
#dlfteach toolkit volume : call for proposals

the dlf digital library pedagogy group invites all interested digital pedagogy practitioners to contribute to a literacy- and competency-centered #dlfteach toolkit, an online, open resource focused on lesson plans and concrete instructional strategies. we welcome contributors from academic and other educational institutions, including public and special libraries, in any setting, role, and career stage.

the dlf digital library pedagogy group (aka #dlfteach) is a grassroots community of practice within the larger digital library federation that is open to anyone interested in learning about or collaborating on digital library pedagogy. toolkit volume will emphasize the teaching of literacies and competencies foundational for digital scholarship and digital humanities work, and/or literacies and competencies acquired through the act of engaging in such work. by "literacies" we mean visual literacy, digital literacy, data literacy, information literacy, and the like. by "competencies" we mean foundational digital skills that provide both a practical and critical understanding of digital technologies (see bryn mawr's digital competencies for more information).
example lessons include:

a semester-long digital exhibit project that has students creating metadata for visual materials related to the topic of their course. in addition to a librarian-led session on the mechanics of the platform, students learn about critical approaches to metadata and must include a reflective analysis on the potential for bias in their choice of descriptive keywords and subject terms. the project engages with information literacy frames like "authority is constructed and contextual" as well as visual literacy competencies like "acquires and organizes images and source information."

a project training session in which team members (faculty, staff, and students) design a data model that can address the project's research questions. the exercise works for multiple audiences and establishes a foundation in data literacy for all participants. concepts like tidy data, data types, and query languages are introduced.

a blog post assignment that asks students to verbally interpret a privacy statement of an online company or institution to another student. students must include an audio file of their conversation with a peer (and fill out a consent form). this assignment has students engaging with multiple digital competencies: managing a digital identity, privacy, and security; collaborative communication; and digital writing and publishing.

"lesson plans" are activities, basic exercises, assignments, project instruction, and the like, used in situations ranging from one-off library sessions to multi-day workshops to semester-long courses. lesson plans can be designed for synchronous or asynchronous/remote instruction. the works we seek to include will be creative and critical in nature. ideally, they will push the boundaries of traditional approaches and frameworks.
for instance, we are interested in lessons that highlight the intersections of literacies from different frameworks, not just alignment with the acrl framework for information literacy.

areas of critical importance include:

transferable skills. how are the literacy skills gained transferable beyond that particular lesson? how might your lesson promote digital citizenship in an age of misinformation?

attention to "implementation fidelity." how have you improved your lesson in response to assessment?

accessibility and the digital divide. how is accessibility baked into your lesson? how do we teach technology-based lessons when not everyone has equal access?

our aim is to provide practitioners with lessons that can be adapted for a variety of curricular contexts and instructional roles. we highly encourage submissions that demonstrate collaborations between library staff in different roles or with instructors outside the library.

proposals are due by september st and should be limited to words. when writing your proposals, please consider the toolkit template. proposals should include:

a description of your lesson
learning outcomes
a statement on the literacies involved in your lesson
note any collaborators (collaboration with other instructional partners is encouraged!)

toolkit proposal submission form
stablecoins through history — michigan bank commissioners report

st january, by david gerard

a "stablecoin" is a token that a company issues, claiming that the token is backed by currency or assets held in a reserve. the token is usually redeemable in theory — and sometimes in practice. stablecoins are a venerable and well-respected part of the history of us banking! previously, the issuers were called "wildcat banks," and the tokens were pieces of paper.

genuine as a three-dollar bill — from the american numismatic society blog.

the wildcat banking era, more politely called the "free banking era," ran from to . banks at this time were free of federal regulation — they could launch just under state regulation. under the gold standard in operation at the time, these state banks could issue notes, backed by specie — gold or silver — held in reserve. the quality of these reserves could be a matter of some dispute. the wildcat banks didn't work out so well. the national bank act was passed in , establishing the united states national banking system and the office of the comptroller of the currency — and taking away the power of state banks to issue paper notes. advocates of austrian economics often want to bring back "free banking" in this manner, because they despise the federal reserve. they come up with detailed theory as to how letting free banking happen again will surely work out well this time. [mises institute search] on march , the "general banking law" was passed in michigan. bray hammond's classic "banks and politics in america" from tells how this all worked out (p.
): of her free-banking measure, michigan’s governor said: “the principles under which this law is based are certainly correct, destroying as they do the odious features of a bank monopoly and giving equal rights to all classes of the community.” within a year of the law’s passage, more than forty banks had been set up under its terms. within two years, more than forty were in receivership. thus america grew great. hammond quotes another source on the notes themselves: “get a real furioso plate, one that will take with all creation — flaming with cupids, locomotives, rural scenery, and hercules kicking the world over.” the ico white papers of their day. after the michigan law allowing free banking had been in effect for two years, michigan’s state banking commissioners reported to the legislature on how it was all going. the whole report is available as a scan in google books — documents accompanying the journal of the house of representatives of the state of michigan, pp. – . [google books] there’s also a bad ocr — ctrl-f to “bank commissioners’ report.” [internet archive] this is not your normal bureaucratic report from civil servants to the legislature. it’s a work of thundering victorian passion, excoriating the criminals and frauds the commissioners found themselves responsible for dealing with. we should have more official reports that you could do a dramatic reading of: the peculiar embarrassments which they have had to encounter, and the weighty responsibilities consequent thereupon, clothes this duty with a new character. it becomes an act of justice to themselves, and to those who have honored them with so important a trust. at the period the commissioners entered upon their labors, every portion of the state was flooded with a paper currency, issued by the institutions created under the general banking law. new organizations were daily occurring, and the public mind was everywhere agitated with apprehension and distrust. 
the state was in the midst of the evils consequent upon an excessive and doubtful circulation. rumors of the most frightful and reckless frauds were daily increasing. in this emergency, prompt and vigorous action was imperiously demanded, as well by the public voice as the urgent necessity of the case. upon a comparison of opinions, the commissioners united in the conclusion that their duty was of a two fold character. the first, and most obvious one, was to take immediate and decided measures in ascertaining and investigating the affairs of every institution suspected of fraud, and closing the door against the evil without delay. the second was a duty of far more difficult and delicate a nature, and involving the assumption of a deep responsibility.

the report outlines the problems in each particular district, and lists the local troubled banks. the commissioners tried to distinguish fraudulent banks from merely inept ones, and help the second sort get back on their feet for the public good. most of it's tedious detail. but there are considerable parallels to our wonderful world of crypto:

the loan of specie from established corporations, became an ordinary traffic, and the same money, set in motion a number of institutions. specie certificates, verified by oath, were every where exhibited, although these very certificates had been cancelled at the moment of their creation, by a draft for a similar amount; and yet such subterfuges were pertinaciously insisted upon, as fair business transactions, sanctioned by custom and precedent. stock notes were given, for subscriptions to stock, and counted as specie, and thus not a cent of real capital actually existed, beyond the small sums paid in by the upright and unsuspecting farmer and mechanic, whose little savings and honest name were necessary to give confidence and credit.
the notes of institutions thus constituted, were spread abroad upon the community, in every manner, and through every possible channel; property, produce, stock, farming utensils, every thing which the people of the country were tempted, by advanced prices, to dispose of, were purchased and paid for in paper, which was known by the utterers to be absolutely valueless. large amounts of notes were hypothecated for small advances, or loans of specie, to save appearances. quantities of paper were drawn out by exchange checks, that is to say, checked out of the banks, by individuals who had not a cent in bank, with no security, beyond the verbal understanding that notes of other banks should be returned, at some future time. the banking system at the time featured barrels of gold that were carried to other banks, just ahead of the inspectors: the singular spectacle was presented, of the officers of the state, seeking for banks in situations the most inaccessible and remote from trade, and finding at every step, an increase of labor, by the discovery of new and unknown organizations. before they could be arrested, the mischief was done; large issues were in circulation, and no adequate remedy for the evil. gold and silver flew about the country with the celerity of magic; its sound was heard in the depths of the forest, yet like the wind, one knew not whence it came or whither it was going. such were a few of the difficulties against which the commissioners had to contend. the vigilance of a regiment of them would have been scarcely adequate, against the host of bank emissaries, which scoured the country to anticipate their coming, and the indefatigable spies which hung upon their path, to which may be added perjuries, familiar as dicers’ oaths, to baffle investigation. bray hammond’s book elaborates on these stories: their cash reserves were sometimes kegs of nails and broken glass with a layer of coin on top. 
specie exhibited to the examiners at one bank was whisked through the trees to be exhibited at another the next day. banknotes increased liquidity— they helped value flow faster through the economy. who benefited from this increase in liquidity? mostly the fraudulent banknote issuers: it has been said, with some appearance of plausibility, that these banks have at least had the good effect of liquidating a large amount of debt. this may be true; but whose debts have they liquidated? those of the crafty and the speculative — and by whom? let every poor man, from his little clearing and log hut in the woods, make the emphatic response by holding up to view, as the rewards of his labor, a handful of promises to pay, which, for his purposes, are as valueless as a handful of the dry leaves at his feet. were this the extent of the evil, the indomitable energy and spirit of our population, who have so manfully endured it, would redeem the injury. but when it is considered how much injury is inflicted at home, by the sacrifice of many valuable farms, and the stain upon the credit of the state abroad, the remedy is neither so easy nor so obvious. when we reflect, too, that the laws are ineffective in punishing the successful swindler, and that the moral tone of society seems so far sunk as to surround and protect the dishonest and fraudulent with countenance and support, it imperatively demands that some legislative action should be had, to enable the prompt and rigorous enforcement of the laws, and the making severe examples of the guilty, no matter how protected and countenanced. passing around the corporate shell after you’ve scoured it has long been the fashion: so that the singular exhibition has been made of banks passing from hand to hand like a species of merchandize, each successive purchaser less conscientious than the preceding, and resorting to the most desperate measures for reimbursement on his speculation. 
the stablecoins of the day depreciated horribly, even while the institutions were still up and running, and it was the innocent members of the public stuck with the tokens who paid: under the present law, the order in which the means and securities are to be realized and exhausted, will protract the payment of their liabilities to an indefinite period, and make them utterly useless to the great body of the bill holders, whose daily necessities compel them to sell at an enormous loss. the banks themselves, through their agents, are thus enabled to buy up their circulation at an immense depreciation, and their debtors to pay their liabilities in the notes of banks, purchased at a great discount. the daily advertisements for the purchase of safety fund notes in exchange for land and goods, and the placards every where to be seen in the windows of merchants and brokers, is a sufficient argument for the necessity of the measure proposed. the commissioners pause here to examine the rationale for having free banking at all — principles of freedom, versus how it actually worked out in practice. they quote william m. gouge’s a short history of paper-money and banking in the united states, , p. : [google books] a reform will not be accomplished in banking, as some suppose, by granting charters to all who apply for them. it would be as rational to abolish political aristocracy, by multiplying the number of nobles. the one experiment has been tried in germany, the other in rhode island. competition in that which is essentially good, in farming, in manufactures, and in regular commerce, is productive of benefit ; but competition in that which is essentially evil, may not be desirable. no one has yet proposed to put an end to gambling by giving to every man the privilege of opening a gambling house. 
this story reminds me of recent stablecoin “attestations,” and documents waved at apparently credulous journalists: the farmers’ and mechanics’ bank of pontiac, presented a more favorable exhibit in point of solvency, but the undersigned having satisfactorily informed himself that a large proportion of the specie exhibited to the commissioners, at a previous examination, as the bona fide property of the bank, under the oath of the cashier, had been borrowed for the purpose of exhibition and deception; that the sum of ten thousand dollars which had been issued for “exchange purposes,” had not been entered on the books of the bank, reckoned among its circulation, or explained to the commissioners. what do you do with bankers so bad you can’t tell if they’re crooks or just bozos? you put their backsides in jail, with personal liability for making good: upon officially visiting the berrien county bank, the undersigned found its operations suspended by his predecessor, col. fitzgerald. on investigation of its affairs, with that gentleman, much was exhibited betraying either culpable mismanagement, or gross ignorance of banking. col. fitzgerald, however, with the usual vigilance and promptitude characteristic of all his official acts, had, previous to my arrival, caused the arrest of some of the officers of the institution, under the provisions of the act of december th, ; and required of the proprietors to furnish real estate securities to a considerable amount, conditioned to be released on the entire re-organization of the bank, and its being placed on a sound and permanent basis, or suffer a forfeiture of the lands pledged, which, together with their assets in bank, individual responsibility and the real estate security, given in conformity to law, must in the worst event, be more than sufficient to satisfy and pay all their liabilities. in crypto, not even the frauds are new. fortunately, we have suitable remedies to hand — such as the stable act. 
put the stablecoin issuers under firm regulation.

one comment on “stablecoins through history — michigan bank commissioners report”

steve brown says: perhaps you’ll be interested in my article: how wildcat notes were distributed: the carpet-baggers. the term ‘carpet-bagger’ does not refer to northerners migrating south after the civil war but to the fact that circulators of wildcat notes traveled with a carpet bag full of their dubious offerings. the idea was to transport notes from inaccessible town a to inaccessible town b, where the distributor purchased whatever goods and livestock he could, assuming that anyone in town b accepted the notes. when the notes get to town b’s bank, the bank happily takes the notes for debt – but not for coin – and pays the notes out to its unsuspecting customers in lieu of coin whenever and wherever possible. this way of business carries on until all of the carpet-bagger’s currency is in circulation in town b. [the same situation likely played out in the reverse direction, e.g. a town b carpet-bagger distributing bank b’s dubious notes in town a.]
while the business of the town is carried on in this way quite legitimately for some period of time, eventually it becomes worthwhile for town b’s bank to replenish its specie simply to ensure its own liquidity when town b customers begin to demand coin, for example with a tightening of credit. in this case the town b bank opts for the very simple expedient of refusing to honor town a’s bank notes and forces holders to sell the notes at a significant discount to a local broker for specie. the local broker and the bank then split the difference on these transactions, and the bank replenishes its coin. while the scenario described above seems complex, the underlying principle is quite simple and can be reduced in money terms to a usurious form of interest charged on the bank’s customers, where usury is an exorbitant or unlawful rate of interest. usury was illegal in all states until the marquette nat. bank of minneapolis v. first of omaha service corp supreme court decision reinstated the abuse in ; the modern analogy concerns credit swaps and is not illegal. the old credit swap system was very common during coin shortages, when banks and their customers were forced to exercise their wits amidst an improvised local monetary crisis, and useful when paper notes could be exchanged for real goods and services, until credit tightening created demand for coin. a simpler but riskier proposition worked in larger metropolitan areas in the east: the idea was simply to transport notes from a far-off bank to a broker in another state who was willing to accept the notes at a significant discount. the risk here involved acceptance of the notes, as this was never guaranteed. as technology improved, counterfeiting evolved into a significant industry by , but it was not counterfeiting alone that constricted the economy of the united states.
the major impediment to financial progress was greed and self-interest within a developing capitalist society which was largely unregulated, and incapable of policing itself in order to ensure the ascendance of the common financial good, a problem addressed beginning with the treasury act of . in conclusion, we see that the term ‘wildcat’ began with highly dubious banking practices engendered by coin shortages, poor communications and bad roads characteristic of the early nineteenth century. the end of the civil war, telegraphic communication and railroad development all occurred at approximately the same time, along with the beginning of the industrial revolution. even though wildcat banks would disappear, dubious banking practices did not; wildcat mines, oil, and many other types of questionable investments would take the wildcat bank’s place in history, with significant modern examples easily identifiable today. @newsypaperz
limits of resolution: the rayleigh criterion

learning objectives: by the end of this section, you will be able to: discuss the rayleigh criterion.

light diffracts as it moves through space, bending around obstacles, interfering constructively and destructively. while this can be used as a spectroscopic tool—a diffraction grating disperses light according to wavelength, for example, and is used to produce spectra—diffraction also limits the detail we can obtain in images. figure  a shows the effect of passing light through a small circular aperture. instead of a bright spot with sharp edges, a spot with a fuzzy edge surrounded by circles of light is obtained. this pattern is caused by diffraction similar to that produced by a single slit. light from different parts of the circular aperture interferes constructively and destructively. the effect is most noticeable when the aperture is small, but the effect is there for large apertures, too. figure  . (a) monochromatic light passed through a small circular aperture produces this diffraction pattern. (b) two point light sources that are close to one another produce overlapping images because of diffraction. (c) if they are closer together, they cannot be resolved or distinguished. how does diffraction affect the detail that can be observed when light passes through an aperture? figure  b shows the diffraction pattern produced by two point light sources that are close to one another. the pattern is similar to that for a single point source, and it is just barely possible to tell that there are two light sources rather than one. 
if they were closer together, as in figure  c, we could not distinguish them, thus limiting the detail or resolution we can obtain. this limit is an inescapable consequence of the wave nature of light. there are many situations in which diffraction limits the resolution. the acuity of our vision is limited because light passes through the pupil, the circular aperture of our eye. be aware that the diffraction-like spreading of light is due to the limited diameter of a light beam, not the interaction with an aperture. thus light passing through a lens with a diameter d shows this effect and spreads, blurring the image, just as light passing through an aperture of diameter d does. so diffraction limits the resolution of any system having a lens or mirror. telescopes are also limited by diffraction, because of the finite diameter d of their primary mirror. take-home experiment: resolution of the eye draw two lines on a white sheet of paper (several mm apart). how far away can you be and still distinguish the two lines? what does this tell you about the size of the eye’s pupil? can you be quantitative? (the size of an adult’s pupil is discussed in physics of the eye.) just what is the limit? to answer that question, consider the diffraction pattern for a circular aperture, which has a central maximum that is wider and brighter than the maxima surrounding it (similar to a slit) (see figure  a). it can be shown that, for a circular aperture of diameter d, the first minimum in the diffraction pattern occurs at [latex]\theta=1.22\frac{\lambda}{d}\\[/latex] (providing the aperture is large compared with the wavelength of light, which is the case for most optical instruments). the accepted criterion for determining the diffraction limit to resolution based on this angle was developed by lord rayleigh in the 19th century. 
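the take-home experiment above can be made quantitative with a short sketch. the line separation, viewing distance, and wavelength below are illustrative assumptions, not values from the text: read off the angle θ ≈ x/r at which the lines just merge, then invert the rayleigh criterion θ = 1.22λ/d for the effective pupil diameter d.

```python
# Take-home experiment sketch: infer a rough pupil diameter from the
# distance at which two drawn lines become indistinguishable.
# All numeric inputs are assumptions for illustration.

lam = 550e-9          # assumed average wavelength of visible light, m
x = 2e-3              # assumed separation of the two lines: 2 mm
r = 10.0              # assumed viewing distance at which they merge, m

theta = x / r                     # angular separation (small-angle), rad
d_pupil = 1.22 * lam / theta      # implied aperture (pupil) diameter, m

print(f"theta   = {theta:.2e} rad")
print(f"d_pupil = {d_pupil * 1e3:.2f} mm")
```

with these assumed numbers the implied pupil diameter comes out a few millimetres, which is the right order of magnitude for an adult eye.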
the rayleigh criterion for the diffraction limit to resolution states that two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other. see figure  b. the first minimum is at an angle of [latex]\theta=1.22\frac{\lambda}{d}\\[/latex], so that two point objects are just resolvable if they are separated by the angle [latex]\displaystyle\theta=1.22\frac{\lambda}{d}\\[/latex], where λ is the wavelength of light (or other electromagnetic radiation) and d is the diameter of the aperture, lens, mirror, etc., with which the two objects are observed. in this expression, θ has units of radians. figure  . (a) graph of intensity of the diffraction pattern for a circular aperture. note that, similar to a single slit, the central maximum is wider and brighter than those to the sides. (b) two point objects produce overlapping diffraction patterns. shown here is the rayleigh criterion for being just resolvable. the central maximum of one pattern lies on the first minimum of the other. making connections: limits to knowledge all attempts to observe the size and shape of objects are limited by the wavelength of the probe. even the small wavelength of light prohibits exact precision. when extremely small wavelength probes as with an electron microscope are used, the system is disturbed, still limiting our knowledge, much as making an electrical measurement alters a circuit. heisenberg’s uncertainty principle asserts that this limit is fundamental and inescapable, as we shall see in quantum mechanics. example . calculating diffraction limits of the hubble space telescope the primary mirror of the orbiting hubble space telescope has a diameter of 2.40 m. being in orbit, this telescope avoids the degrading effects of atmospheric distortion on its resolution. what is the angle between two just-resolvable point light sources (perhaps two stars)? assume an average light wavelength of 550 nm. 
if these two stars are at the 2 million light year distance of the andromeda galaxy, how close together can they be and still be resolved? (a light year, or ly, is the distance light travels in 1 year.) strategy the rayleigh criterion stated in the equation [latex]\theta=1.22\frac{\lambda}{d}\\[/latex] gives the smallest possible angle θ between point sources, or the best obtainable resolution. once this angle is found, the distance between stars can be calculated, since we are given how far away they are. solution for part 1 the rayleigh criterion for the minimum resolvable angle is [latex]\theta=1.22\frac{\lambda}{d}\\[/latex]. entering known values gives [latex]\begin{array}{lll}\theta&=&1.22\frac{550\times10^{-9}\text{ m}}{2.40\text{ m}}\\\text{ }&=&2.80\times10^{-7}\text{ rad}\end{array}\\[/latex] solution for part 2 the distance s between two objects a distance r away and separated by an angle θ is s = rθ. substituting known values gives [latex]\begin{array}{lll}s&=&\left(2.0\times10^{6}\text{ ly}\right)\left(2.80\times10^{-7}\text{ rad}\right)\\\text{ }&=&0.56\text{ ly}\end{array}\\[/latex] discussion the angle found in part 1 is extraordinarily small (less than 1/50,000 of a degree), because the primary mirror is so large compared with the wavelength of light. as noticed, diffraction effects are most noticeable when light interacts with objects having sizes on the order of the wavelength of light. however, the effect is still there, and there is a diffraction limit to what is observable. the actual resolution of the hubble telescope is not quite as good as that found here. as with all instruments, there are other effects, such as non-uniformities in mirrors or aberrations in lenses that further limit resolution. however, figure   gives an indication of the extent of the detail observable with the hubble because of its size and quality and especially because it is above the earth’s atmosphere. figure  . 
these two photographs of the m galaxy give an idea of the observable detail using the hubble space telescope compared with that using a ground-based telescope. (a) on the left is a ground-based image. (credit: ricnun, wikimedia commons) (b) the photo on the right was captured by hubble. (credit: nasa, esa, and the hubble heritage team (stsci/aura)) the answer in part 2 indicates that two stars separated by about half a light year can be resolved. the average distance between stars in a galaxy is on the order of light years in the outer parts and about light year near the galactic center. therefore, the hubble can resolve most of the individual stars in the andromeda galaxy, even though it lies at such a huge distance that its light takes 2 million years to reach us. figure   shows another mirror used to observe radio waves from outer space. figure  . a 305-m-diameter natural bowl at arecibo in puerto rico is lined with reflective material, making it into a radio telescope. it is the largest curved focusing dish in the world. although d for arecibo is much larger than for the hubble telescope, it detects much longer wavelength radiation and its diffraction limit is significantly poorer than hubble’s. arecibo is still very useful, because important information is carried by radio waves that is not carried by visible light. (credit: tatyana temirbulatova, flickr) diffraction is not only a problem for optical instruments but also for the electromagnetic radiation itself. any beam of light having a finite diameter d and a wavelength λ exhibits diffraction spreading. the beam spreads out with an angle θ given by the equation [latex]\theta=1.22\frac{\lambda}{d}\\[/latex]. for example, a laser beam made of rays as parallel as possible (angles between rays as close to θ = 0º as possible) will instead spread out at an angle [latex]\theta=1.22\frac{\lambda}{d}\\[/latex], where d is the diameter of the beam and λ is its wavelength. 
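the diffraction spreading just described is easy to estimate. a minimal sketch, assuming an expanded he-ne laser beam aimed at the moon — the 633 nm wavelength, 1 m expanded beam diameter, and earth–moon distance below are my assumed inputs, not values from the text:

```python
# Diffraction spreading of a "parallel" beam: minimum divergence angle
# theta = 1.22 * lam / d, so after travelling a distance L the beam has
# grown by roughly L * theta on top of its initial diameter d.
# All numeric inputs are illustrative assumptions.

lam = 633e-9          # assumed He-Ne laser wavelength, m
d = 1.0               # assumed expanded beam diameter, m
L = 3.84e8            # assumed Earth-Moon distance, m

theta = 1.22 * lam / d        # minimum angular spread, rad
spot = d + L * theta          # rough illuminated diameter at range L, m

print(f"theta = {theta:.2e} rad")
print(f"spot  ~ {spot:.0f} m")
```

even with a metre-wide beam, the illuminated spot on the moon is hundreds of metres across — which is why lunar-ranging beams are expanded through a telescope to make d larger and θ smaller.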
this spreading is impossible to observe for a flashlight, because its beam is not very parallel to start with. however, for long-distance transmission of laser beams or microwave signals, diffraction spreading can be significant (see figure  ). to avoid this, we can increase d. this is done for laser light sent to the moon to measure its distance from the earth. the laser beam is expanded through a telescope to make d much larger and θ smaller. figure  . the beam produced by this microwave transmission antenna will spread out at a minimum angle [latex]\theta=1.22\frac{\lambda}{d}\\[/latex] due to diffraction. it is impossible to produce a near-parallel beam, because the beam has a limited diameter. in most biology laboratories, resolution is presented when the use of the microscope is introduced. the ability of a lens to produce sharp images of two closely spaced point objects is called resolution. the smaller the distance x by which two objects can be separated and still be seen as distinct, the greater the resolution. the resolving power of a lens is defined as that distance x. an expression for resolving power is obtained from the rayleigh criterion. in figure  a we have two point objects separated by a distance x. according to the rayleigh criterion, resolution is possible when the minimum angular separation is [latex]\displaystyle\theta=1.22\frac{\lambda}{D}=\frac{x}{d}\\[/latex] where D is the diameter of the lens and d is the distance between the specimen and the objective lens, and we have used the small angle approximation (i.e., we have assumed that x is much smaller than d), so that tan θ ≈ sin θ ≈ θ. therefore, the resolving power is [latex]\displaystyle{x}=1.22\frac{\lambda{d}}{D}\\[/latex] figure  . (a) two points separated by a distance x and positioned a distance d away from the objective. (credit: infopro, wikimedia commons) (b) terms and symbols used in discussion of resolving power for a lens and an object at point p. 
(credit: infopro, wikimedia commons) another way to look at this is by re-examining the concept of numerical aperture (na) discussed in microscopes. there, na is a measure of the maximum acceptance angle at which the fiber will take light and still contain it within the fiber. figure  b shows a lens and an object at point p. the na here is a measure of the ability of the lens to gather light and resolve fine detail. the angle subtended by the lens at its focus is defined to be θ = 2α. from the figure and again using the small angle approximation, we can write [latex]\displaystyle\sin\alpha=\frac{\frac{D}{2}}{d}=\frac{D}{2d}\\[/latex] the na for a lens is na = n sin α, where n is the index of refraction of the medium between the objective lens and the object at point p. from this definition for na, we can see that [latex]\displaystyle{x}=1.22\frac{\lambda{d}}{D}=1.22\frac{\lambda}{2\sin\alpha}=0.61\frac{\lambda{n}}{\text{na}}\\[/latex] in a microscope, na is important because it relates to the resolving power of a lens. a lens with a large na will be able to resolve finer details. lenses with larger na will also be able to collect more light and so give a brighter image. another way to describe this situation is that the larger the na, the larger the cone of light that can be brought into the lens, and so more of the diffraction modes will be collected. thus the microscope has more information to form a clear image, and so its resolving power will be higher. one of the consequences of diffraction is that the focal point of a beam has a finite width and intensity distribution. consider focusing as described by geometric optics alone, shown in figure  a. the focal point is infinitely small with a huge intensity and the capacity to incinerate most samples irrespective of the na of the objective lens. for wave optics, due to diffraction, the focal point spreads to become a focal spot (see figure  b) with the size of the spot decreasing with increasing na. 
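the two resolving-power expressions above can be cross-checked numerically. all input values below are illustrative assumptions; the point is that x = 1.22λd/D and x = 0.61λn/na give the same answer once na = n sin α with sin α = (D/2)/d is substituted.

```python
# Resolving power two ways: from the geometry, x = 1.22 * lam * d / D,
# and from the numerical aperture, x = 0.61 * lam * n / NA, where
# NA = n * sin(alpha) and sin(alpha) = (D/2) / d.
# All numeric inputs are illustrative assumptions.

lam = 550e-9          # wavelength, m
n = 1.0               # index of the medium between lens and object (air)
D = 4e-3              # assumed lens diameter, m
d = 20e-3             # assumed lens-to-object distance, m

x_geom = 1.22 * lam * d / D       # from the Rayleigh geometry

sin_alpha = (D / 2) / d           # half-angle subtended by the lens
NA = n * sin_alpha                # numerical aperture
x_na = 0.61 * lam * n / NA        # from the NA form

print(f"x (geometry) = {x_geom:.2e} m")
print(f"x (from NA)  = {x_na:.2e} m")
```

with these inputs the two routes agree exactly, since 0.61λ/sin α is just 1.22λd/D rewritten; for micron-scale resolution at visible wavelengths you can see why microscope objectives push na as high as possible.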
consequently, the intensity in the focal spot increases with increasing na. the higher the na, the greater the chances of photodegrading the specimen. however, the spot never becomes a true point. figure  . (a) in geometric optics, the focus is a point, but it is not physically possible to produce such a point because it implies infinite intensity. (b) in wave optics, the focus is an extended region. section summary diffraction limits resolution. for a circular aperture, lens, or mirror, the rayleigh criterion states that two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other. this occurs for two point objects separated by the angle [latex]\theta=1.22\frac{\lambda}{d}\\[/latex], where λ is the wavelength of light (or other electromagnetic radiation) and d is the diameter of the aperture, lens, mirror, etc. this equation also gives the angular spreading of a source of light having a diameter d. conceptual questions a beam of light always spreads out. why can a beam not be created with parallel rays to prevent spreading? why can lenses, mirrors, or apertures not be used to correct the spreading? problems & exercises the 305-m-diameter arecibo radio telescope pictured in figure   detects radio waves with a . cm average wavelength. (a) what is the angle between two just-resolvable point sources for this telescope? (b) how close together could these point sources be at the 2 million light year distance of the andromeda galaxy? assuming the angular resolution found for the hubble telescope in example , what is the smallest detail that could be observed on the moon? diffraction spreading for a flashlight is insignificant compared with other limitations in its optics, such as spherical aberrations in its mirror. to show this, calculate the minimum angular spreading of a flashlight beam that is originally . cm in diameter with an average wavelength of nm. 
(a) what is the minimum angular spread of a -nm wavelength he-ne laser beam that is originally . mm in diameter? (b) if this laser is aimed at a mountain cliff . km away, how big will the illuminated spot be? (c) how big a spot would be illuminated on the moon, neglecting atmospheric effects? (this might be done to hit a corner reflector to measure the round-trip time and, hence, distance.) a telescope can be used to enlarge the diameter of a laser beam and limit diffraction spreading. the laser beam is sent backwards through the telescope, in the direction opposite to normal viewing, and can then be projected onto a satellite or the moon. (a) if this is done with the mount wilson telescope, producing a . -m-diameter beam of -nm light, what is the minimum angular spread of the beam? (b) neglecting atmospheric effects, what is the size of the spot this beam would make on the moon, assuming a lunar distance of . × m? the limit to the eye’s acuity is actually related to diffraction by the pupil. (a) what is the angle between two just-resolvable points of light for a . -mm-diameter pupil, assuming an average wavelength of nm? (b) take your result to be the practical limit for the eye. what is the greatest possible distance a car can be from you if you can resolve its two headlights, given they are . m apart? (c) what is the distance between two just-resolvable points held at an arm’s length ( . m) from your eye? (d) how does your answer to (c) compare to details you normally observe in everyday circumstances? what is the minimum diameter mirror on a telescope that would allow you to see details as small as . km on the moon some , km away? assume an average wavelength of nm for the light received. you are told not to shoot until you see the whites of their eyes. if the eyes are separated by . cm and the diameter of your pupil is . mm, at what distance can you resolve the two eyes using light of wavelength nm? (a) the planet pluto and its moon charon are separated by , km. 
Neglecting atmospheric effects, should the 5.08-m-diameter Mount Palomar telescope be able to resolve these bodies at their distance from Earth? Assume an average visible wavelength. (b) In actuality, it is just barely possible to discern that Pluto and Charon are separate bodies using an Earth-based telescope. What are the reasons for this?

The headlights of a car are a known distance apart. What is the maximum distance at which the eye can resolve these two headlights? Take the pupil diameter to be a fraction of a centimeter.

When dots are placed on a page from a laser printer, they must be close enough so that you do not see the individual dots of ink. To do this, the separation of the dots must be less than the Rayleigh criterion. Take the pupil of the eye to be a few millimeters across and the distance from the paper to the eye to be tens of centimeters; find the minimum separation of two dots such that they cannot be resolved. How many dots per inch (dpi) does this correspond to?

Unreasonable results. An amateur astronomer wants to build a telescope with a diffraction limit that will allow him to see if there are people on the moons of Jupiter. (a) What diameter mirror is needed to be able to see meter-scale detail on a Jovian moon at its distance from Earth, for an average visible wavelength? (b) What is unreasonable about this result? (c) Which assumptions are unreasonable or inconsistent?

Construct your own problem. Consider diffraction limits for an electromagnetic wave interacting with a circular object. Construct a problem in which you calculate the limit of angular resolution with a device, using this circular object (such as a lens, mirror, or antenna) to make observations. Also calculate the limit to spatial resolution (such as the size of features observable on the Moon) for observations at a specific distance from the device. Among the things to be considered are the wavelength of electromagnetic radiation used, the size of the circular object, and the distance to the system or phenomenon being observed.
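The laser-printer exercise above can be estimated directly from the Rayleigh criterion. A quick sketch with assumed values (550 nm light, a 3.0 mm pupil, paper held 30 cm from the eye; these particular numbers are illustrative, not taken from the text):

```python
# Hypothetical inputs for the dots-per-inch estimate
wavelength = 550e-9   # average visible wavelength, m
pupil = 3.0e-3        # pupil diameter, m
distance = 0.30       # eye-to-paper distance, m

theta = 1.22 * wavelength / pupil   # just-resolvable angle, rad
separation = theta * distance       # smallest resolvable dot spacing, m
dpi = 0.0254 / separation           # dots per inch at that spacing (1 inch = 0.0254 m)
print(f"{separation * 1e3:.3f} mm -> about {dpi:.0f} dpi")
```

With these assumptions the eye cannot resolve dots much closer than about 0.07 mm, so a printer in the few-hundred-dpi range already exceeds the eye's diffraction limit at reading distance.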
Glossary

Rayleigh criterion: two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other.

Selected Solutions to Problems & Exercises

(a) about 1.6 × 10⁻⁴ rad; (b) a separation of hundreds of light years at the distance of Andromeda. (The remaining numeric solutions did not survive extraction.) For the Pluto-Charon problem: (a) yes, the telescope should easily be able to discern the two bodies; (b) the fact that it is just barely possible to discern that these are separate bodies indicates the severity of atmospheric aberrations.

Licenses and Attributions

CC licensed content, shared previously: College Physics. Authored by: OpenStax College. Located at: http://cnx.org/contents/ (College_Physics). License: CC BY: Attribution.

Perpetual futures

From Wikipedia, the free encyclopedia

In finance, a perpetual futures contract, also known as a perpetual swap, is an agreement to non-optionally buy or sell an asset at an unspecified point in the future. Perpetual futures are cash-settled, and differ from regular futures in that they lack a pre-specified delivery date and can thus be held indefinitely without the need to roll over contracts as they approach expiration. Payments are periodically exchanged between holders of the two sides of the contract, long and short, with the direction and magnitude of the settlement based on the difference between the contract price and that of the underlying asset, as well as, if applicable, the difference in leverage between the two sides.
Perpetual futures were first proposed by economist Robert Shiller in the early 1990s, to enable derivatives markets for illiquid assets.[ ] However, perpetual futures markets have only developed for cryptocurrencies, following their introduction in 2016 by BitMEX.[ ][ ] Cryptocurrency perpetuals are characterised by the availability of high leverage, sometimes over 100 times the margin, and by the use of auto-deleveraging, which compels high-leverage, profitable traders to forfeit a portion of their profits to cover the losses of the other side during periods of high market volatility, as well as insurance funds, pools of assets intended to prevent the need for auto-deleveraging. Perpetuals serve the same function as contracts for difference (CFDs), allowing indefinite, leveraged tracking of an underlying asset or flow, but differ in that a single, uniform contract is traded on an exchange for all time-horizons, quantities of leverage, and positions, as opposed to separate contracts for separate quantities of leverage typically traded directly with a broker.[ ]

History

Holding a futures contract indefinitely requires periodically rolling over the contract into a new one before the contract's expiry.
However, given that the price of futures typically differs from spot prices, rolling over contracts, particularly repeatedly, creates significant basis risk, leading to inefficiencies when used for hedging or speculation.[ ] In an attempt to remedy these ills, the Chinese Gold and Silver Exchange of Hong Kong developed an "undated futures" market, wherein one-day futures would be rolled over automatically, with the difference between future and spot prices settled between the counterparties.[ ]

In 1993, Robert Shiller proposed perpetual futures, alongside a method for generating asset-price indices using hedonic regression, accounting for unmeasured qualities by adding dummy variables that represent elements of the index, indicating the unique quality of each element, a form of repeated measures design. This was intended to permit the creation of derivatives markets for illiquid, infrequently-priced assets, such as single-family homes, as well as untraded indices and flows of income, such as labour costs or the consumer price index.[ ]

The first significant uses of perpetual futures came in the form of cryptocurrency contracts, first offered by BitMEX in May 2016, which became popular amongst traders by permitting highly-leveraged trading at different time-horizons in a liquid market without inordinate counterparty risk in the absence of regulated intermediaries. As a result, trillions of dollars of cryptocurrency derivatives came to be traded each quarter, with a large majority of that volume consisting of perpetual futures.[ ][ ]

Mechanism

Perpetual futures for the value of a cash flow, dividend or index, as envisioned by Shiller, require the payment of a daily settlement, intended to mirror the value of the flow, from one side of the contract to the other.
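The daily settlement described above (the price change of the perpetual, plus the dividend in excess of a financing return on the contract price) can be sketched in a few lines. All numbers in the example are illustrative, not taken from any real market:

```python
def daily_settlement(f_t: float, f_next: float, d_next: float, r_t: float) -> float:
    """Shiller-style daily settlement s_{t+1}, paid from shorts to longs:
    the change in the perpetual's price plus the dividend in excess of
    the alternative-asset return r_t earned on the contract price f_t."""
    return (f_next - f_t) + (d_next - r_t * f_t)

# Illustrative: perpetual at 100 rises to 101, underlying pays 0.30,
# and the short-term low-risk rate is 0.01% per day.
s = daily_settlement(100.0, 101.0, 0.30, 0.0001)
print(round(s, 4))  # 1.29
```

A positive value flows from shorts to longs; if the price falls or the financing return exceeds the dividend, the sign flips and longs pay shorts.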
At any day t, the dividend s_{t+1}, paid from shorts to longs, is defined as:

s_{t+1} = (f_{t+1} − f_t) + (d_{t+1} − r_t f_t)

where f_t is the price of the perpetual at day t, d_t is the dividend paid to owners of the underlying asset on day t, and r_t is the return on an alternative asset (expected to be a short-term, low-risk rate) between time t and t+1.[ ]

As used in cryptocurrency markets

The perpetual contracts offered by cryptocurrency derivative exchanges are typically priced by versions of the above formula, where the difference in cryptocurrency prices from one day to the next can be thought of as the dividend due to owners of the asset. However, a number of conventions have developed in the cryptocurrency perpetual market, owing to greater volatility and the lack of regulatory requirements mandating a particular market structure. Unlike traditional futures, cryptocurrency perpetuals typically use the cryptocurrency as their base currency, making them inverse futures.[ ] Settlement between the two sides of the trade, known as funding, typically occurs every eight hours. In addition, given the lack of a cryptocurrency repo market and, consequently, an overnight rate, the base interest on cryptocurrency perpetuals is usually a fixed percentage set by the exchange.

Liquidation and insurance funds

Cryptocurrency derivatives exchanges lack central counterparty clearing, the intermediaries in regulated derivatives markets that take collateral and adjust margin requirements in an attempt to eliminate counterparty risk, the risk that holders of one side of the contract will fail to cover their obligations in the event of a margin call after an adverse price move.
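Inverse perpetuals, mentioned above, margin and settle in the cryptocurrency itself while each contract is worth a fixed dollar amount. A commonly used form of the profit-and-loss calculation for such a contract is sketched below; the contract size and prices are illustrative, and the exact convention varies by exchange:

```python
def inverse_pnl_btc(contracts_usd: float, entry_price: float, exit_price: float) -> float:
    """Profit or loss, denominated in BTC, for a long position in an
    inverse perpetual: each contract is worth a fixed USD amount, so
    PnL is the change in the BTC value of that fixed dollar exposure."""
    return contracts_usd * (1.0 / entry_price - 1.0 / exit_price)

# Illustrative: long 10,000 USD of contracts, entry at 20,000, exit at 25,000
print(round(inverse_pnl_btc(10_000, 20_000, 25_000), 4))  # 0.1 (BTC)
```

Because the payout is in the volatile asset itself, gains in dollar terms are worth fewer BTC as the price rises, which is one reason margin requirements and liquidation rules on these contracts are aggressive.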
As a result, liquidation is performed before one side of the contract reaches bankruptcy, when the contract's remaining margin reaches a pre-specified value (the maintenance margin, which determines the liquidation price). If the exchange is able to close out the contract at a profit, the proceeds are typically inserted into the exchange's insurance fund, which guarantees the profitable side when counterparty margin is insufficient, usually when the price of the asset has moved sharply in one direction and the exchange was unable to liquidate at a profit.

Auto-deleveraging

If the exchange's insurance fund is depleted, whether globally or for a particular contract, accounts are ranked according to their profit and leverage. This ranking is used to form an "auto-deleveraging" queue, in which the positions of traders at the front of the queue are closed, at the bankruptcy price, to prevent market losers from going into default.[ ] Auto-deleveraging is intended to reduce counterparty risk by penalising the riskiest traders, as opposed to the more evenly-spread "mutualised" model used by central clearing, and to balance risk in a fully automated manner, requiring no manual discretion by intermediaries.[ ]

See also

Futures contract
Contract for difference

References

^ Shiller, Robert J. (1993). "Measuring Asset Values for Cash Settlement in Derivative Markets: Hedonic Repeated Measures Indices and Perpetual Futures". Journal of Finance, American Finance Association, vol. 48, July.
^ Alexander, Carol; Choi, Jaehyuk; Park, Heungju; Sohn, Sungbin. "BitMEX Bitcoin Derivatives: Price Discovery, Informational Efficiency and Hedging Effectiveness". Rochester, NY. SSRN.
^ Alexander, Carol; Deng, Jun; Zou, Bin. "Optimal Hedging with Margin Constraints and Default Aversion and its Application to Bitcoin Perpetual Futures". arXiv [q-fin.RM].
^ Gardner, B. L.
"Rollover Hedging and Missing Long-Term Futures Markets". American Journal of Agricultural Economics.
^ Gehr, Adam K., Jr. "Undated Futures Markets". The Journal of Futures Markets.
^ "Derivatives' Disparities: Surveying the Bitcoin Perpetual Swap Market". Coin Metrics.
^ Kraken Intelligence. "The Tail Wags the Dog: An Evolution of Bitcoin Futures".
^ "What is Auto-Deleveraging (ADL)?". Bybit Official Help.
^ "Auto Deleveraging Examples". www.bitmex.com.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Perpetual_futures&oldid=". Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization.
National Bank Act

From Wikipedia, the free encyclopedia

Primary federal legislation authorizing the creation of national banks in the US.

The First National Bank in Philadelphia.

The National Banking Acts of 1863 and 1864 were two United States federal banking acts that established a system of national banks and created the United States National Banking System. They encouraged development of a national currency backed by bank holdings of U.S. Treasury securities, and established the Office of the Comptroller of the Currency as part of the United States Department of the Treasury, together with a system of nationally chartered banks. The Act shaped today's national banking system and its support of a uniform U.S. banking policy.

Background

At the end of the Second Bank of the United States in 1836, the control of banking regimes devolved mostly to the states. Different states adopted policies including a total ban on banking (as in Wisconsin), a single state-chartered bank (as in Indiana and Illinois), limited chartering of banks (as in Ohio), and free entry (as in New York).[ ] While the relative success of New York's "free banking" laws led a number of states to also adopt a free-entry banking regime, the system remained poorly integrated across state lines. Though all banknotes were uniformly denominated in dollars, notes would often circulate at a steep discount in states beyond their issue. In the end, there were well-publicized frauds arising in states like Michigan, which had adopted free-entry regimes but did not require redeemability of bank issues for specie.
The perception of dangerous "wildcat banking", along with the poor integration of the U.S. banking system, led to increasing public support for a uniform national banking regime. The United States government, on the other hand, still had limited taxation capabilities, and so had an interest in the seigniorage potential of a national bank. In 1846, the Polk administration created a United States Treasury system that moved public funds from private banks to Treasury branches in order to fund the Mexican-American War. However, without a national currency, the revenue generated this way was limited. This became more urgent during the Civil War, when Congress and Lincoln were struggling to finance the war effort.[ ] Without a national mechanism for issuing currency, Lincoln could not exploit the powers and loopholes that, for example, Britain could with its central bank, in order to finance the high expenses involved. Previously, the damage that national competition would do to state banks had been sufficient to prevent significant national bank chartering; using the war crisis, Lincoln was able to expand this effort.

A "greenback" note issued during the Civil War.

One of the first attempts to issue a national currency came in the early days of the Civil War, when Congress approved the Legal Tender Act of 1862, allowing the issue of $150 million in national notes known as greenbacks and mandating that paper money be issued and accepted in lieu of gold and silver coins. The bills were backed only by the national government's promise to redeem them, and their value was dependent on public confidence in the government as well as on the ability of the government to give out specie in exchange for the bills in the future.
Many thought this promise backing the bills was about as good as the green ink printed on one side, hence the name "greenbacks".[ ] The Second Legal Tender Act, enacted July 11, 1862, a joint resolution of Congress,[ ] and the Third Legal Tender Act, enacted March 3, 1863, expanded the limit to $450 million. The largest amount of greenbacks outstanding at any one time was calculated as $447,300,203.10.[ ]

The National Bank Act (February 25, 1863), originally known as the National Currency Act, was passed in the Senate by a close vote, and was supplemented a year later by the National Banking Act of 1864. The goals of these acts were to create a single national currency and a nationalized bank chartering system, and to raise money for the Union war effort. The Act established national banks that could issue notes which were backed by the United States Treasury and printed by the government itself. The quantity of notes that a bank was allowed to issue was proportional to the bank's level of capital deposited with the Comptroller of the Currency at the Treasury. To further control the currency, the Act taxed notes issued by state and local banks, essentially pushing non-federally issued paper out of circulation.[ ]

Since the establishment of the republic, state governments had held authority to regulate banks. Before the Act, state legislatures typically issued bank charters on a case-by-case basis, taking into consideration whether the area needed a new bank and whether the applicant was of good moral standing. As this system could be subject to corruption, states began passing "free banking" laws in 1837, which meant that any applicant who filled out the correct paperwork and deposited an in-kind payment to the state would be granted a charter. By the 1860s, over half of the states had such a law on the books. However, the National Banking Act of 1864 (
June 3, 1864) brought a close to the issue by establishing federally-issued bank charters, which took banking out of the hands of state governments.[ ][ ] The first bank to receive a national charter was the First National Bank of Philadelphia, Pennsylvania (charter #1).[ ] The first new national bank to open was the First National Bank of Davenport, Iowa.[citation needed] Additionally, the new Act converted many state banks to national banks.[citation needed]

National Bank Acts

A National Bank Note.

National Bank Act of 1863

The National Bank Act of 1863 was passed on February 25, 1863, and was the first attempt to establish a federal banking system after the failures of the First and Second Banks of the United States; it served as the predecessor to the Federal Reserve Act of 1913.[ ][ ] The Act allowed the creation of national banks, set out a plan for establishing a national currency backed by government securities held by other banks, and gave the federal government the ability to sell war bonds and securities (in order to help the war effort). National banks were chartered by the federal government and were subject to stricter regulation; they had higher capital requirements and were limited in how large a share of their holdings they could loan. A high tax on state banks was levied to discourage competition, and by the end of the war most state banks had either received national charters or collapsed.[ ]

National Bank Act of 1864

The 1864 Act, based on a New York State law, brought the federal government into active supervision of commercial banks. It established the Office of the Comptroller of the Currency with the responsibility of chartering, examining, and supervising all national banks.

National Bank Acts of 1865 and 1866

Further acts passed in 1865 and 1866 imposed a tax to speed the adoption of the system. All banks (national or otherwise) had to pay a 10% tax on payments that they made in currency notes other than national bank notes.
The tax rate was intentionally set so high as to effectively prohibit further circulation of state bank and private notes. By this time the conversion from state banks to national banks was well underway. The constitutionality of the tax came before the Supreme Court in Veazie Bank v. Fenno, a case between a state-chartered Maine bank and the Collector of Internal Revenue. The Court ruled in favor of the government. State banks declined until the 1880s, when the growing popularity of checks and the declining profitability of national bank currency issues caused a resurgence.

Resurgence of state banks

The granting of charters led to the creation of many national banks and a national banking system which grew at a fast pace; the number of national banks rose into the thousands within decades of the Act.[citation needed] Initially, this rise in national banking came at the expense of state banking, and the number of state banks dwindled sharply.[citation needed] Though state banks were no longer allowed to issue notes, local bankers took advantage of less strict capital requirements for state banks and opened new branches en masse. These new state banks then served as competition for national banks, growing into the thousands themselves.[citation needed]

The years leading up to the passing of the 10% tax on banknotes were shaped by events surrounding the National Banking Act of 1864. During this time period, Hugh McCulloch "fought against the national banking legislation, which he rightly perceived as a threat to state-chartered banking.
Although he tried to block the system's creation, he [McCulloch] now determined to be its champion."[citation needed] His plan to revamp this portion of the banking system included hiring a new staff and being hands-on with several aspects of the work, such as "personally evaluating applications for bank charters", "consoling prospective bankers", "assisting in the design of the new national bank notes", and "arranging for their engraving, printing, and distribution". In the end, many banks were simply not willing to conform to his system of operations. This prompted Congress to pass "a 10 percent tax on the notes of state banks, signaling its determination that national banks would triumph and the state banks would fade away." A later act, passed on March 3, 1865, imposed a tax of 10 percent on the notes of state banks, to take effect on July 1, 1866. Similar to previous taxes, this effectively forced all non-federal currency from circulation. It also resulted in the creation of demand deposit accounts, and encouraged banks to join the national system, increasing the number of national banks substantially.[ ]

Legacy

The National Banking Acts served to create the (federal-state) dual structure that is now a defining characteristic of the U.S. banking system and economy. The Comptroller of the Currency continues to have significance in the U.S. economy and is responsible for administration and supervision of national banks as well as certain activities of bank subsidiaries (per the Gramm-Leach-Bliley Act of 1999).[ ] The Act was later used by John D. Hawke, Jr., Comptroller of the Currency, to effectively bar state attorneys general from national bank oversight and regulatory roles. Many blame the resulting lack of oversight and regulation for the late-2000s recession, the bailout of the U.S. financial system, and the subprime mortgage crisis.[ ]

See also

US banking law
Cuomo v. Clearing House Ass'n, L.L.C.

Notes

^ Dowd, Kevin (1992).
"us banking in the 'free banking' period". in dowd, kevin (ed.). the experience of free banking. isbn  . ^ lincoln and the founding of the national banking system ^ a b c d gale encyclopedia of u.s. economic history. detroit: gale, . ^ ch. ,  stat.  ^ united states congress. resolution of january , , no. . washington d.c.: ^ ch. ,  stat.  ^ backus, charles k. ( ), the contraction of the currency, chicago, ill.: the honest money league of the northwest – via google books ^ a b grossman, richard s. ( ), u.s. banking history, civil war to wwii, economic history services, archived from the original on - - ^ the north american ( ). philadelphia and popular philadelphians. philadelphia: the american printing house. p.  – via google books. ^ a b dieterle, david a.; simmons, katherine m. ( ). government and the economy: an encyclopedia. abc-clio. pp.  – . isbn  . retrieved may . ^ mason, james e. ( ). the transformation of commercial banking in the united states, - . routledge. p.  . isbn  . ^ berner, robert & grow, brian (october , ). "they warned us about the mortgage crisis". business week. further reading[edit] allen, larry ( ). the encyclopedia of money ( nd ed.). santa barbara, ca: abc-clio. pp.  – . isbn  - . friedman, milton; schwartz, anna j. ( ). a monetary history of the united states, – . pp.  – . isbn  - . niven, john ( ). samuel p. chase: a biography. new york: oxford university press. pp.  – . isbn  - . external links[edit] national-bank act as amended, the federal reserve act and other laws relating to national banks. february, : document compiled under the direction of the comptroller of the currency for the use of the senate, providing dates of acts relating to national banks, – , text of the acts and amendments, and indexes. 
An Act to Provide a National Currency, Secured by a Pledge of United States Bonds, and to Provide for the Circulation and Redemption Thereof, act of June 3, 1864 (National Bank Act of 1864, National Banking Act of 1864).
An Act to Provide for a National Currency, Secured by a Pledge of United States Stocks, and to Provide for the Circulation and Redemption Thereof, act of February 25, 1863 (National Bank Act, National Banking Act, National Currency Act). The Library of Congress.
History of the Office of the Comptroller of the Currency.
History of banking at About.com.
william miller ( – ) paul volcker ( – ) alan greenspan ( – ) ben bernanke ( – ) janet yellen ( – ) jerome powell ( –present) current governors jerome powell (chair) richard clarida (vice chair) randal quarles (vice chair for supervision) lael brainard michelle bowman christopher waller seat vacant current presidents (by district) eric s. rosengren (boston) john c. williams (new york) patrick t. harker (philadelphia) loretta j. mester (cleveland) thomas barkin (richmond) raphael bostic (atlanta) charles l. evans (chicago) james b. bullard (st. louis) neel kashkari (minneapolis) esther george (kansas city) robert steven kaplan (dallas) mary c. daly (san francisco) related central bank criticism of the federal reserve fed model fedspeak retrieved from "https://en.wikipedia.org/w/index.php?title=national_bank_act&oldid= " categories: in american politics in law th united states congress united states federal banking legislation hidden categories: articles with short description short description is different from wikidata all articles with unsourced statements articles with unsourced statements from march articles with unsourced statements from august articles with unsourced statements from october navigation menu personal tools not logged in talk contributions create account log in namespaces article talk variants views read edit view history more search navigation main page contents current events random article about wikipedia contact us donate contribute help learn to edit community portal recent changes upload file tools what links here related changes upload file special pages permanent link page information cite this page wikidata item print/export download as pdf printable version languages português Русский svenska edit links this page was last edited on august , at :  (utc). text is available under the creative commons attribution-sharealike license; additional terms may apply. by using this site, you agree to the terms of use and privacy policy. 
Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization.

Tether is 'too big to fail' — the entire cryptocurrency industry utterly depends on it
Attack of the 50 Foot Blockchain — blockchain and cryptocurrency news and analysis by David Gerard
By David Gerard

Tether is a US dollar substitute token, issued by Tether Inc., an associate of cryptocurrency exchange Bitfinex. Tether is a favourite of the crypto trading markets — it moves at the speed of crypto, without all that tedious regulation and monitoring that actual dollars attract. Also, exchanges that are too dodgy to get dollar banking can use tethers instead. Bryce Weiner has written a good overview of how Tether works in relation to the cryptocurrency industry. His piece says nothing that any regular reader of this blog won't already know, but he says it quite well. [Medium] Weiner's thesis: the whole crypto industry depends on Tether staying up. Tether is too big to fail.
The purpose of the crypto industry, and all its little service sub-industries, is to generate a narrative that will maintain and enhance the flow of actual dollars from suckers, and keep the party going. Increasing quantities of tethers are required to make this happen. We just topped twenty billion alleged dollars' worth of tethers, sixteen billion of those just since March. If you think this is sustainable, you're a fool.

"Bitcoin-tether volumes are now magnitudes greater than bitcoin-dollar volumes. #btc #usdt pic.twitter.com/zizueitkwr" — Kaiko (@kaikodata), December

Pump it! Does crypto really need Tether? Ask the trading market — stablecoins are overwhelmingly tethers by volume. I suspect the other stablecoins are just a bit too regulated for the gamblers. Tether is functionally unregulated. In fact, the whole crypto market is overwhelmingly tethers by volume — Tether has more trading volume than the next three coins, Bitcoin, Ethereum and XRP, combined. [Decrypt] In March, when the price of bitcoin crashed, a pile of exchanges and whales (large holders) put money, or maybe bitcoins, into Tether to keep the system afloat. At least some percentage of the backing for tethers might exist! (All are now complicit in a manner that would be discoverable in court.) This is why crypto is so up in arms about the proposed STABLE Act, which would require stablecoin issuers to become banks — the act would take out Tether immediately, given Tether's extensive and judicially-recognised connections to New York State, and there is no way on earth that a company that comports itself in the manner of Tether is getting a banking charter. (The Tether printer was quiet for about a week after the STABLE Act came out — and the price of bitcoin slid down.) The bitcoin price is visibly pumped by releases of tethers — particularly on weekends.
I find it strangely difficult to believe real-money institutional investors are so keen to get on the phone to Tether on a Saturday.

"A snapshot of Coinbase BTC/USD this weekend. Spot when tethers were deployed on Binance or Huobi."

Clap if you believe in Tether. But it's far wider than the traders. Every person making money from crypto — not just bitcoin, but everything else touching crypto — is painfully aware they need Tether to keep the market pumped. All the little service sub-industries are vested in making crypto look real — and not just handwaving nonsense held up by a hilariously obvious fraud. So Tether must be propped up, at all costs — in the face of behaviour that, at any real financial institution, would have had the air-raid sirens going off long ago. The big question is: what are all those tethers backed by? Tether used to confidently claim that every tether was backed by a dollar held in a bank account. This turned out not to be the case — so now tethers are backed by dollars, or perhaps bitcoins, or loans, or maybe hot air. Various unfortunately gullible journalists have embarrassed themselves by taking what Tether tells them at face value. Matthew Leising from Bloomberg confidently declared in December, based on documents supplied to him by Tether, that Tether seemed fully backed and solvent! [Bloomberg] Then four months later, the New York Attorney General revealed that Tether had admitted to them that tethers were no more than 74% backed, and the backing had failed in October — Tether had snowed Leising. I don't recall Leising ever speaking of this again, even to walk back his claim. Larry Cermak from The Block fell for the same trick recently, from Tether and from people working closely with Tether. [Twitter] Bitfinex/Tether was supposed to give the NYAG a pile of documents by now. The NYAG is talking to the companies about document production, and just what documents they do and don't have — so proceedings have been delayed until January.
[letter, PDF]

The end game. Dan Davies' book Lying for Money (UK, US) talks about the life cycle of frauds. A fraud may start small — but it has to keep growing, to cover up the earlier fraud. So a fraud will grow until it can't. I did a chat with the FT Alphaville unofficial Telegram a few weeks ago. Someone asked a great question: "Who's going to make out like a bandit when/if bitcoin collapses?" Most scam collapses involve someone taking all the money. In the case of bitcoin and Tether, I think the answer is ... nobody. A whole lot of imaginary value just vanishes, like morning dew. I can't think of a way for Tether or the whales to exit-scam with a large pile of actual dollars — because a shortage of actual dollars is crypto's whole problem. I mean, I'm sure someone will do well. But there's no locked-up pile of money to plunder. Crypto "market cap" is a marketing fiction — there's very little realisable money there. What was the "market cap" of Beanie Babies at the bubble's peak? Imaginary, that's what. So how does this end? The main ways out I see are the NYAG or the CFTC finally getting around to doing something. Either of those is a long way off — because regulators move at the speed of regulators. Even the NYAG proceeding is just an investigation at this stage, not a case as such. Everyone in crypto's service industries has a job that's backed by the whales. Perhaps the whales will keep funding stuff? I'm not counting on it, given all the redundancies and shut-down projects of the past couple of years. This will keep going until it can't. Remember that it took seventeen years to take down Bernie Madoff. He got institutional buyers in, too.

Past asset bubble veteran Inch the Inchworm, his Ty slightly askew, hitting the sauce after seeing his crypto portfolio.

Your subscriptions keep this site going. Sign up today!
Tagged: Bryce Weiner, Larry Cermak, Matthew Leising, New York, Tether

Comments on "Tether is 'too big to fail' — the entire cryptocurrency industry utterly depends on it"

Brendan says: Tether's design is so flawed it has to fail, and when it does it will take BTC and the rest of crypto with it. I predicted this years ago and got out of crypto and into BSV — the real Bitcoin, which is not dependent on scammy fraudsters running a counterfeit USD operation to pump the price.

David Gerard says: They had us in the second half, meme.

Eloi says: BSV is crypto...

Mark Chamberlain says: It would be interesting to know if there are any institutional or hedge fund players who understand this enough to be short crypto. Plus, how would it work? If you borrow the currency to sell it short, and the price goes to zero and stops trading, how do you close your position?

Dylan says: This depends on a reliable way of shorting crypto, and I wouldn't trust any of the existing exchanges enough to give money to them (unless I was OK with losing that money). And I trust the "trustless" smart contract tools for doing this sort of thing even less. Plus, you'd have to be confident about when the crypto market explodes, which is difficult to call given how manipulated it is. The market can stay irrational longer than you can stay solvent.

David Gerard says: Yeah, in crypto the platforms themselves are part of the threat model. I expect you could deal with, say, CME in confidence.

Mark Chamberlain says: The news today is about how the shorts were sold out in this new move. The better idea might be to short the new index fund BITW (if I was crazy enough to try). It's now trading at an insane premium to NAV, when you can instead just buy bitcoin itself through PayPal. The kind of thing that happens at blow-off tops...

Greg Allen says: Not really — see articles on Alphaville etc. post-Lehman, on how the move to mandate exchange settlement of derivatives just moved the counterparty risk from other banks to a single centralized exchange. I wouldn't assume the exchange is sufficiently capitalized to help, and I wouldn't even expect it to be on the hook for the other side of your futures contract.

Massimo says: Interesting thesis, could you expand on the subject please?

Mark Bloomfield says: Please keep on at this: the message needs to get out.

Sesang says: Nice read. Ty.

David says: Excellent article. There's none so blind as those who don't want to see.
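The commenters' worry about shorting into a collapse can be put into toy numbers: a short position's paper profit is only worth what the counterparty can actually pay out. Here is a minimal sketch — every figure is hypothetical, and the single `exchange_reserves` cap is a deliberately crude stand-in for counterparty risk:

```python
# Sketch of the comment thread's point: if you short and the price goes to
# zero, your paper profit may exceed what the venue is good for.
# All numbers below are hypothetical illustrations, not market data.

def short_pnl(entry_price: float, exit_price: float, size: float) -> float:
    """Profit on a short: sold at entry_price, bought back at exit_price."""
    return (entry_price - exit_price) * size

def realisable(pnl: float, exchange_reserves: float) -> float:
    """You can only collect what the counterparty can pay out."""
    return min(pnl, exchange_reserves)

# Short 10 BTC at $20,000; price goes to zero: paper profit $200,000 ...
paper = short_pnl(20_000, 0, 10)
# ... but if the exchange holds only $50,000 of actual dollars, that's the cap.
print(paper, realisable(paper, 50_000))  # 200000 50000
```

This is the same counterparty-risk point Greg Allen makes about mandated exchange settlement: the cap moves from many banks to one venue, it doesn't disappear.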
The content of this site is journalism and personal opinion. Nothing contained on this site is, or should be construed as providing or offering, investment, legal, accounting, tax or other advice. Do not act on any opinion expressed here without consulting a qualified professional. I do not hold a position in any crypto asset or cryptocurrency or blockchain company. Amazon product links on this site are affiliate links — as an Amazon Associate I earn from qualifying purchases. (This doesn't cost you any extra.)
Copyright © David Gerard. Powered by WordPress and HitMag.

Information Technology and Libraries — Current Issue

Editorials:
- Improving ITAL's Peer Review: Letter from the Editor — Kenneth J. Varnum
- Making Room for Change through REST — Margaret Heller
- Do Space's Virtual Interview Lab: Using Simple Technology to Serve the Public in a Time of Crisis — Michael P. Sauers
- Service Barometers: Using Lending Kiosks to Locate Patrons — Will Yarbrough

Articles:
- The Impact of COVID-19 on the Use of Academic Library Resources — Ruth Sara Connell, Lisa C. Wallis, David Comeaux
- Emergency Remote Library Instruction and Tech Tools: A Matter of Equity During a Pandemic — Kathia Ibacache, Amanda Rybin, Eric Vance
- Off-Campus Access to Licensed Online Resources through Shibboleth — Francis Jayakanth, Anand T. Byrappa, Raja Visvanathan
- A Framework for Measuring Relevancy in Discovery Environments — Blake L. Galbreath, Alex Merrill, Corey M. Johnson
- Beyond VIAF: Wikidata as a Complementary Tool for Authority Control in Libraries — Carlo Bianchini, Stefano Bargioni, Camillo Carlo Pellizzari di San Girolamo
- Algorithmic Literacy and the Role for Libraries — Michael Ridley, Danica Pawlick-Potts
- Persistent URLs and Citations Offered for Digital Objects by Digital Libraries — Nicholas Homenda

Register your interest: Open Knowledge Justice Programme community meetups. The Open Knowledge Justice Programme is kicking off a series of free, monthly community meetups to talk about public impact algorithms. Do you want to learn more about PIAs — what they are, how to spot them, and how they may affect your clients? Join us: to listen to each other and our guest speakers, and perhaps to share and learn about this fast-changing issue. When? Lunchtime, every second Thursday of the month. How? Register your interest using the form at www.thejusticeprogramme.org/community. Sign-ups are added to the Open Knowledge Justice Programme mailing list, which carries an occasional newsletter of curated news on public impact algorithms and announcements of upcoming trainings.

エロセフレ作り日記 [translated from Japanese; an adult dating-advice blog] Post teasers: "Women unsuited to casual-partner recruiting: which types to watch out for" — when recruiting on dating sites, there are types of women you must be careful of; to avoid trouble later, it's best not to approach the risky ones in the first place [...]. "How should you write a profile for finding casual partners? Tips" — profile writing deserves the most effort, since whether women reply to your messages often comes down to the profile [...]. "Can't find a casual partner! Points men who strike out on dating sites should reconsider" — the specific causes such men fall into, and how to fix them [...]. "Which women are the best targets? Who to approach on dating sites" — choose likely candidates by age and traits, and approach them with a concrete strategy [...].

Jean Leon Gerome Ferris — From Wikipedia, the free encyclopedia. American painter. Born August, Philadelphia, Pennsylvania; died March, Philadelphia, Pennsylvania. Nationality: American. Education: Pennsylvania Academy of the Fine Arts. Known for: painting. (Infobox image: The First Thanksgiving.) Writing the Declaration of Independence, Ferris's idealized depiction of (left to right) Benjamin Franklin, John Adams, and Thomas Jefferson of the Committee of Five working on the Declaration, was widely reprinted.
The Landing of William Penn. Jean Leon Gerome Ferris (August – March[ ]) was an American painter best known for his series of scenes from American history, entitled The Pageant of a Nation, the largest series of American historical paintings by a single artist.[ ]

Life and career

Ferris was born in Philadelphia, Pennsylvania, the son of Stephen James Ferris, a portrait painter who was a devotee of Jean-Léon Gérôme (after whom he was named) and also an admirer of Mariano Fortuny.[ ] He grew up around art; he was trained by his father,[ ] and his uncles Edward Moran and Thomas Moran were both acclaimed painters.[ ] Ferris enrolled in the Pennsylvania Academy of the Fine Arts and trained further at the Académie Julian under William-Adolphe Bouguereau.[ ] He also met his namesake Jean-Léon Gérôme, who greatly influenced his decision to paint scenes from American history. Ferris wrote, "[Gérôme's] axiom was that one would paint best that with which he is most familiar".[ ] His early subjects were Orientalist in nature, since that movement was in vogue when he was young. He exhibited a painting entitled Feeding the Ibis.[ ] He later gained a reputation as a historical painter, and he embarked on his dream of creating a series of paintings that told a historical narrative. He sold General Howe's Levee, but he then realized that such a series could not be complete if the separate paintings could not be kept together. Consequently, he never sold another, but he did sell the reproduction rights to various publishing companies. This had the effect of greatly popularizing his work, as these companies made prints, postcards, calendars, and blank-backed trade cards to use in advertisements. Laminated cards of these works were still being sold many years later.[ ] Ferris married Annette Amelia Ryder, and the couple had a daughter named Elizabeth Mary.[ ] He died in Philadelphia.
The paintings showed idealized portrayals of famous moments from American history. The complete series was shown at Independence Hall in Philadelphia, then moved next door to Congress Hall. In later years, it was shown in a number of locations, including the Smithsonian Institution, before being returned to the Ferris family.[ ] His works were widely popular for many years, but modern critics are far less generous in their praise. The American Philosophical Society claims that his historical paintings confuse "verity with verisimilitude",[ ] and art historian Gerald Ackerman describes them as "splendid in the accuracy of accessories, clothing and especially in the details of land conveyances and ships", but "extremely dry in execution and rather monotonous in composition."[ ]

References
^ Sponsel, Rudolf. "J.L.G. Ferris" (in German). Allgemeine und Integrative Psychotherapie.
^ Nuhn, Roy. "J. L. G. Ferris". The Antique Shoppe Newspaper.
^ "The Ferris Collection". Building a National Collection. Smithsonian Institution.
^ Mitnick, Barbara J. "Paintings for the People". In Ayres, William (ed.). Picturing History: American Painting. Rizzoli International Publications.
^ Ackerman, Gerald M. American Orientalists. ACR Edition.
^ The National Cyclopaedia of American Biography. J. T. White Co.
^ Fanelli, Doris Devine; Diethorn, Karie. History of the Portrait Collection, Independence National Historical Park. American Philosophical Society.
External links: Media related to Jean Leon Gerome Ferris at Wikimedia Commons.

Retrieved from "https://en.wikipedia.org/w/index.php?title=jean_leon_gerome_ferris&oldid= "
Number go up! New bitcoin peak, exactly three years after the last — what's happening here
By David Gerard

So firstly, I called it two years ago:

"I'd expect the bubbly headlines to start again — it feels to me like that's about how long it will take to grow a fresh crop of greater fools. In general, there's always going to be people who are desperate to buy into this year's version of ostrich farming." — David Gerard 🐍👑 (@davidgerard), November

Not that this was any feat of prognostication. Bitcoin is a speculative commodity without a use case — there is nothing it can do, other than bubble, or fail to be bubbling. It was always destined to stagger along, and then jump at some point, almost certainly for stupid reasons. Remember that the overarching goal of the entire crypto industry is to get those rare actual-dollars flowing in again. That's what this is about.
We saw a large pile of tethers being lined up on Binance and Huobi in the week previously. These were then deployed en masse.

Pump it! You can see the pump starting in the Coinbase chart. Notice the very long candles, as bots set to sell at $20,000 sell directly into the pump. Tether hit 20 billion, I hit 20,000 Twitter followers, BTC hit $20,000. It is clearly as JFK Jr. foretold. A series of peaks followed over the next few days, as the pumpers competed with bagholders finally taking their chance to cash out — several successively higher prints, up to the peak as I write this. This was exactly three years after the previous high in December 2017. [Fortune]

Coinbase BTC/USD chart at the moment of the peak. Dull, isn't it?

Can you cash out? The Coinbase chart showed quite a lot of profit-taking, as bagholders could finally dump. When you see a pump immediately followed by a drop, that's what's happening. Approximately zero of the people saying "best investment of the last decade" got in a decade ago — they bought at a peak, and are deliriously happy that they can finally cash out. Seriously: if you have bitcoin bags, this is the time to at least make up your cost basis — sell enough BTC to make up the actual money you put in. Then the rest is free money. Binance and Coinbase showed the quality we've come to expect of cryptocurrency exchanges — where you can make them fall over by using them. But I'm sure Coinbase will get right onto letting you have your actual dollars. [Cointelegraph]

So does everyone love bitcoin now? Not on the evidence. The Google Trends chart is still dead — see above. That peak in 2017 is what genuine organic retail interest looks like. This isn't that, at all.
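The "make up your cost basis" advice above is simple arithmetic. A minimal sketch with hypothetical numbers — this deliberately ignores fees, spread, and capital-gains tax:

```python
# Sketch of the post's "recover your cost basis" advice.
# All inputs are hypothetical illustrations.

def btc_to_sell_to_recoup(cost_basis_usd: float, current_price_usd: float) -> float:
    """How many BTC to sell so the proceeds cover the actual dollars put in."""
    if current_price_usd <= 0:
        raise ValueError("price must be positive")
    return cost_basis_usd / current_price_usd

# Put in $5,000, hold 1 BTC, price now $20,000:
sell = btc_to_sell_to_recoup(5_000, 20_000)
print(sell)        # 0.25 -- sell a quarter of the bag
print(1.0 - sell)  # 0.75 BTC left is "free money" in the post's terms
```

The point of the exercise is psychological as much as arithmetical: once the sale covers the cash actually put in, whatever remains can go to zero without the holder losing real dollars.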
retail still isn’t diving into bitcoin — even with michael saylor at microstrategy and jack dorsey at square spending corporate funds on marketing bitcoin as an investment for actual institutions, and getting holders at hedge funds to do the same. but the marketing will continue. remember that there’s a lot of stories happening in crypto right now. bitmex is still up, but arthur hayes is a fugitive. the department of justice accuses bitmex — and specifically hayes — of trading directly against their own customers. ifinex (bitfinex/tether) are woefully short of the documents that the new york attorney general subpoenaed and that they’ve been fighting against producing for two years now, and that they must produce by january . remember: if ifinex had the documents, they’d have submitted them by now. so there’s a pile of people in trouble — who have coins they need to dump. (and john mcafee did promise “i will eat my dick on national television” if btc didn’t hit $ , within three years of . so maybe this is a last ditch mcafee penis pump.) [the dickening] a pumped bitcoin peak is just one story among many going on right now — and a completely expected one. update: here’s the smoking gun that this was a  coordinated pump fueled by stablecoins — different addresses trying to deposit stablecoins to exchanges in one block of transactions on ethereum, just a few minutes before the first price peak. [twitter]   lots of people deposited stablecoins to exchanges mins before breaking $ k. price is all about consensus. i guess the sentiment turned around to buy $btc at that time. this indicator is helpful to see buying power. set alert 👉 https://t.co/xjy mvaa pic.twitter.com/gv j n r g — ki young ju 주기영 (@ki_young_ju) december ,   start the new year by finding a way to create a little joy, no matter how small or fleeting pic.twitter.com/fiicb bw c — foone (@foone) january , your subscriptions keep this site going. sign up today! 
Tagged: Binance, bitcoin, Coinbase, number go up, trading

One comment on "Number go up! New bitcoin peak, exactly three years after the last — what's happening here"

Sherman McCoy says: I am curious which pair the arbitrage bots are arbitraging. They end up holding tether, I presume, after selling into the pump — what then?
buy the books! libra shrugged: us and uk/europe paperbacks; kindle (uk, us, australia, canada and all other kindle stores) — no drm; google play books (pdf); apple books; kobo; smashwords; other e-book stores. attack of the 50 foot blockchain: us and uk/europe paperbacks; kindle — no drm; google play books (pdf); apple books; kobo; smashwords; other e-book stores. available worldwide.

the content of this site is journalism and personal opinion. nothing contained on this site is, or should be construed as providing or offering, investment, legal, accounting, tax or other advice. do not act on any opinion expressed here without consulting a qualified professional. i do not hold a position in any crypto asset or cryptocurrency or blockchain company. amazon product links on this site are affiliate links — as an amazon associate i earn from qualifying purchases. (this doesn’t cost you any extra.)
copyright © david gerard — powered by wordpress and hitmag.

bitcoin myths: immutability, decentralisation, and the cult of “21 million”

june 2021 — by david gerard — comments.

the bitcoin blockchain is famously promoted as “immutable.” is the structure of bitcoin itself immutable? is the 21 million btc limit out of the control of any individual? is bitcoin a decentralised entity of its own, the essence of its operating parameters unalterable by mere fallible humans? is bitcoin truly trustless?

well, no, obviously — even though bitcoiners literally argue all of the above (most notably saifedean ammous in the bitcoin standard, and andreas antonopoulos in his books on bitcoin), and these ideas are standard in the subculture.

decentralisation was always a phantom.
at most it’s a way to say “can’t sue me, bro.” every process in bitcoin tends to centralisation — because bitcoin runs on economic incentives, and centralised systems are more economically efficient.

trustlessness is also a phantom. bitcoin had to create an entire infrastructure of trusted entities to operate in the world.

something called “bitcoin” will be around for decades. all you need is the software, the blockchain data, and two or more enthusiasts. but bitcoin’s particular mythology and operating parameters are entirely separate questions.

bitcoin’s basic operating parameters are unlikely to change in the near future — but this is entirely based on trust in the humans who run it. their actions are based on whether changes risk spooking the suckers with the precious actual-dollars. the bitcoin cash debacle destroyed any hope of substantive change to bitcoin for a while, leaving bitcoin as just a speculative trading commodity with nothing else going for it. but if a new narrative is needed, all bets are off.

social convention is entirely normal, and how everything else works — but it’s not the promise of immutable salvation through code. that was always delusion at best.

image by mike in space

how bitcoin is marketed

bitcoin is not about the technology. it’s never been about the technology. bitcoin is about the psychology of getting rich for free. people will say and do anything if you tell them they can get rich for free. you don’t even have to deliver.

bitcoin also has an elaborate political mythology — which is largely delusional and literally based in conspiracy theories. the marketing pitch is that the actual-money economy will surely collapse any moment now! and if you get into bitcoin, you can get rich from this. if you want to get rich for free, take on this weird ideology. don’t worry if you don’t understand the ideology yet — just keep doing the things, and you’ll get rich for free!
that the mythology is so clearly at odds with reality is a feature, not a bug — it just proves the world’s out to get you, and you need to stick with the tribe.

so the key ingredient in bitcoin is mythology. and job number one is: don’t spook the suckers. also, say “21 million” a lot. it’s a mantra to remind the believers to keep the faith.

the centralisation of mining

when satoshi nakamoto designed bitcoin, he wanted the process of generating new bitcoins to be distributed. he distributed coins to whoever processed a block of transactions. (these blocks were chained together to form the ledger — which was called the “block chain” in the original bitcoin source code.)

just giving away bitcoins to anyone who asked wouldn’t work, because of the sybil problem — you couldn’t tell if a thousand people asking for coins were really just one guy with a thousand sockpuppets. satoshi’s way around this was to require some form of unfakeable commitment before you’d be allowed to validate the transactions and win the coins.

he came up with using an old idea called “proof-of-work” — which is really proof of waste. you waste electricity to show your commitment, and the competitors win bitcoins in proportion to how much electricity they waste. this is called “bitcoin mining,” in an analogy to gold mining. miners guess numbers and calculate a hash as fast as they can; if their guess hashes to a small enough number, they win the bitcoins!

satoshi envisioned widely distributed bitcoin mining — “proof-of-work is essentially one-cpu-one-vote.” [bitcoin white paper, pdf]

the problem here is that mining has economies of scale. the bigger a mining operation you have, the more you can optimise the process, and calculate more hashes with each watt-hour of energy. this means that proof-of-work mining naturally centralises. and this is what we see happening in practice.
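the hash-guessing described above can be sketched in a few lines of python. this is a toy illustration, not bitcoin’s actual code: real mining double-sha-256 hashes an 80-byte block header against a difficulty target, and the `difficulty_bits` knob here is a made-up stand-in for that target.

```python
import hashlib

def mine(data: str, difficulty_bits: int, max_nonce: int = 10_000_000):
    """Guess nonces until sha256(data:nonce) falls below a target.

    `difficulty_bits` is a toy stand-in for bitcoin's real difficulty
    target (bitcoin actually double-hashes an 80-byte block header).
    """
    target = 2 ** (256 - difficulty_bits)
    for nonce in range(max_nonce):
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest  # a winning guess — the "block reward" moment
    return None, None  # difficulty too high for this many guesses

nonce, digest = mine("block of transactions", difficulty_bits=16)
print(nonce, digest)
```

at 16 bits a hit takes around 65,000 guesses on average; each extra bit of difficulty doubles the expected work, which is the whole “proof of waste.”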
for the first year, satoshi personally did a lot of the mining — accumulating a stash of around a million bitcoins, which he never moved. more individual cpu users joined in over 2009 and 2010; but by late 2010, people had started mining on video cards, which could calculate hashes much faster — to satoshi’s shock. [thread]

when you have a tiny share of all the mining, rewards are sporadic — you might not see a bitcoin for months. the solution was to join together in mining pools, which would share the rewards. the first was slushpool, in late 2010. the deepbit pool ran for several years from 2011; at its peak, it controlled a large share of all mining.

by 2013, application-specific integrated circuits (asics) that did nothing but mine bitcoins as fast as possible were being deployed; you couldn’t compete without using asics. the most successful early asic manufacturer was bitmain — who also controlled a large mining pool.

the doomsday scenario in early bitcoin was a “51% attack” — if you had more than half of the mining power, you could block anyone else’s transactions and accept only those you wanted, and the bitcoin blockchain would read the way you wanted it to. if anyone achieved 51%, it was game over! ghash.io achieved 51% in july 2014. [guardian, 2014] the ghash.io pool promptly split apart, to calm the upset bitcoin fan base. nobody spoke of the 51% problem in bitcoin again. (though altcoins have frequently suffered 51% attacks.) even ammous talked about 51% attacks in the bitcoin standard, and somehow forgot to mention that this had already happened.

but from then on, bitcoin mining was indisputably centralised. in 2015, the men controlling the bulk of bitcoin mining stood on stage together at a conference. three or four entities have run bitcoin mining since then. the only thing preventing miner misbehaviour is wanting to avoid spooking the suckers — it’s completely trust-based.

bitcoin now uses a country’s worth of electricity [digiconomist; cbeci] for no actual reason. you could do the transactions on an iphone.

controlling the mining chips also controls mining.
you can’t afford to piss off bitmain — as sia found out, when they couldn’t get chips for their minor altcoin fabricated in china, because bitmain didn’t like it.

the centralisation of development

bitcoin’s operating parameters get tweaked all the time. there’s even a standard process for suggesting improvements. peter ryan’s essay “bitcoin’s third rail: the code is controlled” details how the bitcoin development process works in practice: you submit changes to a core group, who then decide whether this is going in, and how it will work. [ryan research]

the thing is, this is completely normal. this is how real open source projects work. the essay presents this basic reality in a straightforward fashion; i’d take issue only with the headline, which paints this as in any way surprising or shocking. it really isn’t.

there are multiple bitcoin wallets — the official bitcoin.org wallet and many, many others. so the wallet software is nicely varied. you are, of course, trusting the developers — because approximately zero bitcoin users are capable of auditing the code, let alone bother doing so. hardware wallets such as ledger have also been exploited plenty of times.

what if you don’t like what the developers do? the fundamental promise of free software, or open source software, is that you have the freedom to change the code and do your own version, and the original developers can’t stop you. if you don’t like the official version of linux, you can go off and do your own. in the world of cryptocurrency, this means you can take the bitcoin code, tweak a few numbers, and start your own altcoin. and thousands did.

what this doesn’t get you is control of the existing network that runs a crypto-token called “btc,” which sells for a lot of actual money on crypto exchanges. that’s the prize.

the centralisation of trading

crypto exchanges also benefit from economies of scale — there’s more liquidity and volume on the biggest exchanges.
however, there’s now a reasonable variety of exchanges to choose from — not like 2013, when everyone’s coins were in mt. gox. you can even pick whether you want a pro-regulation exchange like gemini, or a free-for-all offshore casino!

exchanges collectively hold one important power: what they trade as the token with the ticker symbol “btc.” as we’ll see later, this power matters.

crypto remains utterly dependent on the us dollar. bitcoin maxis who profess to hate dollars will never shut up about the dollar price of their holding. (unless number goes down — then they’re suddenly into bitcoin for the technology.)

the point of crypto trading is to cash out at some point — and actual dollars have regulation. fincen is interested in what you do with actual money. dollars pretty much always pass through the new york banking system, which creates a number of issues for crypto companies.

a few exchanges let you get actual dollars in and out. many more — including some of the most popular — are too dodgy to get proper banking. these use tethers, a stablecoin supposedly worth one dollar. crypto trading solved the tawdry nuisance of dollars being regulated by using tethers instead, and cashing out your bitcoin winnings through gateway exchanges such as coinbase or bitstamp.

if you can believe the numbers reported by the tether exchanges — which, to be fair, you probably can’t without a massive fudge factor — the overwhelming majority of trading against bitcoin, or indeed any other crypto, is in tethers. this becomes a systemic issue for the crypto markets when tether’s backing turns out to be ludicrously questionable. the crypto market pumpers seem to think tether is in trouble; the usdc stablecoin is now issuing dollar-equivalent tokens at a rate of billions a month.
usdc’s accountant attestations recently changed from saying that every usdc is backed by dollars in a bank account to saying that they may also be backed by “approved investments” — without saying what those investments might be, or in what proportion. [march attestation, pdf] i’m sure it’ll be fine.

scaling bitcoin

bitcoin doesn’t work at scale, and can’t work — because they took satoshi’s paper-and-string proof of concept and pressed it into production. this approach never goes well. but insisting the fatal flaws are actually features has pacified the suckers so far.

bitcoin is not very fast. it can process a theoretical maximum of about seven transactions per second (tps) — total, world-wide, across the whole network. in practice, it’s usually around three or four tps. for comparison, visa claims capacity in the tens of thousands of tps. [visa, pdf]

in mid-2017, the bitcoin network finally filled its tiny transaction capacity. transactions became slow, expensive and clogged. bitcoin regularly had tens of thousands of unconfirmed transactions waiting in the queue; at the worst moments, the backlog ran into six figures. [ft, free with login]

nobody could agree how to fix this, and everyone involved despised each other. the possible solutions were:

1. increase the block size. this would increase centralisation even further. (though that ship had really already sailed.)
2. the lightning network: bolt on a completely different non-bitcoin network, and do all the real transactions there. this only had the minor problem that the lightning network’s design couldn’t possibly fix the problem.
3. do nothing. leave the payment markets to use a different cryptocurrency that hasn’t clogged yet. (payment markets, such as the darknets, ended up moving to other coins that worked better.)

bitcoin mostly chose option 3 — though option 2 is talked up, just as if saying “but, the lightning network!” solves the transaction clog.

but, the lightning network!

the lightning network was proposed in february 2015 as a solution to the clog that everyone could see was coming.
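the throughput ceiling quoted above follows directly from bitcoin’s block parameters: roughly one megabyte of transactions every ten minutes. a quick back-of-the-envelope check — the average transaction sizes used here (250 and 500 bytes) are illustrative assumptions, not protocol constants:

```python
# Back-of-the-envelope bitcoin throughput from its block parameters:
# ~1 MB of transactions per block, one block every ~10 minutes.
# The average transaction sizes below are illustrative assumptions.
BLOCK_BYTES = 1_000_000
BLOCK_INTERVAL_SECONDS = 600

def tps(avg_tx_bytes: int) -> float:
    """Transactions per second for a given average transaction size."""
    txs_per_block = BLOCK_BYTES // avg_tx_bytes
    return txs_per_block / BLOCK_INTERVAL_SECONDS

print(round(tps(250), 1))  # small transactions: ~6.7 tps, the optimistic ceiling
print(round(tps(500), 1))  # bulkier transactions: ~3.3 tps, nearer practice
```

small transactions give the optimistic single-digit ceiling; bulkier ones give the three-or-so tps seen in practice. either way, the whole network’s capacity is a rounding error next to a conventional payment processor.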
it’s really clearly an idea someone made up off the top of their head — but it was immediately seized upon as a possible solution, because nobody had any better ideas.

lightning doesn’t work as advertised, and can’t work. this is not a matter of a buggy or incomplete implementation — this is a matter of the blitheringly incompetent original design: prepaid channels, and a mesh network that would literally require new mathematics to implement.

users have to set up a pre-funded channel for lightning transactions, by doing an expensive transaction on the bitcoin blockchain. this contains all the money you think you’ll ever spend in the channel. this is ridiculously impractical. you’re not going to send money back-and-forth with a random coffee shop — you want to give them money and get your coffee. lightning’s promise of thousands of transactions per second can obviously only work if you have large centralised entities that almost everyone opens channels with. you could call them “banks,” or “money transmitters.”

lightning originally proposed a mesh network — you send money from a to b via c, d and e, who all have their own funded channels set up. but routing transactions across a mesh network, from arbitrary point a to arbitrary point b, without a centralised map or directory, is an unsolved problem in computer science. this is even before the added complication of the network’s liquidity changing with every transaction. this basic design parameter of lightning requires a solution nobody knows how to code. again, this drives lightning toward central entities as the only way to get your transactions across the network.

there are other technical issues in the fundamental design of lightning, which make it a great place to lose your money to bugs, errors or happenstance. [reddit]

frances coppola has written several essays on lightning’s glaring design failures as a payment system.
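to see why liquidity makes routing awkward, here’s a toy sketch of the mesh idea: find a path where every hop’s channel balance can carry the payment. the network, names and balances are invented for illustration — and note the sketch cheats by using a global view of all channels and balances, which is exactly what real lightning nodes don’t have.

```python
from collections import deque

# Channels as directed edges with a spendable balance (liquidity).
# A hypothetical toy network — names and balances are made up.
channels = {
    ("alice", "carol"): 5, ("carol", "bob"): 2,
    ("alice", "dave"): 8, ("dave", "bob"): 8,
}

def find_route(src: str, dst: str, amount: int):
    """Breadth-first search for a path where every hop can carry `amount`.

    Real lightning nodes lack this global channel map, and balances
    shift with every payment — which is why routing is the hard part.
    """
    queue = deque([(src, [src])])
    seen = {src}
    while queue:
        node, path = queue.popleft()
        if node == dst:
            return path
        for (a, b), balance in channels.items():
            if a == node and b not in seen and balance >= amount:
                seen.add(b)
                queue.append((b, path + [b]))
    return None  # no path has enough liquidity at every hop

print(find_route("alice", "bob", 3))   # routes via dave; the carol leg is too small
print(find_route("alice", "bob", 10))  # no route at all for a larger payment
```

even in a four-node toy, a payment can fail outright for want of liquidity on some hop — and the payer can’t know in advance, because the balances are private and change with every transaction.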
[forbes; blog posts; coindesk] lightning is an incompetent banking system, full of credit risks, set up by people who had no idea that was what they were designing — and who are still in denial that that’s what lightning is.

the only purpose lightning serves is as an excuse for bitcoin’s miserable failure to scale, with the promise that it’ll be great in eighteen months. lightning has been promising that it’s eighteen months from greatness since 2015.

a good worked example of lightning as an all-purpose excuse can be seen in the present version of el salvador’s nascent bitcoin system. strike loudly proclaims it uses lightning — but their own faq says they don’t pass transactions from the rest of the network. bitcoin beach uses lightning — but reports indicate it doesn’t reliably pass money back out again, except via the slow and expensive bitcoin blockchain.

with sufficient thrust, pigs fly just fine. [rfc 1925] the lightning fans strap a rocket to porky and then try to sell you on his graceful aerobatics.

the bitcoin cash split

so why not just make the bitcoin blocks bigger? if bitcoin sucks, we can fix it, right?

bitcoin cash, in 2017, was the last time there was a serious attempt to fix bitcoin’s operating parameters.

bitcoin developers could see the blocks filling well in advance. some proposed a simple fix: raise the size of a block of transactions from 1 megabyte to 2 or 8 megabytes. but the bitcoin community was sufficiently dysfunctional that even this simple proposal led to community schisms, code forks, retributive ddos attacks, death threats, a split between the chinese miners and the american core programmers … and plenty of other clear evidence that this and other problems in the bitcoin protocol could never be fixed by a consensus process. “trustlessness” just ends up attracting people who can’t be trusted. [new york times]

this didn’t make the problems go away.
finally, in 2017, large holder roger ver, in concert with large mining pool and mining hardware manufacturer bitmain, promoted bitcoin cash, with large blocks, as the replacement for the deprecated bitcoin software. this wasn’t just starting a fresh altcoin — it was a fork of the blockchain itself. so everyone who had a large bitcoin holding suddenly had the same holding of bitcoin cash as well. free money!

the bitcoin cash split was a decentralised judgement of solomon — wherein an exasperated solomon says “all right, we’ll just cut the baby in two then,” and the mothers think that’s a great idea, and start fighting bitterly over whether to slice the kid horizontally or vertically.

bitcoin cash launched in 2017, and wanted to take over the “btc” ticker on exchanges. this failed — it had to go with bch. but bitcoin cash did have a chance at the “btc” ticker for a while there; exchanges were watching to see if bitcoin cash became more popular. bitmain mined bch furiously instead of mining btc — and btc’s block times went from ten minutes to over an hour. as it happened, bitcoin was in the middle of a bubble — so nobody much noticed or cared, because all the number-go-up action was happening on exchanges, and not on chain.

ver owned bitcoin.com, and furiously promoted bch as a bitcoin that you could use for cash transactions. this was a worthy ambition — but nobody much cared. mostly, the few retail users of bitcoin would get confused between the two, and end up sending money to an address on the wrong chain.

bitcoin cash completely failed to gain traction as a retail crypto. most glaringly, it failed to get any takeup in the darknet markets — the first real use case for bitcoin, and the people having the most practical trouble with bitcoin’s transaction clog.

the btc version of bitcoin also lost all traction as a retail crypto — the average transaction fee peaked at over $50 in december 2017.
it was around this time that bitcoin advocates stopped trying to pretend that bitcoin would ever work as currency. they went in hard on the “digital gold” narrative, and pretended this had always been the intention — never mind that the bitcoin white paper is literally titled “bitcoin: a peer-to-peer electronic cash system.”

there were other completely ridiculous fights to the death for insanely low stakes around this time — segwit (which was eventually adopted, making blocks slightly larger), segwit2x (which wasn’t), and uasf (where non-miners thought they could sabotage protocol changes they didn’t like). these were different in technical terms, but not different in bitterness or stupidity. none of the disputes were really technical — it was all the politics of who got to make money. everyone involved hated everyone else, and characterised their opponents as working in bad faith to sabotage bitcoin. they figured that if they shouted enough abuse at each other, they’d get rich faster, or something.

bitcoin cash fell flat on its face. bitmain ended up with around a million bch of “inventory” — that is, a pile of coins there was no market for — and fired its entire bch team in late 2018. [ccn] bch continues as just another altcoin with hopes and dreams — and not as any serious prospect to take over from btc. the main thing keeping it alive is that it’s listed on coinbase.

also, you can still send bcashers into conniptions just by calling bitcoin cash “bcash” — the bcashers decided that calling bch-coin “bcash” was a slur. though their real objection was that this term leaves out the word “bitcoin.” it’s unfortunate that bcash’s graphic designer never told them about the effects of putting a transparent logo png on a white background, so it looks like it says “b cash.” [bitcoincash.org, archive]

jonathan bier has written a book about this slice of bitcoin history: the blocksize war (us, uk).
peter ryan has reviewed the book; he thinks the chronology is correct, but the presentation is horribly slanted. [twitter] i concur, and it’s glaring right from the intro — it’s a book-length blog rant about people who really pissed bier off, and it’s incomprehensible if you weren’t following all of this at the time. it’s on kindle unlimited.

this is why you get a proper graphic artist, and pay them.

but the blockchain’s immutable, right?

the blockchain is probably immutable — there’s no real way to change past entries without redoing all the hash-guessing that got to the present. but the blockchain is data interpreted by software.

in 2016, ethereum suffered the collapse of the dao — a decentralised organisation, deliberately restricted from human interference, running on “the steadfast iron will of unstoppable code.” (bold in original.) the dao — which held around 14% of all ether at the time — was hacked. how did the ethereum developers get around this? they changed how the software interpreted the data! immutability lasted precisely until the big boys were in danger of losing money.

there are already legal rumblings in this direction — craig wright, the man who has previously failed to be satoshi nakamoto, has sent legal letters to bitcoin developers demanding that they aid him in recovering 111,000 bitcoins that he doesn’t have the keys for. [reuters]

has history ended, then?

there are people who would quite like the 21 million bitcoin limit to change: the miners. the 2020 halving dropped the issuance of bitcoins from 12.5 btc per block to 6.25 btc. in 2024, that’ll drop again, to 3.125 btc.

the question is power — miners have a lot of power in the bitcoin system. that power is shaky at present, because so much mining just got kicked out of china. can they swing a change to bitcoin issuance?
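stepping back to the immutability point: the blockchain is “probably immutable” because each block commits to the previous block’s hash, so rewriting history invalidates everything after it. a minimal python sketch of that tamper-evidence — plain sha-256, no proof-of-work, so only the chaining idea survives here:

```python
import hashlib, json

def make_block(prev_hash: str, txs: list) -> dict:
    """Build a block whose hash commits to its contents and its parent."""
    body = {"prev": prev_hash, "txs": txs}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain: list) -> bool:
    """Check every block's hash, and that each block points at its parent."""
    for i, block in enumerate(chain):
        body = {"prev": block["prev"], "txs": block["txs"]}
        expect = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expect:
            return False  # block contents were altered after hashing
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False  # the chain link is broken
    return True

chain = [make_block("0" * 64, ["genesis"])]
chain.append(make_block(chain[-1]["hash"], ["alice pays bob"]))
chain.append(make_block(chain[-1]["hash"], ["bob pays carol"]))
print(verify(chain))                      # True
chain[1]["txs"] = ["alice pays mallory"]  # rewrite history
print(verify(chain))                      # False — the tampered block no longer matches its hash
```

the data structure only makes tampering detectable. what the honest chain *means* is still decided by the software and the people who run it — which is the dao story in a nutshell.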
the bit where proof-of-work mining uses a country’s worth of electricity to run the most inefficient payment system in human history is finally coming to public attention, and is probably bitcoin’s biggest public relations problem. normal people think of bitcoin as this dumb nerd money that nerds rip each other off with — but when they hear about proof-of-work, they get angry. externalities turn out to matter.

ethereum is the other big crypto that’s relatively convertible to and from actual money. if ethereum can pull off a move away from proof-of-work, that will create tremendous pressure on bitcoin to change — or be hobbled politically, and risk the all-important interfaces to actual money. (that said, ethereum just put off the change from proof-of-work to … about eighteen months away, where it’s been since 2015. ah well.)

bitcoin mythology has changed before, and it’ll change again. “21 million” will be broken — if number-go-up requires it. the overriding consideration for any change to bitcoin is: it must not risk shaking the faith of the most dedicated suckers. they supply the scarce actual-dollars that keep the casino going.

this article was a commission: to write about “the fallacy that bitcoin is immutable, when apparently a vote can increase the 21m cap or do anything else.” if you have a question you really want in-depth coverage of, and that i haven’t got around to doing a deep dive on — i write for money!

your subscriptions keep this site going. sign up today!
comments on “bitcoin myths: immutability, decentralisation, and the cult of ‘21 million’”

paul j says: just to note that in the linked article — https://www.ryanresearch.co/post/bitcoin-s-third-rail-the-code-is-controlled — he refers to bip 42 without realising it was an april fool’s joke (that satoshi failed to set a limit, so we will in this bip). that has confused me a little. is the 21 million number in the original code, or added later — and if so, do you know when?

brandon says: the 21 mil limit was original. it’s a natural consequence of the reward decay rules, though. this makes it an indirect limit — which is why the april fools joke is technically not a joke: there really is no coded upper limit.

des says: bip-42 was not a joke, just tongue-in-cheek. it fixed a bug that would have caused the subsidy (aka block reward) to roll back around to 50 btc after 64 halving intervals, or ~256 years. and to answer your question, the issuance limit is the sum of subsidies up to the point where the block reward goes to 0 (after 33 halvings, around the year 2140) — specifically 2,099,999,997,690,000 sat, or 20,999,999.9769 btc. the curve is exponential, so 99% of that will already have been mined by the 7th halving in ~2036.
this can be changed, but i find it highly unlikely; if btc is still a thing by then, the whales are unlikely to agree to a change that will immediately devalue their hodlings.

mark rimer says: great review on butt-history, and especially the lightning network, david! it’s the very definition of “that’s so stupid; you must be explaining it wrong!” goes right up there with the log-scale graphs that lie about data right on the same graph!

david gerard says: i had this argument with someone on twitter about lightning this week — they were convinced it was just a matter of further development, and not a matter of a design that couldn’t work unless they discovered new mathematics.

brandon says: another big myth is that bitcoin has no monetary authority. it actually does, but it is a democratic authority. all of the things that a central planner can do are possible in bitcoin as well. bitcoin wallets can be blacklisted/blocked from transacting, the coin cap can be changed, taxes/fees can be introduced — pretty much anything that a bank or government can do to a us dollar account can be done to a bitcoin wallet by the developers, nodes, and miners.

david gerard says: not so much democratic as plutocratic.

blake says: exactly — bitcoin banks and financial services will act like all other banks and financial services. in fact, they will probably be bought by the existing banks, if successful.
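des’s figure for the total issuance can be checked by summing the subsidy schedule directly — 50 btc per block, halving every 210,000 blocks, with integer division in satoshis, as in the bitcoin source:

```python
# Sum the block subsidies over all halvings: 50 BTC per block at first,
# halving every 210,000 blocks, in integer satoshis (1 BTC = 1e8 sat).
def total_supply_sat() -> int:
    subsidy = 50 * 100_000_000  # initial subsidy, in satoshis
    total = 0
    while subsidy > 0:
        total += 210_000 * subsidy
        subsidy //= 2  # integer halving, as in the bitcoin source code
    return total

print(total_supply_sat())                # 2099999997690000 sat
print(total_supply_sat() / 100_000_000)  # just under 21 million BTC
```

the loop stops when integer division drives the subsidy to zero — which is why the cap is an indirect consequence of the decay rules, as brandon notes, rather than a coded constant.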
Background: "Toward a National Archival Finding Aid Network" Planning Initiative
Building a National Finding Aid Network
Created by Adrian Turner

Summary

With crucial funding support from the US Institute of Museum and Library Services (IMLS) under the provisions of the Library Services and Technology Act (LSTA), administered in California by the State Librarian, the CDL coordinated a multi-year collaborative planning initiative (October through September) with the following key objectives:

- Identify key challenges facing finding aid aggregators.
- Uncover and validate high-level stakeholder (archivists, researchers, etc.) requirements and needs for finding aid aggregations.
- Explore the possibilities of shared infrastructure and services among current finding aid aggregators, to test the theory that collaboration will benefit our organizations, contributors, and end users. If so, identify potential shared infrastructure and service models.
- Determine if there is collective interest and capacity to collaborate on developing shared infrastructure.
- Develop a concrete action plan for next steps based on the shared needs, interests, and available resources within the community of finding aid aggregators, including discussion of viable collaboration models and sustainability strategies.

Developing a collective understanding of requirements and challenges was a necessary first step for establishing the trajectory of any future finding aid aggregation effort. The planning initiative produced the following key deliverables.

Key deliverables

See project reports and resources.

Partners and roles

Core partners

This group comprised representatives from state and regional archival description aggregators. Members did not represent their institutions but rather leveraged their background and experience within the field to inform the group's work. Expectations for core partners:

- Identify one or more individuals who can provide complete, objective, and correct information on the past and/or present program, for incorporation into a profile of the current US archival description aggregator landscape.
- Review the profile and findings in advance of the symposium.
- Prepare for and attend the symposium.
- Serve on one or more working groups after the symposium and participate actively in the work of those groups.
- Participate actively in formulating and vetting the action plan that results from this work.
Roster

Advisory partners

This group comprised representatives from state and regional projects or programs that provide some form of support for finding aid and archival description aggregation, but who did not have capacity at this time to be core partners. This group also included entities no longer providing services, as well as those planning to do so in the future. Expectations for advisory partners:

- Identify one or more individuals who can provide complete, objective, and correct information on the past and/or present program, for incorporation into a profile of the current US archival description aggregator landscape.
- Review and optionally provide feedback on the action plan that results from this project.

Roster

Expert advisors

Advisors were invited to the project symposium to advise, inspire, and contribute to the discussions in a variety of key areas. This group included representatives from organizations that provide services that are part of the archival description ecosystem, as well as expertise in organizational development, community engagement, and sustainability. Expectations:

- Review the profile of the current US archival description aggregator landscape in advance of the symposium.
- Prepare for and attend the symposium in the spring.
- Constructively comment on the action plan that results from this project.

Roster

Project team

Jodi Allison-Bunnell (AB Consulting): Jodi's responsibilities included providing leadership and research analyst support services for surveying and establishing a profile of finding aid aggregators and related organizations; facilitating stakeholder discussions; and assisting in the creation of a final report and recommendations.
Adrian Turner (California Digital Library, Senior Product Manager): Adrian's responsibilities included coordinating project activities; supporting CDL's administration of the LSTA-funded project; and serving as a core partner, drawing from his experience with CDL's Online Archive of California (OAC) service.

Grant proposal

LSTA project proposal

The Duchess of Duke Street

Genre: drama
Created by: John Hawkesworth
Starring: Gemma Jones, Christopher Cazenove, Victoria Plucknett, John Cater, John Welsh, Richard Vernon
Theme music composer: Alexander Faris
Country of origin: United Kingdom
Original network: BBC
Original release: September – December

The Duchess of Duke Street is a BBC television drama series set in London in the early twentieth century. It was created by John Hawkesworth, previously the producer of the ITV period drama Upstairs, Downstairs.[ ] It starred Gemma Jones as Louisa Leyton Trotter, the eponymous "Duchess", who works her way up from servant to renowned cook to proprietrix of the upper-class Bentinck Hotel in Duke Street, St.
James's, in London.[ ] The story is loosely based on the real-life career of Rosa Lewis (née Ovenden), the "Duchess of Jermyn Street", who ran the Cavendish Hotel in London, which still stands at the corner of Duke Street, St. James's.[ ] When the show first aired, many people still remembered her.[ ] According to census returns, she was born in Leyton, Essex, to a watchmaker; in the series, Louisa's family name is Leyton, and her father is a clockmaker. The programme lasted for two series. Shown later on PBS in the United States, it was nominated for an Emmy Award for Outstanding Limited Series.[ ] The theme music was composed by Alexander Faris.[ ]

Plot summary

Beautiful but low-born Louisa Leyton (Gemma Jones) has one driving ambition: to become a great cook. She finds employment as a cook in the household of Lord Henry Norton (Bryan Coleman). His handsome, wealthy, aristocratic nephew, Charlie Tyrrell (Christopher Cazenove), attempts to seduce her, but she rebuffs him, refusing to be sidetracked from her ambition to become the best cook in London. Louisa manages to convince Lord Norton's sexist French chef, Monsieur Alex (George Pravda), to accept her as his apprentice.

The main characters (from left to right): Charles Tyrrell, Louisa Trotter, Major Smith-Barton, Merriman, Starr, Mary.

When Louisa is unexpectedly called upon to prepare a dinner by herself, she catches the eye of one of the guests, Edward, the Prince of Wales (Roger Hammond), who admires both her cooking and her appearance. After the dinner, Louisa is pressured into becoming Edward's mistress. Against her own wishes, she agrees to marry Lord Norton's head butler, Augustus "Gus" Trotter (Donald Burton), to maintain the appearance of respectability and to protect the royal reputation.
Gus and Louisa are given a house, and her involvement with the Prince commences. In time, Edward's mother, Queen Victoria, dies, leaving Edward to assume the throne as King Edward VII and causing him to end his relationship with Louisa. Louisa's shaky marriage to Gus becomes strained, both from her affair with the Prince and from her great success as a chef. In an effort to help him recover his pride, Louisa purchases the Bentinck Hotel and talks a reluctant Gus into managing it. Before long, abetted by his sister, he lets the authority go to his head. His arrogance alienates the staff and, more importantly, the guests. Once Louisa discovers that he has lavishly entertained his friends and driven away the guests, she throws both him and his meddling sister out. Then she discovers, to her horror, the mountain of bills he has left unpaid. With only Mary, one of Lord Norton's servants, to assist her, she sets to work to pay the debts, taking any and all cooking jobs, however humble, but finally she collapses, exhausted from overwork, in the street very early one morning. Charlie Tyrrell is passing by (leaving a late-night assignation) and takes her back to the Bentinck. Once he learns of Louisa's financial woes, he convinces her to allow him to help her, to the extent that he becomes a silent partner in the hotel. Louisa keeps one of the Bentinck's previous employees, the elderly head waiter Merriman (John Welsh). She hires the brisk, soldierly Starr (John Cater), who is always accompanied by his dog Fred, as the porter. From their former employer, Louisa takes along her loyal Welsh assistant and friend Mary (Victoria Plucknett). (In the final episode, Starr and Mary get engaged.) Rounding out the principal cast is Major Toby Smith-Barton (Richard Vernon), an upper-class retired army officer. The Major enjoys wagering on the horse races and ends up unable to pay his hotel bill.
Reluctant to "toss him out on the street" and liking the man, Louisa offers the Major a position: general adviser, bellhop, and greeter. Charlie and Louisa eventually have a very passionate romance. Infatuated with Charlie, Louisa begins to neglect both the hotel and her cooking. Recognizing what is happening, the Major steps in and has a discreet word with Charles. Knowing how much the establishment means to Louisa, Charlie leaves for an extended stay in America, giving Louisa a chance to refocus on her business. Grief-stricken at first, Louisa eventually regains her balance and makes the Bentinck a great success, only to discover that she is pregnant. Eventually, Louisa secretly gives birth to their illegitimate daughter, Lottie (Lalla Ward). Louisa accepts Charlie's suggestion that Lottie be discreetly adopted by a young couple who work on his estate. Later, Charlie and Louisa agree it is best they remain friends, not lovers. Upon the death of his father, Charlie inherits the family fortune and the title of Lord Haslemere. With Louisa's approval, Charlie marries another woman. He tells Louisa that if his marriage is to have any hope of working, he will have to be away from her. However, when Charlie's wife later dies, he and Louisa renew their relationship. They decide to postpone their wedding until the end of the First World War. Tragically, Charlie dies of a head injury received while fighting in the trenches. Louisa is grief-stricken but gradually recovers. Louisa informs the teenage Lottie of the identity of her true parents. Lottie accepts her mother's offer to take her to London. Louisa, not quite knowing what to do with her, eventually sends her to a Swiss finishing school to become a lady. When Lottie returns, she has her heart set on being a singer instead. Louisa's parents occasionally make an appearance. She is on very good terms with her ineffectual but loving father (John Rapley), but not with her critical, abrasively selfish mother (June Brown).
Late in the series, Louisa's father dies, but not before giving his modest savings to his granddaughter to help her pursue her singing career. Louisa becomes reconciled to Lottie's career choice.

Cast

Gemma Jones as Louisa Trotter (née Leyton)
Victoria Plucknett as Mary
John Welsh as Merriman
John Cater as Starr
Richard Vernon as Major Smith-Barton
Christopher Cazenove as Charles "Charlie" Tyrrell, later Lord Haslemere
Mary Healey as Mrs. Cochrane, Louisa's head cook at the Bentinck
Doreen Mantle as Mrs. Catchpole, Lord Henry's housekeeper
Sammie Winmill as Ethel, a maid at the Bentinck
Holly de Jong as Violet, another maid
Donald Burton as Augustus Trotter
June Brown as Mrs. Violet Leyton
John Rapley as Mr. Ernest Leyton
Lalla Ward as Lottie, Louisa's daughter (Ward is only eight years and six months younger than Gemma Jones; Philippa Shackleton played Lottie as a child in one episode)
Bryan Coleman as Lord Henry Norton, Louisa's employer for part of the first series, beginning in the first episode
Christine Pollon as Aunt Gwyneth, Mary's aunt and occasional seamstress at the Bentinck
George Pravda as Monsieur Alex
Roger Hammond as the Prince of Wales, later King Edward VII
Martin Shaw as Arthur, Louisa's brother
Joanna David as Lady Margaret Haslemere

Episodes

Series 1

"A Present Sovereign" (directed by Bill Bain; written by John Hawkesworth; first aired September)
Wishing to learn to be an excellent cook, the outspoken and ambitious Louisa Leyton is hired as assistant to Monsieur Alex, a French chef working in the London home of Lord Henry Norton. Resented by other kitchen staff, Louisa bonds with Mary, the target of their frequent malice, forms a friendship with head butler Gus Trotter, and fends off the attentions of the aristocratic Charlie Tyrrell. When Lord Henry unexpectedly returns during Monsieur Alex's holiday, she must cook dinner for a group that includes the Prince of Wales.
"Honour and Obey" (directed by Cyril Coke; written by John Hawkesworth and Jeremy Paul; first aired September)
When the Prince of Wales asks to borrow Louisa's services as a cook, it turns out that more than her cooking appeals to him. To preserve appearances, she is pressured into marrying Gus Trotter, who has fallen in love with her, even though she does not return his feelings. The newly married couple leave the Nortons' service for the house provided for them.

"A Nice Class of Premises" (directed by Bill Bain; written by David Butler and John Hawkesworth; first aired September)
The death of Queen Victoria ends Louisa's relationship with the Prince, now King Edward VII, forcing the Trotters to face their future together. Under the influence of his resentful sister, Nora, Gus wants to take in lodgers to make ends meet. Instead, over his objections, Louisa sets out to rebuild her career as a chef, taking on Mary as her assistant. When Gus's drinking begins leading him into indiscretion, Louisa buys the Bentinck Hotel in Duke Street and installs him as manager, with disastrous results.

"The Bargain" (directed by Cyril Coke; written by John Hawkesworth and Jack Rosenthal; first aired September)
After throwing Gus and Nora out of the hotel, Louisa sets to work to repay the debts they have allowed to accumulate, with the assistance of Mary and the hotel's long-serving, aged head waiter, Merriman. When Louisa collapses from overwork, Charlie Tyrrell steps in and becomes a silent investor in the hotel in return for a permanent private suite.

"A Bed of Roses" (directed by Bill Bain; written by John Hawkesworth; first aired October)
Equipped with hall porter Starr and his dog, Fred, the newly refurbished Bentinck reopens and, under Louisa's management, becomes a success as long-term guests such as the Major take up residence, until her developing friendship with Charlie begins to put the Bentinck at risk. When Charlie leaves for America to see his dying father, Louisa regains her focus on the business, only to discover she is pregnant.
Leaving Mary in charge, she goes into hiding to have the baby, a girl she names Charlotte. Charlie, now Lord Haslemere, finds the baby a place with a childless couple on his estate.

"For Love or Money" (directed by Raymond Menmuir; written by John Hawkesworth; first aired October)
A visiting German baron claiming to be a friend of Lord Haslemere's creates trouble at the Bentinck with some of the other guests.

"A Lady of Virtue" (directed by Cyril Coke; written by John Hawkesworth and Jeremy Paul; first aired October)
When the Liberals celebrate a by-election result at the Bentinck, a politically ambitious MP at the party sets out to seduce a visiting artist believed to be incorruptibly devoted to her husband.

"Trouble and Strife" (directed by Raymond Menmuir; written by John Hawkesworth and Jeremy Paul; first aired October)
While Louisa takes a holiday in France, Starr's estranged former common-law wife, Lizzie, blackmails him into helping her get a job as laundry maid at the Bentinck. Starr finally confides in the Major after money is found missing from a hotel guest's room.

"The Outsiders" (directed by Simon Langton; written by John Hawkesworth and Rosemary Anne Sisson; first aired October)
Charlie pushes Louisa to accept as a hotel guest an obviously mismatched, unassuming man. Seeing that Charlie is at loose ends, Louisa arranges for an art dealer to include his paintings in a gallery show, where he discovers the difference between being an amateur and being a genuine talent.

"Lottie's Boy" (directed by Cyril Coke; written by John Hawkesworth and Julia Jones; first aired November)
Mary's disapproving Aunt Gwyneth, the servants' ball, and attentive guest Marcus Carrington sow dissent between Louisa and Mary, whose abrupt departure leaves the hotel in a muddle. Louisa and the Major give their money to Carrington, a risk-taking City whizkid and the son of an old friend of the Major's, to invest in the stock market.
"No Letters, No Lawyers" (directed by Simon Langton; written by Bill Craig and John Hawkesworth; first aired November)
A discontented departing guest's refusal to pay his bill lands him in court and the hotel in the newspapers, to the displeasure of both Louisa and his newspaper-publisher uncle, who sends a journalist to work undercover at the hotel. The publicity gives the Bentinck's owners ammunition for their demand to renegotiate Louisa's lease now that the hotel is successful. With the hotel at risk, Louisa is forced to sue the paper for libel, pushing the journalist to unearth the details of her earlier life.

"A Matter of Honour" (directed by Bill Bain; written by Julian Bond and John Hawkesworth; first aired November)
Insufficient staff and a row over a racing result nearly derail a special dinner that the King's newly married former equerry asks Louisa to cook at his Ascot home for a group of guests that includes Charlie Haslemere.

"One Night's Grace" (directed by Cyril Coke; written by John Hawkesworth and Ken Taylor; first aired November)
A mysterious young woman causes dissent between Charlie and Louisa, speculation among the staff, and a visit from the police when she comes to stay with Charlie for a few days.

"Plain Sailing" (directed by Raymond Menmuir; written by John Hawkesworth and Jeremy Paul; first aired December)
Charlie is caught in the middle when Louisa buys a house at Cowes to move her entire operation to the seaside for a few weeks' summer vacation, upsetting the stuffy sailing club next door. Among Louisa's guests is the professional dancer Irene Baker, who develops a close friendship with Charlie.

"A Test of Love" (directed by Bill Bain; written by John Hawkesworth; first aired December)
The funeral of King Edward VII provides the occasion for a visit from Louisa's former employer, Lord Henry Norton. As Charlie Haslemere's sole surviving relative, he tells Louisa it is time for Charlie to marry, and he thinks he has found the right woman. One snag: as soon as Charlie's engagement to Margaret Wormold is announced, Irene Baker sues him for breach of promise of marriage.
In the ensuing court case, Louisa and Merriman are required to testify.

Series 2

"Family Matters" (directed by Bill Bain; written by Julia Jones; first aired September)
When Louisa's brother returns from his travels after a long time away, his doting mother talks her reluctant daughter into hiring him, as the hotel is short-staffed. Arthur soon antagonises the longtime servants, precipitating an ugly family quarrel after Louisa discharges him.

"Poor Catullus" (directed by Cyril Coke; written by Jeremy Paul; first aired September)
Two young men play a prank, sending Louisa love letters purportedly from their Oxford University professor. Louisa and Professor Stubbs soon turn the tables on the tricksters, but she is tempted by his somewhat inebriated offer to take her to America to start a new life. The next morning, however, he remembers nothing of the previous night. Charlie's wife, Margaret, seeks Louisa's help in searching for a house in London.

"A Lesson in Manners" (directed by Cyril Coke; written by Rosemary Anne Sisson; first aired September)
When an elderly but vibrant Bentinck regular guest dies unexpectedly, she surprises everyone by leaving nearly all of her considerable wealth to her attentive chauffeur, Prince, rather than her indifferent, spendthrift nephew, Eddie Sturgess. When Prince considers entering British society, Louisa decides it would be fun to pass him off as a gentleman, under Eddie and the Major's tutelage, despite the latter's warning. The Major is proved right in the end.

"Winter Lament" (directed by Simon Langton; written by Maggie Wadey; first aired September)
Louisa visits the Haslemeres at their country estate and finds that Margaret's strange behaviour is putting a great strain on the marriage. Louisa also sees her daughter Lottie for the first time since giving her up as a baby. Louisa tries her best to help Margaret, but in the end, the disturbed, emaciated woman wanders outside one night and is found dead the next morning.
"The Passing Show" (directed by Bill Bain; written by John Hawkesworth; first aired October)
Louisa rouses Charlie out of his depression after Margaret's death. Meanwhile, Sir Martin Mallory, a famous but aging actor, seduces Violet. When Louisa finds out, she sacks her chambermaid without a reference, forcing the poor girl to try to take up streetwalking. An understanding police inspector persuades Louisa to relent a little and provide a reference and assistance in finding more respectable employment.

"Your Country Needs You" (directed by Simon Langton; written by John Hawkesworth; first aired October)
When Great Britain enters the First World War, Louisa is ultra-patriotic, until Charlie joins the Coldstream Guards. The Major returns to active duty. In exchange for getting Starr reinstated in the army (it was established in an earlier episode that when Starr was a sergeant in the Sudan campaign, he caught his young wife with another soldier, and was imprisoned and dishonourably discharged for his subsequent actions), the Major gets Louisa to hire Gaspard, a Belgian refugee.

"The Patriots" (directed by Bill Bain; written by Bill Craig and John Hawkesworth; first aired October)
Autumn. Louisa is cold to shirkers who avoid military service, particularly Bentinck resident Mr. Appleby. A naval intelligence lieutenant uncovers an espionage ring which is reading the correspondence of high-ranking officers who frequent the Bentinck. Gaspard commits suicide to avoid arrest. His confederate, American hotel guest Brewster, is not so quick. Louisa is embarrassed to learn Appleby is also a spy, but for the British.

"The Reluctant Warrior" (directed by Simon Langton; written by John Hawkesworth and Rosemary Anne Sisson; first aired October)
Winter. The hotel sustains minor damage from a Zeppelin bomb attack. Some soldiers are assigned to deal with what may be a buried unexploded bomb very near the Bentinck. Ethel is attracted to one of them, a conscientious objector ("conshie") despised by everyone else. Among Louisa's guests are a young woman and the officer with whom she has eloped.
Tired of waiting for a demolitions expert and fed up with the sneers aimed at him, the conshie, Clive, digs around and finds there is no bomb. He also discovers the body of Fred, Starr's dog. Before Clive leaves, Ethel accepts his engagement ring. (In a later episode, Ethel is a bereaved widow who just wants her Clive back.)

"Tea and a Wad" (directed by Cyril Coke; written by John Hawkesworth; first aired October)
Spring. The hotel is empty as repairs are being made. The Major persuades Louisa to set up a canteen in Boulogne for troops with no place to go while waiting for transportation back to Britain. A helpful soldier eventually confesses to Mary that he is a deserter. She consults the Major, who comes up with a plan to get him out of his predicament. A rival do-gooder creates trouble for Louisa. The general in charge turns out to be a Bentinck regular. Louisa agrees to marry Charlie after the war ends.

"Shadows" (directed by Bill Bain; written by John Hawkesworth and Jeremy Paul; first aired November)
Summer. Louisa opens the Bentinck to recuperating soldiers. Mary is attracted to Brian, one of the convalescents. Charlie receives a head wound and returns to the hotel for a rest. The wound takes a turn for the worse, and he begins to go blind. An eminent brain surgeon advises against surgery so soon after the previous operation. Charlie suddenly dies while sitting in Louisa's parlour as she is talking to him.

"Where There's a Will" (directed by Cyril Coke; written by John Hawkesworth and Julia Jones; first aired November)
Louisa is in dire financial straits, but stubbornly refuses to cash the cheques of the soldiers who stayed at her hotel. Grief-stricken over Charlie's death, she decides to sell up the Bentinck. Her father makes her read a letter from Charlie, which gives her renewed resolve to go on. With the money he left her in his will, she sets about restoring the hotel to its former glory.
"The Legion of the Living" (directed by Gerry Mill; written by John Hawkesworth and Maggie Wadey; first aired November)
Louisa returns to the Haslemeres' estate for a memorial service, and the truth of Lottie's parentage is exposed. Louisa offers to take Lottie back to London.

"Lottie" (directed by Bill Bain; written by John Hawkesworth and Jeremy Paul; first aired December)
Louisa tells Mary, Starr, and Merriman that Lottie is her daughter. She has the Major show the young woman the sights of London, but inevitably, rumours swirl around her. Brian returns, embittered by the war and irritated by Mary's attempts to help him. He leaves, breaking Mary's heart. Finally, Louisa decides to send her daughter to a Swiss finishing school.

"Blossom Time" (directed by Gerry Mill; written by John Hawkesworth and Jeremy Paul; first aired December)
Lottie returns on a school holiday, bringing with her one of her teachers, Miss Olive Bradford. The Major and Miss Bradford fall in love and become engaged. Lottie falls for a handsome young man, but when she incautiously reveals her parentage, his interest in her vanishes.

"Poor Little Rich Girl" (directed by Cyril Coke; written by John Hawkesworth and Julia Jones; first aired December)
Louisa and Lottie clash over her future. Lottie is determined to become a singer. She meets her grandfather for the first time. He becomes her ally, giving her his meagre life savings before he dies. Lottie moves in with her grandmother, who tells Louisa she understands Lottie, whereas she could never fathom Louisa.

"Ain't We Got Fun" (directed by Bill Bain; written by John Hawkesworth; first aired December)
Louisa permits an American woman to write her biography, though she insists on final approval of the result. Merriman wins a newspaper contest and samples life, even sauntering into the Bentinck for a drink, before his money runs out and he returns to his usual post. Mary and Starr inform Louisa that they have become engaged. Louisa is furious at first, as she has a policy of not employing married people, but eventually gives in.

References

^ "John Hawkesworth".
The Daily Telegraph. London: TMG. October.
^ "BFI Screenonline: Hawkesworth, John. Biography". www.screenonline.org.uk.
^ "Rosa Lewis, Cavendish Hotel London, Duchess of Duke Street, Rosa Lewis Hotel London". thecavendish-london.co.uk. Retrieved May.
^ "Reasons to watch The Duchess of Duke Street - Drama Channel". drama.uktv.co.uk.
^ "The Duchess of Duke Street II Masterpiece Theatre | Television Academy". emmys.com. Retrieved May.
^ "Alexander Faris, composer of TV theme tunes, obituary". The Independent. October.

External links

The Duchess of Duke Street at IMDb
sophie treadwell

sophie anita treadwell (october , – february , ) was an american playwright and journalist of the first half of the th century. she is best known for her play machinal, which is often included in drama anthologies as an example of an expressionist or modernist play. treadwell wrote dozens of plays, several novels, as well as serial stories and countless articles that appeared in newspapers. in addition to writing plays for the theatre, treadwell also produced, directed and acted in some of her productions. the styles and subjects of treadwell's writings are vast, but many present women's issues of her time, subjects of current media coverage, or aspects of treadwell's mexican heritage.[ ]

heritage and childhood

sophie anita treadwell was born in in stockton, california.[ ] between and , treadwell's father, alfred treadwell, deserted her and her mother, nettie fairchild treadwell, and moved to san francisco.[ ][ ] although treadwell originally excelled at school, after her father left she struggled, which others have attributed to the frequency with which she and her mother relocated.[ ] while treadwell primarily lived with her mother, she would occasionally spend summers in san francisco with her father.
during these visits, treadwell was first exposed to theatre; she witnessed the famous actresses helena modjeska and sarah bernhardt in the merchant of venice and phèdre, respectively. in , treadwell and her mother moved to san francisco.[ ] although treadwell's father was also born in stockton, ca, he spent most of his formative years in mexico with his native-born mother.[ ] both treadwell's paternal grandmother and great-grandmother were mexican women of spanish descent.[ ] treadwell's father had a catholic education and was fluent in five languages.[ ] treadwell's strong female role model was her grandmother anna gray fairchild, a scottish immigrant, who managed the family's large ranch in stockton after the death of her husband.[ ] traces of treadwell's heritage, both mexican and european, can be gleaned from her works, as can references to her parents' troubled marriage and her time spent at the ranch in stockton.[ ]

university and early career

treadwell received her bachelor of letters in french from the university of california at berkeley, where she studied from to .[ ][ ] at berkeley, treadwell became very involved with the school's extracurricular drama and journalism activities, serving as the college's correspondent for the san francisco examiner.[ ] due to financial pressure, treadwell had to work several jobs during her studies: receiving additional training in shorthand and typing, teaching english as a second language in the evenings, and working in the circulation department of the san francisco call.[ ] it was also during this time that she first began to write: early drafts of shorter plays, songs, and short fictional stories.[ ] during college, treadwell had her first brushes with mental illness, a variety of nervous conditions that would plague her and lead to several extended hospitalizations throughout her life.[ ][ ] after college, treadwell moved to los angeles where she worked for a brief time as a
vaudeville singer.[ ] she then studied acting and was mentored by the renowned polish actress helena modjeska, whose memoirs she was hired to write in .[ ] in , treadwell married william o. mcgeehan, better known as 'mac', a beloved sports writer for the san francisco bulletin.[ ][ ]

new york

in , treadwell moved to new york,[ ] following her husband, who had already made the cross-country move for his career.[ ] in new york, treadwell joined the lucy stone league of suffragists.[ ] treadwell participated in a -mile march with the league, which delivered a petition on women's suffrage to the legislature of new york.[ ] treadwell maintained a separate residence from her husband, an idea encouraged by the league.[ ] her marriage was said to be one of mutual independence and acceptance of differing interests.[ ] in new york, treadwell befriended and became associated with many well-known modernist personalities and modern artists of the time, notably louise and walter arensberg, who ran a new york salon, and the painter marcel duchamp. in keeping with treadwell's advocacy for sexual independence, birth control rights, and increased sexual freedom for women, she had a brief affair with the artist maynard dixon between and .[ ][ ] treadwell reached the peak of her professional career in journalism and in theatre in new york in the s.[ ] treadwell attended lectures and completed an extensive study with richard boleslavsky of the moscow art theatre, which proved both influential and motivational for treadwell's varied theatrical pursuits.[ ] treadwell became the subject of media controversy in the mid- s for a drawn-out dispute with the famous john barrymore; barrymore attempted to produce a play about edgar allan poe, supposedly written by his wife michael strange, which borrowed heavily from a manuscript that treadwell had written and shared with him years prior.
treadwell brought a lawsuit against barrymore for stoppage of the play and won, although she was criticized heavily in the media.[ ] treadwell lectured and advocated openly for authors' rights and was the first american playwright to win royalty payments for a play production from the soviet union.[ ] in addition to her accomplishments, treadwell traveled often with 'mac' across the united states, europe, and northern africa. treadwell's husband died in due to heart complications, while they were on vacation in the state of georgia.[ ]

broadway

treadwell set herself apart from many female writers of her day by pursuing commercial productions of her works on broadway. seven of treadwell's plays, listed below, appeared on the great white way between and .[ ] gringo was treadwell's first play to be produced on broadway.[ ] most of these plays treadwell only wrote, but she also produced lone valley and o, nightingale, the latter of which she even staged.[ ] new york became the setting for the majority of treadwell's plays.[ ]

gringo
o nightingale
machinal
ladies leave
lone valley
plumes in the dust
hope for a harvest

critics often negatively judged treadwell's plays as having poorly developed plots, unsympathetic characters, or objectionable themes.[ ] treadwell was also known for having tense relationships with producers because she was reluctant to accept their feedback and edit her work.[ ]

later years

in the s and s treadwell turned to writing mostly fiction in the form of short stories and novels, a shift that may have been influenced by the lack of success of her broadway ventures.[ ][ ] treadwell lived for a time as an expatriate in vienna, austria, as well as in torremolinos in southern spain.[ ] when sophie returned to the u.s. she lived in newtown, connecticut, but also spent time in mexico and stockton. in , treadwell adopted a young german boy, whom she named william.
sophie retired in the mid- s to tucson, arizona, where she spent her final years.[ ][ ] after a brief hospital stay, treadwell died on february , .[ ]

plays and novels

treadwell is credited with writing at least plays,[ ] numerous serials and journalistic articles, short stories, and several novels. the subjects of her writings are as diverse as the mediums she was writing in. many of treadwell's works are difficult to obtain, and the majority of her plays have never been produced. many of treadwell's plays follow the traditional late nineteenth-century well-made play structure, but some share the more modern style and feminist concerns treadwell is known for, including her often-anthologized machinal.[ ] although treadwell's plays primarily feature lead female characters, the women presented vary greatly in their behavior, beliefs, and social status.[ ] some of treadwell's plays contain hints of autobiography, from treadwell's heritage to her extra-marital affair.[ ] below is a chronological list of her known works.
a man's own: a one-act written when treadwell was only years old; the play is set in an office in chicago, il and concerns economics and family matters[ ]
le grand prix: sophie's first full-length play[ ]
the right man[ ]
the settlement: unpublished[ ][ ]
the high cost:[ ] begun in under the title constance darrow[ ]
an unwritten chapter: a one-act later renamed sympathy; a stage adaptation of the serial how i got my husband and how i lost him.[ ] sympathy was the first of treadwell's plays to be produced, in san francisco.[ ] this -character one-act is set in an apartment; the characters are jean traig, a performer, mori, her servant, and a man; the play has romantic and economic themes[ ]
guess again: -character one-act[ ]
romance: set in a new york apartment
to him who waits: one-act[ ][ ]
his luck: one-act[ ]
la cachucha: one-act[ ] set in a ny apartment; the characters are the dancer seniorita viviana ybarra y de la guerra, the businessman john s. watkins, and the musician senor alvaredos. the subject matter of the play is both domestic and romantic[ ]
john doane: a one-act[ ] featuring six characters, with an abstract setting and family, romantic, and social subject matter[ ]
claws:[ ] treadwell wrote, produced, and acted in this play's first production[ ]
trance: -character comedic one-act[ ] set in a house in london, england. the subject is listed as family and the characters are madame de vere, charlie, and john randolphe[ ]
madame bluff: comedy[ ]
the answer: -act, -character play set in an apartment in new york city. the subject matter of the play is war and domestic matters, and several of the characters represent military personnel[ ]
the eye of the beholder: one-act, previously titled mrs. wayne.[ ][ ] treadwell copyrighted this drama in , a historical accolade for a female playwright at the time.[ ] this -character drama is set in a rural house and its subject matter revolves around family matters and romance.[ ] produced in at the american century theater in arlington, va[ ]
rights: based on the life of mary wollstonecraft[ ]
gringo: ran on broadway december–january;[ ] this -act drama is set in a mine and camp in mexico and is loaded with violence, interracial romance (white and hispanic), family, and intellectual matters. occupations listed for this -character play include journalist, miner, servant, homemaker, criminal, laborer, and musician.[ ] treadwell drew heavily on her recent interview with pancho villa for the content of this play[ ]
o nightingale: a comedy, originally titled loney lee, that starred helen hayes and ran on broadway april–may.[ ][ ] treadwell also produced the show's transfer to broadway and played a supporting role onstage under the pseudonym constance elliot[ ]
machinal: titled the life machine in the london premiere,[ ] it premiered on broadway september–november and was revived on broadway january–march.[ ] the story of machinal is told over scenes by characters.[ ] several distinct settings appear in the play: office, house, hotel, hospital, bar, courtroom, and prison.[ ] the main character in the play is the 'young woman,' played in the broadway revival by rebecca hall.[ ] none of the characters are named; all are identified by their station or occupation. the story is loosely based on the murder trial of ruth snyder. this play has also been revived off broadway and on television and is, by far, treadwell's best-known work[ ]
ladies leave: a comedy; ran on broadway in october.[ ] a -character play in -acts set in a nyc apartment which deals with family, domestic, and social matters, as well as romance. occupations represented in the play include doctor, servant, publisher, and editor[ ]
the island: a comedy set in rural mexico with mostly romantic and socially centered subject matter. a writer, a servant, and military personnel are represented among the eight characters in the play[ ]
lusita: a novel with a focus on women in the mexican revolution, informed by treadwell's interview with pancho villa almost a decade prior[ ]
lone valley: written, staged, and produced by treadwell; ran on broadway in march,[ ] after six years of workshopping and edits by treadwell[ ]
intimations for saxophone: produced in at arena stage in washington, d.c.[ ]
plumes in the dust: ran on broadway in november,[ ] starring henry hull portraying edgar allan poe[ ]
hope for a harvest: an unpublished novel and a play; the play was later adapted for tv broadcast in .[ ] harvest was treadwell's last play to premiere on broadway during her lifetime; it ran november–december at the guild theatre, featuring fredric march and florence eldridge.[ ][ ] the genre is noted as drama, and the play is set in a rural house in treadwell's own san joaquin valley, california. the subjects range from economics and social issues to family and romance.[ ] this play is largely autobiographical, and discusses the loss of the american work ethic and problems of racism during world war ii.[ ] the attack on pearl harbor, just ten days after harvest's opening, is believed to have severely affected american audiences' ability to sympathize with the play's message, leading to the play closing shortly thereafter[ ]
highway: a -act comedy set in a restaurant in rural texas. the play features characters of white, hispanic, and american indian races in a myriad of occupations, with subject matter ranging from economics and family to health and romance.[ ] highway was produced originally in pasadena, california and remade for television broadcast in the mid- s[ ]
the last border: set in the federal district of mexico city; the characters are white or hispanic. the play's subjects include violence, romance, and social issues[ ]
judgement in the morning: a -act play set both on the upper east side and the upper west side of new york city, with a multiracial cast who portray a range of socioeconomically divided characters, from a lawyer and a politician to a laborer and a criminal[ ]
gary: a -act drama set in an upper west side apartment in new york city, with topics including socioeconomic and family matters, romance, and violence. the four characters are wilma, a laborer; peggy, a prostitute; garry, a criminal; and dave, a journalist; the abstract notes that the characters feature both heterosexual and bisexual orientations.[ ] the world premiere of this play will be presented at the white bear theatre in london from june – , .
one fierce hour and sweet: a novel published by appleton-century-crofts[ ]
woman with lilies:[ ] treadwell's final play, produced under the title now he doesn't want to play at the university of arizona[ ]

journalism

treadwell's first job as a journalist was with the san francisco bulletin, where she was hired in as a feature writer and theatre critic.[ ] she interviewed celebrities, such as jack london, and covered several high-profile murder trials.[ ] later, when living in new york, treadwell covered the murder trials of ruth snyder and judd gray, whose stories influenced subsequent plays.[ ][ ] treadwell also wrote two popular serial stories for the bulletin: one based on treadwell's undercover research into the charity available to women in need, for which treadwell disguised herself as a homeless prostitute; the other a fiction titled how i got my husband and how i lost him, which provided the source material for her later play sympathy.[ ] treadwell traveled to france to cover the first world war; she was the only female foreign correspondent writing from overseas at that time accredited by the state department.[ ][ ] because treadwell was not permitted access to the front lines, she volunteered as a nurse and focused her writing on the effect the war was having on the women in europe.
in , harper's weekly published her feature women in black.[ ] when sophie returned to new york, she was hired by the new york american, later renamed the new york herald tribune, where she wrote as a journalist and served as an expert on mexican-american relations and mexico.[ ][ ] in , sophie covered the end of the mexican revolution and wrote a front-page piece on the flight of mexican president don venustiano carranza.[ ][ ] in , she was the only foreign journalist permitted to interview pancho villa.[ ] that two-day interview gained treadwell notoriety in the journalism field as well as providing a basis for sophie's first broadway play gringo and her later novel lusita.[ ] in , sophie spent ten months in mexico city as a correspondent for the tribune. years later, treadwell wrote for the tribune about her visit to post-war germany.[ ][ ]

contemporaries and context

although treadwell was writing during the height of the little theatre movement in the united states, her desire to produce her works on broadway for mainstream audiences set her apart from her contemporaries. treadwell was only peripherally involved in the movement, through her work with the provincetown players during their early existence.[ ] noteworthy women playwrights writing in the same era as treadwell are:[ ][ ] zoe akins, djuna barnes, rachel crothers, zona gale, alice gerstenberg, susan glaspell, georgia douglas johnson, edna st. vincent millay, and gertrude stein. through the use of various 'isms' these playwrights explored new and alternative ways of presenting women's lives in their plays.[ ] treadwell remained widely unknown and little discussed in the world of theatre scholarship until select feminist scholars revived interest in her works following revivals of machinal in by the new york shakespeare festival and in by the royal national theatre in london.[ ]

resources and further reading

the majority of treadwell's works are stored at the university of arizona library special collections, and the rest at the billy rose theatre collection at the new york public library. the rights to treadwell's works were passed on in her will to the roman catholic diocese of tucson: a corporation sole.[ ][ ] one who wishes to obtain the rights to treadwell's plays can address an enquiry to: fiscal and administrative services, diocese of tucson, po box , tucson, az . proceeds earned from the production or printing of treadwell's works are used to benefit native american children in arizona. further biographical information and critical analysis about treadwell may be found in:

"broadway's bravest woman: selected writings of sophie treadwell". edited and with introductions by jerry dickey and miriam lopez-rodriguez. southern illinois university press, .
"susan glaspell and sophie treadwell". barbara ozieblo and jerry dickey. routledge, .
dickey, jerry ( ). "the expressionist movement: sophie treadwell". in murphy, brenda (ed.). the cambridge companion to american women playwrights. cambridge university press. pp.  – . isbn  .

all of treadwell's plays are published electronically in "north american women's drama" through the academic database publisher alexander street press. access to this resource is available by purchase directly through asp's website, or through library access at many academic institutions that have purchased a license to the database.
in addition, machinal is (or was) included in the following anthologies:[ ]

twenty-five best plays of the modern american theatre by john gassner (now out of print, originally published in )
plays by american women: – , judith barlow's anthology, published in
norton anthology of drama
north american women's drama
the routledge drama anthology and sourcebook
plays and performance texts by women (manchester university press)

references

^ "the sophie treadwell collection". special collections, university of arizona library. retrieved may , .
^ dickey, jerry; lopez-rodriguez, miriam ( ). broadway's bravest woman. southern illinois university. isbn  - - - .
^ ozieblo, barbara; dickey, jerry ( ). susan glaspell and sophie treadwell. routledge. isbn  - - - .
^ "sophie treadwell". internet broadway database. the broadway league. retrieved march , .
^ "north american women's drama". alexander street press.
^ brockett, oscar g.; hildy, franklin j. ( ). history of the theatre (foundation ed.). boston: allyn and bacon. p.  . isbn  - - - .

external links
the sophie treadwell collection
north american women's drama, alexander street
the literary encyclopedia article, sophie treadwell
news – duraspace.org

fedora migration paths and tools project update: july
this is the latest in a series of monthly updates on the fedora migration paths and tools project – please see the previous post for a summary of the work completed up to that point. this project has been generously funded by the imls. we completed some final performance tests and optimizations for the university...

all aboard for fedora .
as you may have heard, earlier this month the fedora . release candidate was announced, which means we are moving full steam ahead toward an official full production release of the software. after long years of laying down the tracks to guide us toward a shiny new fedora, this train is nearly ready to...

fedora migration paths and tools project update: may
this is the eighth in a series of monthly updates on the fedora migration paths and tools project – please see last month's post for a summary of the work completed up to that point. this project has been generously funded by the imls.
the university of virginia has completed their data migration and successfully...

fedora migration paths and tools project update: april
this is the seventh in a series of monthly updates on the fedora migration paths and tools project – please see last month's post for a summary of the work completed up to that point. this project has been generously funded by the imls. born digital has set up both staging and production servers for...

meet the members
welcome to the first in a series of blog posts aimed at introducing you to some of the movers and shakers who work tirelessly to advocate, educate and promote fedora and other community-supported programs like ours. at fedora, we are strong because of our people, and without individuals like this advocating for continued development we...

fedora migration paths and tools project update: january
this is the fourth in a series of monthly updates on the fedora migration paths and tools project – please see last month's post for a summary of the work completed up to that point. this project has been generously funded by the imls. the grant team has been focused on completing an initial build...

fedora migration paths and tools project update: december
this is the third in a series of monthly updates on the fedora migration paths and tools project – please see last month's post for a summary of the work completed up to that point. this project has been generously funded by the imls. the principal investigator, david wilcox, participated in a presentation for cni...
fedora alpha release is here
today marks a milestone in our progress toward fedora – the alpha release is now available for download and testing! over the past year, our dedicated fedora team, along with an extensive list of active community members and committers, have been working hard to deliver this exciting release to all of our users. so...

fedora migration paths and tools project update: october
this is the first in a series of monthly blog posts that will provide updates on the imls-funded fedora migration paths and tools: a pilot project. the first phase of the project began in september with kick-off meetings for each pilot partner: the university of virginia and whitman college. these meetings established roles and responsibilities...

fedora in the time of covid-
the impacts of coronavirus disease are being felt around the world, and access to digital materials is essential in this time of remote work and study. the fedora community has been reflecting on the value of our collective digital repositories in helping our institutions and researchers navigate this unprecedented time. many member institutions have...

proof of stake

proof of stake (pos) protocols are a class of consensus mechanisms for blockchains that work by selecting validators in proportion to their quantity of holdings in the associated cryptocurrency.
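the stake-weighted selection just described can be sketched in a few lines of python. this is only a toy illustration of the idea, not any real chain's algorithm: the names and balances are invented, and production systems add verifiable randomness, slashing, and much else.

```python
import random

# toy validator set: address -> token balance (names and numbers invented)
stakes = {"alice": 50, "bob": 30, "carol": 20}

def pick_validator(stakes, rng=random):
    """pick one validator with probability proportional to its stake."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

# a "coin age" variant (see the variants section) would instead weight
# each validator by balance * time_held rather than balance alone.

rng = random.Random(42)  # deterministic generator for the demo
draws = [pick_validator(stakes, rng) for _ in range(10_000)]
# alice holds 50% of the total stake, so she wins roughly half the draws
print(draws.count("alice") / len(draws))
```

the key property is that an attacker's chance of being chosen to validate grows only with the fraction of tokens they control, which is exactly the cost the protocol wants to impose.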
unlike a proof of work (pow) protocol, pos systems do not incentivize extreme amounts of energy consumption. the first functioning use of pos for cryptocurrency was peercoin in . the biggest proof-of-stake blockchain by "market cap" is cardano.

description

for a blockchain transaction to be recognized, it must be appended to the blockchain. validators carry out this appending; in most protocols, they receive a reward for doing so.[ ] for the blockchain to remain secure, it must have a mechanism to prevent a malicious user or group from taking over a majority of validation. pos accomplishes this by requiring that validators hold some quantity of blockchain tokens, so that potential attackers must acquire a large fraction of the tokens on the blockchain to mount an attack.[ ] proof of work, another commonly used consensus mechanism, uses a validation of computational prowess to verify transactions, requiring a potential attacker to acquire a large fraction of the computational power of the validator network.[ ] this incentivizes consuming huge quantities of energy; pos is tremendously more energy-efficient.[ ] in , elon musk and bill gates were seen as damaging sentiment towards proof-of-work blockchains such as bitcoin and ethereum by publicising their massive energy consumption. the efficiency of proof-of-stake coins such as cardano, eos, bitgreen and stellar led to them being described as "green coins".[ ][ ][ ][ ]

attacks

pos protocols can suffer from the nothing-at-stake problem, where validator nodes validate conflicting copies of the blockchain because there is minimal cost to doing so, and a smaller chance of losing out on rewards by validating a block on the wrong chain.
if this persists, it can allow double-spending.[ ] this can be mitigated by penalizing validators who validate conflicting chains[ ] or by structuring the rewards so that there is no economic incentive to create conflicts.[ ]

variants

variations of stake definition

the exact definition of "stake" varies from implementation to implementation. for instance, some cryptocurrencies use the concept of "coin age", the product of the number of tokens and the amount of time that a single user has held them, rather than merely the number of tokens, to define a validator's stake.[ ]

delegated proof of stake

delegated proof of stake (dpos) systems separate the roles of the stake-holders and validators, by allowing stake-holders to delegate the validation role.[ ]

implementations

the first functioning implementation of a proof-of-stake cryptocurrency was peercoin, introduced in .[ ] other cryptocurrencies, such as blackcoin, nxt, cardano, and algorand followed.[ ] however, as of , pos cryptocurrencies were still not as widely used as proof-of-work cryptocurrencies.[ ] the biggest proof-of-stake blockchains by "market cap" in were cardano, polkadot and solana. other prominent pos platforms include avalanche,[ ] tron, eos, algorand, and tezos.[ ][ ][ ] there have been repeated proposals for ethereum to switch from a pow to a pos mechanism.[ ][ ] in april , the ethereum foundation announced that it planned to switch to a pos system by the end of .[ ] however, switching to a pos system is a substantial change, and progress has not been steady. william entriken, an ethereum developer, said: "you have to switch to proof of stake. proof of work should be illegal." however, the change has "always been three months away. these things don't just happen immediately."[ ]

references

^ saleh, fahad ( - - ). "blockchain without waste: proof-of-stake". the review of financial studies. ( ): – . doi: . /rfs/hhaa . issn  - .
^ tasca, paolo; tessone, claudio j. ( - - ).
"a taxonomy of blockchain technologies: principles of identification and classification". ledger. . doi: . /ledger. . . issn  - . ^ zhang, rong; chan, wai kin (victor) ( ). "evaluation of energy consumption in block-chains with proof of work and proof of stake". journal of physics: conference series. ( ): . bibcode: jphcs a z. doi: . / - / / / . issn  - . ^ leyes, kevin (june , ). "elon musk's tweet radically changed the crypto game". entrepreneur.com. ^ sorkin, andrew ross (march , ). "why bill gates is worried about bitcoin". the new york times. ^ partridge, joanna (june , ). "bitcoin price back above $ , after elon musk comments". the guardian. ^ kaplan, ezra (may , ). "cryptocurrency goes green: could 'proof of stake' offer a solution to energy concerns?". nbc news. ^ a b c xiao, y.; zhang, n.; lou, w.; hou, y. t. ( ). "a survey of distributed consensus protocols for blockchain networks". ieee communications surveys and tutorials. ( ): – . arxiv: . . doi: . /comst. . . issn  - x. s cid  . ^ li, wenting; andreina, sébastien; bohli, jens-matthias; karame, ghassan ( ). "securing proof-of-stake blockchain protocols". in garcia-alfaro, joaquin; navarro-arribas, guillermo; hartenstein, hannes; herrera-joancomartí, jordi (eds.). data privacy management, cryptocurrencies and blockchain technology. lecture notes in computer science. cham: springer international publishing. pp.  – . doi: . / - - - - _ . isbn  - - - - . ^ gecgil, tezcan. " cryptos to buy for their potentially profitable partnerships". www.nasdaq.com. retrieved - - . ^ ashworth, will (july , ). "solana vs. cardano: which is the better ethereum killer?". investor place. ^ hissong, samantha (july , ). "the crypto world is getting greener. is it too little too late?". rolling stone. ^ nguyen, cong t.; hoang, dinh thai; nguyen, diep n.; niyato, dusit; nguyen, huynh tuong; dutkiewicz, eryk ( ). "proof-of-stake consensus mechanisms for future blockchain networks: fundamentals, applications and opportunities". 
IEEE Access.
^ Sparkes, Matthew. "NFT Developers Say Cryptocurrencies Must Tackle Their Carbon Emissions". New Scientist.
^ Lau, Yvonne. "Ethereum Founder Vitalik Buterin Says Long-Awaited Shift to 'Proof-of-Stake' Could Solve Environmental Woes". Forbes.
Netscape Navigator

This article is about the original Netscape Navigator product. For the final release, see Netscape Navigator 9. For a full list of Netscape software releases, see Netscape (web browser).

(Infobox: Developer: Netscape. Initial release: December 1994. Type: web browser. Website: archive.netscape.com.)

Netscape Navigator was a proprietary web browser, and the original browser of the Netscape line.
It was the flagship product of the Netscape Communications Corporation and was the dominant web browser in terms of usage share in the 1990s, but by the early 2000s its use had almost disappeared.[ ] This was partly because the Netscape Corporation (later purchased by AOL) did not sustain Netscape Navigator's technical innovation in the late 1990s.[ ]

The business demise of Netscape was a central premise of Microsoft's antitrust trial, wherein the court ruled that Microsoft's bundling of Internet Explorer with the Windows operating system was a monopolistic and illegal business practice. The decision came too late for Netscape, however, as Internet Explorer had by then become the dominant web browser in Windows.

The Netscape Navigator web browser was succeeded by the Netscape Communicator suite in 1997. Netscape Communicator's 4.x source code was the base for the Netscape-developed Mozilla Application Suite, which was later renamed SeaMonkey.[ ] Netscape's Mozilla suite also served as the base for a browser-only spinoff called Mozilla Firefox.

The Netscape Navigator name returned in 2007 when AOL announced a new version of the Netscape series of browsers, Netscape Navigator 9. AOL later canceled its development but continued supporting the web browser with security updates until March 2008. AOL allows downloading of archived versions of the Netscape Navigator web browser family and maintains the Netscape website as an internet portal.[ ]

History and development

Origin

(Image: Mosaic Netscape, a pre-1.0 version, showing the Mozilla mascot and the Mosaic logo in the top-right corner.)

Netscape Navigator was inspired by the success of the Mosaic web browser, which was co-written by Marc Andreessen, a part-time employee of the National Center for Supercomputing Applications at the University of Illinois.
After Andreessen graduated in 1993, he moved to California and there met Jim Clark, the recently departed founder of Silicon Graphics. Clark believed that the Mosaic browser had great commercial possibilities and provided the seed money. Soon Mosaic Communications Corporation was in business in Mountain View, California, with Andreessen as a vice-president. Since the University of Illinois was unhappy with the company's use of the Mosaic name, the company changed its name to Netscape Communications (suggested by product manager Greg Sands[ ]) and named its flagship web browser Netscape Navigator.

Netscape announced in its first press release (October 1994) that it would make Navigator available without charge to all non-commercial users, and beta versions were indeed freely downloadable, with the full version 1.0 available in December 1994. Netscape's initial corporate policy regarding Navigator claimed that it would make Navigator freely available for non-commercial use, in accordance with the notion that internet software should be distributed for free.[ ] However, within two months of that press release, Netscape apparently reversed its policy on who could freely obtain and use the browser, mentioning only that educational and non-profit institutions could use it at no charge.[ ] The reversal was complete with the availability of a later beta, whose press release stated that the final release would be available at no cost only for academic and non-profit organizational use. Gone was the notion expressed in the first press release that Navigator would be freely available in the spirit of internet software.
Some security experts and cryptographers found that all released Netscape versions had major security problems, including crashes triggered by long URLs and weaknesses in its encryption keys.[ ][ ]

The first few releases of the product were made available in "commercial" and "evaluation" versions; for example, version "1.0" and version "1.0N". The "N" evaluation versions were completely identical to the commercial versions; the letter was there to remind people to pay for the browser once they felt they had tried it long enough and were satisfied with it. This distinction was formally dropped within a year of the initial release, and the full version of the browser continued to be made available for free online, with boxed versions available on floppy disks (and later CDs) in stores, along with a period of phone support. During this era, "Internet Starter Kit" books were popular; they usually included a floppy disk or CD containing internet software, and this was a popular means of obtaining Netscape's and other browsers.[ ] Email support was initially free and remained so for a year or two, until the volume of support requests grew too high.

During development, the Netscape browser was known by the code name Mozilla, which became the name of a Godzilla-like cartoon dragon mascot used prominently on the company's web site. The Mozilla name was also used as the User-Agent in HTTP requests by the browser. Other web browsers claimed to be compatible with Netscape's extensions to HTML and therefore used the same name in their User-Agent identifiers so that web servers would send them the same pages as were sent to Netscape browsers. Mozilla is now a generic name for matters related to the open-source successor to Netscape Communicator and is most identified with the browser Firefox.

Rise of Netscape

When the consumer internet revolution arrived in the mid-1990s, Netscape was well positioned to take advantage of it.
With a good mix of features and an attractive licensing scheme that allowed free use for non-commercial purposes, the Netscape browser soon became the de facto standard, particularly on the Windows platform. Internet service providers and computer-magazine publishers helped make Navigator readily available.

An innovation that Netscape introduced was the on-the-fly display of web pages: text and graphics appeared on the screen as the page downloaded. Earlier web browsers would not display a page until all graphics on it had been loaded over the network connection, so a user might see only a blank page for several minutes. With Netscape, people using dial-up connections could begin reading the text of a web page within seconds of entering a web address, even before the rest of the text and graphics had finished downloading. This made the web much more tolerable to the average user.

Through the late 1990s, Netscape made sure that Navigator remained the technical leader among web browsers. New features included cookies, frames,[ ] proxy auto-config,[ ] and JavaScript (in version 2.0). Although those and other innovations eventually became open standards of the W3C and Ecma and were emulated by other browsers, they were often viewed as controversial. Netscape, according to critics, was more interested in bending the web to its own de facto "standards" (bypassing standards committees and thus marginalizing the commercial competition) than it was in fixing bugs in its products. Consumer-rights advocates were particularly critical of cookies and of commercial web sites using them to invade individual privacy.

In the marketplace, however, these concerns made little difference. Netscape Navigator remained the market leader with a dominant usage share. The browser software was available for a wide range of operating systems, including multiple versions of Windows, Macintosh, Linux, OS/2,[ ] and many versions of Unix including OSF/1, Sun Solaris, BSD/OS, IRIX, AIX, and HP-UX, and looked and worked nearly identically on every one of them.

Netscape began to experiment with prototypes of a web-based system, known internally as "Constellation", which would allow a user to access and edit his or her files anywhere across a network, no matter what computer or operating system he or she happened to be using.[ ] Industry observers forecast the dawn of a new era of connected computing. The underlying operating system, it was believed, would not be an important consideration; future applications would run within a web browser. This was seen by Netscape as a clear opportunity to entrench Navigator at the heart of the next generation of computing, and thus gain the opportunity to expand into all manner of other software and service markets.

Decline

(Chart: usage share of Netscape Navigator over time.)

With the success of Netscape showing the importance of the web (more people were using the internet due in part to the ease of using Netscape), internet browsing began to be seen as a potentially profitable market. Following Netscape's lead, Microsoft started a campaign to enter the web-browser software market. Like Netscape before it, Microsoft licensed the Mosaic source code from Spyglass, Inc. (which in turn had licensed code from the University of Illinois). Using this basic code, Microsoft created Internet Explorer (IE).

The competition between Microsoft and Netscape dominated the browser wars. Internet Explorer 1.0 (shipped in the Internet Jumpstart Kit in Microsoft Plus! for Windows 95[ ]) and IE 2.0 (the first cross-platform version of the web browser, supporting both Windows and Mac OS[ ]) were thought by many to be inferior and primitive when compared to contemporary versions of Netscape Navigator. With the release of IE 3.0, Microsoft was able to catch up with Netscape competitively, and IE 4.0 brought further improvement in terms of market share. IE 5.0 improved stability and took significant market share from Netscape Navigator for the first time.

There were two versions of Netscape Navigator 3.0: the Standard Edition and the Gold Edition. The latter consisted of the Navigator browser with e-mail, news readers, and a WYSIWYG web-page compositor; however, these extra functions enlarged and slowed the software, rendering it prone to crashing. This Gold Edition was renamed Netscape Communicator starting with version 4.0; the name change diluted its name recognition and confused users. Netscape CEO James L. Barksdale insisted on the name change because Communicator was a general-purpose client application that contained the Navigator browser.

The aging Netscape Communicator 4.x was slower than Internet Explorer 5.0. Typical web pages had become heavily illustrated, often JavaScript-intensive, and encoded with HTML features designed for specific purposes but now employed as global layout tools (HTML tables, the most obvious example, were especially difficult for Communicator to render). The Netscape browser, once a solid product, became crash-prone and buggy; for example, some versions re-downloaded an entire web page to re-render it when the browser window was resized (a nuisance to dial-up users), and the browser would usually crash when a page contained simple Cascading Style Sheets, as proper support for CSS never made it into Communicator 4.x. At the time that Communicator was being developed, Netscape had a competing technology called JavaScript Style Sheets (JSSS).
Near the end of the development cycle, it became obvious that CSS would prevail, so Netscape quickly implemented a CSS-to-JSSS converter, which processed CSS as JSSS (this is why turning JavaScript off also disabled CSS). Moreover, Netscape Communicator's browser-interface design appeared dated in comparison to Internet Explorer and to interface changes in Microsoft's and Apple's operating systems.

By the end of the decade, Netscape's web browser had lost dominance over the Windows platform, and the August 1997 Microsoft financial agreement to invest one hundred and fifty million dollars in Apple required that Apple make Internet Explorer the default web browser in new Mac OS distributions. The latest IE Mac release at that time was Internet Explorer 3.0 for Macintosh, but Internet Explorer 4 was released later that year. Microsoft succeeded in having ISPs and PC vendors distribute Internet Explorer to their customers instead of Netscape Navigator, mostly due to Microsoft using its leverage from Windows OEM licenses, and partly aided by Microsoft's investment in making IE brandable, such that a customized version of IE could be offered. Also, web developers used proprietary, browser-specific extensions in web pages. Both Microsoft and Netscape did this, having added many proprietary HTML tags to their browsers, which forced users to choose between two competing and almost incompatible web browsers.

In March 1998, Netscape released most of the development code base for Netscape Communicator under an open-source license.[ ] Only pre-alpha versions of Netscape 5 were released before the open-source community decided to scrap the Netscape Navigator codebase entirely and build a new web browser around the Gecko layout engine, which Netscape had been developing but had not yet incorporated. The community-developed open-source project was named Mozilla, Netscape Navigator's original code name.
America Online bought Netscape; Netscape programmers took a pre-beta-quality form of the Mozilla codebase, gave it a new GUI, and released it as Netscape 6. This did nothing to win back users, who continued to migrate to Internet Explorer. After the release of Netscape 6 and a long public beta test, Mozilla 1.0 was released in June 2002. The same code base, notably the Gecko layout engine, became the basis of independent applications, including Firefox and Thunderbird. In December 2007, the Netscape developers announced that AOL had canceled development of Netscape Navigator, leaving it unsupported as of March 2008.[ ][ ] Archived and unsupported versions of the browser remain available for download.

Legacy

Netscape's contributions to the web include JavaScript, which was submitted as a new standard to Ecma International. The resultant ECMAScript specification allowed JavaScript support by multiple web browsers and its use as a cross-browser scripting language, long after Netscape Navigator itself had declined in popularity. Another example is the frame tag, which is widely supported today and has been incorporated into official web standards such as the HTML 4.01 Frameset specification. In a PC World column, the original Netscape Navigator was judged the "best tech product of all time" for its impact on the internet.[ ]

See also

Timeline of web browsers
Comparison of web browsers
List of web browsers
Netscape
Mosaic
Mozilla
Lou Montulli

References

^ "Netscape CEO Barksdale's Deposition in Microsoft Suit (Text)". Bloomberg.com.
^ "Roads and Crossroads of the Internet History". Netvalley.com.
^ "Netscape's Brief History". Via the Wayback Machine.
^ Clark, Jim. Netscape Time. St. Martin's Press.
^ Drapeau, Tom. "End of Support for Netscape Web Browsers". The Netscape Blog.
^ "Greg Sands".
^ "Netscape Communications Offers New Network Navigator Free on the Internet" (press release). AOL.com.
^ "Netscape Communications Ships Release 1.0 of Netscape Navigator and Netscape Servers" (press release). AOL.com.
^ Demailly, Laurent. "Netscape (In)security (Problems)".
^ "Hackers Alert Netscape to Another Flaw". The New York Times.
^ Brown, Mark Robbin; Burnett, Steven Forrest; Evans, Tim; et al. Netscape Navigator Starter Kit. Que.
^ Ladd, Eric. Using HTML, Java, and CGI (chapter on frames).
^ "Navigator Proxy Auto-Config File Format". Netscape Navigator documentation.
^ Watson, Dave. "A Quick Look at Netscape". The Southern California OS/2 User Group.
^ Gordon, John. "Why Google Loves Chrome: Netscape Constellation". Gordon's Notes.
^ "Download Web Browser - Internet Explorer". windows.microsoft.com. Microsoft.
^ "Microsoft Internet Explorer Web Browser Available on All Major Platforms, Offers Broadest International Support" (press release). Microsoft.
^ Hamerly, Jim. "Freeing the Source: The Story of Mozilla". O'Reilly.
^ "Final Goodbye for Early Web Icon". BBC News.
^ "Curtains for Netscape - Tech Bytes". Canadian Broadcasting Company.
^ "The Best Tech Products of All Time". PCWorld.

External links

Notice for Netscape Navigator for OS/2 and Netscape Communicator for OS/2 users
The hidden features of Netscape Navigator
Netscape Browser Archive - early Netscape, SillyDog
JSON search syntax
find a class application management appoptics pingdom papertrail loggly view all application management products your solarwinds products come with a secret weapon. award-winning, instructor-led classes, elearning videos, and certifications. find a class community thwack® orange matter logicalread thwack® over , users—get help, be heard, improve your product skills visit thwack available programs solarwinds lab thwack tuesday tips (ttt) thwackcamp™ on-demand orange matter practical advice on managing it infrastructure from up-and-coming industry voices and well-known tech leaders view orange matter logicalread blog articles, code, and a community of database experts read the blog submit a ticket academy solarwinds academy see what's offered virtual classrooms calendar elearning video index solarwinds certified professional program classes find a class general office hours orion platform network performance monitor netflow traffic analyzer see all ip address manager network configuration manager server & application monitor virtualization manager guided curriculum view suggested paths elearning see all videos upgrading isn't as daunting as you may think upgrading your orion platform deployment using microsoft azure upgrading from the orion platform . to . don't let the gotchas get you how to install npm and other orion platform products upgrading the orion platform see all videos navigating the web console prepare a sam installation installing server & application monitor how to install sem on vmware customer success with the solarwinds support community new job, new to solarwinds? 
certification learn more access rights manager architecture and design database performance analyzer diagnostics netflow traffic analyzer network configuration manager network performance monitor server & application monitor onboarding & upgrading new to solarwinds learn more orion assistance program learn more upgrade resource center visit the upgrade resource center network performance monitor netflow traffic analyzer network configuration manager server & application monitor storage resource monitor virtualization manager web performance monitor log analyzer support offerings learn more professional premier premier enterprise smartstart learn more smartstart for onboarding smartstart for upgrading smartstart self-led for npm and sam what’s new learn more support offerings premier support premier support premier enterprise support smartstart smartstart for onboarding smartstart for upgrading working with support working with support learn more products network management orion platform network performance monitor netflow traffic analyzer ip address manager network configuration manager engineer's toolset view all network management products network topology mapper user device tracker voip network quality manager log analyzer enterprise operations console systems management server & application monitor virtualization manager storage resource monitor web performance monitor server configuration monitor backup view all systems management products it security security event manager access rights manager serv-u managed file transfer server serv-u ftp server patch manager identity monitor view all it security products database management database performance analyzer database performance monitor view all database management products it service management dameware remote everywhere dameware remote support dameware mini remote control service desk web help desk view all it service management products kiwi syslog server kiwi cattools ipmonitor mobile admin application 
Papertrail documentation: JSON search syntax

In addition to using the Google-esque search syntax to find things in your logs, Papertrail can parse a JSON object that appears at the end of a log line. Each line can contain arbitrary string data before the JSON. For example:

    2021-01-01 12:00:00 DEBUG {"a": 1, "b": 2}

This is a beta feature and the final syntax might change. If you have any questions or suggestions, please contact us.

Root-level search:

    json.orgid:123

    Example matches:
    exact match:      { "orgid": 123 }
    substring match:  { "orgid": 1234 }

Nested search:

    json.user.name:pete

    Example matches:
    exact match:      { "user": {"name": "pete"} }
    substring match:  { "user": {"name": "peter"} }

Exact match:

    json.orgid:"123"

    Example matches:
    exact match:  { "orgid": 123 }

Negation:

    json.cursor.tail:false AND -json.orgid:123

    Example matches:
    different value for orgid:  { "orgid": 456, "cursor": {"tail": false} }
    orgid not present:          { "cursor": {"tail": false} }
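The trailing-JSON convention described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Papertrail's implementation: it locates the JSON object that ends a log line, then resolves a dotted query path (like `json.user.name`) with substring matching.

```python
import json

# Hypothetical sketch of trailing-JSON log parsing (not Papertrail's code):
# find the JSON object at the end of the line, then resolve a dotted path.

def extract_trailing_json(line):
    """Return the dict parsed from the JSON object ending the line, or None."""
    for i, ch in enumerate(line):
        if ch == "{":
            try:
                obj = json.loads(line[i:])
            except json.JSONDecodeError:
                continue
            if isinstance(obj, dict):
                return obj
    return None

def json_query(obj, path, value):
    """Substring-match `value` against the field named by dotted `path`."""
    for key in path.split("."):
        if not isinstance(obj, dict) or key not in obj:
            return False
        obj = obj[key]
    return str(value) in str(obj)

line = 'DEBUG request finished {"user": {"name": "peter"}, "status": 200}'
payload = extract_trailing_json(line)
```

A query such as `json.user.name:pete` then corresponds to `json_query(payload, "user.name", "pete")`, which matches both exact values and substrings, mirroring the match tables above.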
An alternative approach to nucleic acid memory | Nature Communications

Article | Open Access | Published: April 2021

George D. Dickinson, Golam Md Mortuza, William Clay, Luca Piantanida, Christopher M. Green, Chad Watson, Eric J. Hayden, Tim Andersen, Wan Kuang, Elton Graugnard, Reza Zadegan & William L. Hughes

Nature Communications. Subjects: DNA computing and cryptography; DNA nanostructures; information storage; super-resolution microscopy.

Abstract

DNA is a compelling alternative to non-volatile information storage technologies due to its information density, stability, and energy efficiency. Previous studies have used artificially synthesized DNA to store data and automated next-generation sequencing to read it back. Here, we report digital nucleic acid memory (dNAM) for applications that require a limited amount of data to have high information density, redundancy, and copy number. In dNAM, data is encoded by selecting combinations of single-stranded DNA with (1) or without (0) docking-site domains. When self-assembled with scaffold DNA, staple strands form DNA origami breadboards. Information encoded into the breadboards is read by monitoring the binding of fluorescent imager probes using DNA-PAINT super-resolution microscopy. To enhance data retention, a multi-layer error correction scheme that combines fountain and bi-level parity codes is used. As a prototype, fifteen origami encoded with 'Data is in our DNA!\n' are analyzed. Each origami encodes unique data-droplet, index, orientation, and error-correction information. The error-correction algorithms fully recover the message when individual docking sites, or entire origami, are missing. Unlike other approaches to DNA-based data storage, reading dNAM does not require sequencing. As such, it offers an additional path to explore the advantages and disadvantages of DNA as an emerging memory material.

Introduction

As outlined by the Semiconductor Research Corporation, memory materials are approaching their physical and economic limits. Motivated by the rapid growth of the global datasphere, and its environmental impacts, new non-volatile memory materials are needed.
As a sustainable alternative, DNA is a viable option because of its information density, significant retention time, and low energy of operation. While synthesis and sequencing cost curves drive innovations in the field, divergent approaches to nucleic acid memory (NAM) have been constrained because of the ease of using sequencing to recover stored digital information. Here, we report digital nucleic acid memory (dNAM) as an alternative to sequencer-based DNA memory. Inspired by progress in DNA nanotechnology, dNAM uses advancements in super-resolution microscopy (SRM) to access digital data stored in short oligonucleotide strands that are held together for imaging using DNA origami. In dNAM, non-volatile information is digitally encoded into specific combinations of single-stranded DNA, commonly known as staple strands, that can form DNA origami nanostructures when combined with a scaffold strand. When formed into origami, the staple strands are arranged at addressable locations (Fig. 1) that define an indexed matrix of digital information. This site-specific localization of digital information is enabled by designing staple strands with nucleotides that extend from the origami. Extended staple strands have two domains: the first domain forms a sequence-specific double helix with the scaffold and determines the address of the data within the origami; the second domain extends above the origami and, if present, provides a docking site for fluorescently labeled single-stranded DNA imager strands. Binary states are defined by the presence (1) or absence (0) of the data domain, which is read with a super-resolution microscopy technique called DNA-points accumulation for imaging in nanoscale topography (DNA-PAINT). Unique patterns of binary data are encoded by selecting which staple strands have, or do not have, data domains.
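The presence/absence encoding just described can be illustrated with a toy helper (names and matrix entirely hypothetical): a 1 in a design matrix means the staple strand at that addressable site is extended with a docking domain, and a 0 means it is left unextended.

```python
# Toy illustration of the presence/absence encoding (hypothetical helper):
# a 1 means "extend this staple with a docking domain", a 0 means "leave
# the staple unextended".

def staples_to_extend(matrix):
    """Return (row, col) addresses whose staple gets a docking domain."""
    return [(r, c)
            for r, row in enumerate(matrix)
            for c, bit in enumerate(row)
            if bit == 1]

design = [
    [1, 0, 1],
    [0, 1, 0],
]
extended = staples_to_extend(design)  # [(0, 0), (0, 2), (1, 1)]
```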
As an integrated memory platform, data is entered into dNAM when the staple strands encoding 1 or 0 are selected for each addressable site. The staple strands are then stored directly, or self-assembled into DNA origami and stored. Editing data is achieved by replacing specific strands or the entire content of a stored structure. To read the data, the origami is optically imaged below the diffraction limit of light using DNA-PAINT (Supplementary Fig.).

Fig. 1: Binary dNAM overview. The test message (a) for optically reading dNAM was 'Data is in our DNA!'. The message was encoded and then synthesized into dNAM origami. For clarity, only one of the designs is shown in (b). The data domain colors correspond to their bit values as follows: droplet (green), parity (blue), checksum (yellow), index (red), and orientation (magenta). Site-specific localization is enabled by extending or not extending the structural staple strands of the origami to create physical representations of 1s and 0s. The presence, absence, and identity of a data strand's docking sequence defines the state of each data strand and is assessed by monitoring the binding of data imager strands via DNA-PAINT in (c). AFM images of an origami nanostructure are depicted in (d), with both the expected raft honeycomb structure (left) and data strands (right) visible; the AFM scale bar and height color scale are in nm. To 'read' the encoded message, a sample of the DNA origami mixture, containing equal concentrations of each origami, was imaged via DNA-PAINT. Two representative origami cropped from the final rendered image are shown in (e). All structures identified as origami in the rendered image were converted to a matrix of 1's and 0's corresponding to the pattern of localizations seen at each data domain in (f). The red boxes in (f) indicate errors.
The decoding algorithm performed error correction where possible in (g) and successfully retrieved the entire message when sufficient data droplets and indexes were recovered in (a). The blue boxes in (g) indicate corrected errors.

Key design features of dNAM that ensure error-free data recovery are our error-correcting algorithms. Detection of individual DNA molecules using DNA-PAINT is routinely limited by incomplete staple strand incorporation, defective imager strands, fluorophore bleaching, and/or background fluorescence. Although it is possible to improve the signal-to-noise ratio by averaging multiple images of identical structures, this approach comes at a significant cost to the read speed and information density. To overcome these challenges, we created dNAM-specific information encoding and decoding algorithms that combine fountain codes with a custom, bi-level, parity-based, and orientation-invariant error detection scheme. Fountain codes enable transmission of data over noisy channels. They work by dividing a data file into smaller units called droplets and then sending the droplets at random to a receiver. Droplets can be read in any order and still be decoded to recover the original file, so long as a sufficient number of droplets are sent to ensure that the entire file is received. We encode each droplet onto a single origami and add additional bits of information for error correction to ensure that individual droplets will be recovered, in the presence of high noise, from individual DNA origami. Together, the error-correction and fountain codes increase the probability that the message is fully recovered while reducing the number of origami that must be observed. In this report, we describe a working prototype of dNAM. As a proof of concept, we encoded the message 'Data is in our DNA!\n' into 15 origami and recovered the message using DNA-PAINT.
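The droplet idea can be illustrated with a minimal XOR-based fountain code in Python. This is a generic LT-style sketch under simplified assumptions (uniform random droplet degrees), not the paper's algorithm: each droplet is the XOR of a random subset of message chunks, and a peeling decoder recovers the chunks from droplets received in any order.

```python
import random

# Generic LT-style fountain code sketch (not the paper's implementation):
# each droplet = (set of chunk indices, XOR of those chunks).

def make_droplets(chunks, n_droplets, seed=0):
    """Build droplets as (chunk-index set, XOR of those chunks)."""
    rng = random.Random(seed)
    droplets = []
    for _ in range(n_droplets):
        idxs = rng.sample(range(len(chunks)), rng.randint(1, len(chunks)))
        value = 0
        for i in idxs:
            value ^= chunks[i]
        droplets.append((frozenset(idxs), value))
    return droplets

def decode(droplets, n_chunks):
    """Peeling decoder: substitute known chunks until droplets reach degree 1."""
    pending = [[set(idxs), val] for idxs, val in droplets]
    recovered = {}
    progress = True
    while progress and len(recovered) < n_chunks:
        progress = False
        for entry in pending:
            # subtract (XOR out) every chunk we already know
            for i in [j for j in entry[0] if j in recovered]:
                entry[0].discard(i)
                entry[1] ^= recovered[i]
            # a degree-1 droplet reveals a chunk directly
            if len(entry[0]) == 1:
                (i,) = entry[0]
                if i not in recovered:
                    recovered[i] = entry[1]
                    progress = True
    return [recovered.get(i) for i in range(n_chunks)]
```

Because droplets carry their own index sets, they can arrive in any order and with duplicates, which is the property dNAM relies on when some origami are never observed.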
We divided the message into digital droplets, each encoded by a separately synthesized origami with addressable staple strands that space out data domains approximately 10 nm apart. A single DNA-PAINT recording recovered the message, with a modest number of origami needing to be read to reach a high probability of full data retrieval. By combining the spatial control of DNA nanotechnology with our error-correction algorithms, we demonstrate dNAM as an alternative approach to prototyping DNA-based storage for applications that require a limited amount of data to have high information density, redundancy, and copy number.

Results

Recovery of a message encoded into dNAM

To test our dNAM concept, we encoded the message 'Data is in our DNA!\n' into 15 distinct DNA origami nanostructures (Fig. 1a). Each origami was designed with a unique 6 × 8 data matrix that was generated by our encoding algorithm, with data domains positioned ~10 nm apart. For encoding purposes, the message was converted to binary code (ASCII) and then segmented into overlapping data droplets that were each 16 bits. Inspired in part by digital encoding formats like QR codes, the 48 addressable sites on each origami were used to encode one of the 16-bit data droplets, as well as information used to ensure the recovery of each data droplet. Specifically, each origami was designed to contain a 4-bit binary index (0–15), twenty bits for parity checks, four bits for checksums, and four bits allocated as orientation markers (Fig. 1b). To fully recover the encoded message, we then synthesized each origami separately and deposited an approximately equal mixture of all designs onto a glass coverslip. The data domains were accessible for binding via fluorescently labeled imager probes because they faced the bulk solution and not the coverslip (Fig. 1c).
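The segmentation step can be sketched as follows. The sketch assumes a 16-bit payload and a 4-bit index purely for illustration, and omits the parity, checksum, and orientation fields carried by the real origami; it is not the paper's encoder.

```python
# Simplified droplet segmentation sketch (field widths are illustrative
# assumptions; parity/checksum/orientation bits are omitted).

def message_to_bits(msg):
    """ASCII message -> bit string, 8 bits per character."""
    return "".join(f"{ord(ch):08b}" for ch in msg)

def to_droplets(bits, payload_len=16, index_len=4):
    """Zero-pad to whole payloads, then prefix each payload with its index."""
    bits += "0" * ((-len(bits)) % payload_len)
    return [f"{n:0{index_len}b}" + bits[n * payload_len:(n + 1) * payload_len]
            for n in range(len(bits) // payload_len)]

droplets = to_droplets(message_to_bits("DNA"))
```

Because each droplet carries its own index, the origami encoding them can be read back in any order, which matches how the fountain code reassembles the message.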
High-resolution atomic force microscopy (AFM) was used in tapping mode to confirm the structural integrity of the origami and the presence of the data domains (Fig. 1d). A single field of view was recorded over many frames using DNA-PAINT, and the origami within it were identified. The super-resolution images of the hybridized imager strands were then reconstructed from blinking events identified in the recording to map the positions of the data domains on each origami (Fig. 1e). Using a custom localization processing algorithm, the signals were translated to a 6 × 8 grid and converted back to a 48-bit binary string, which was passed to the decoding algorithm for error correction, droplet recovery, and message reconstruction (Fig. 1f, g). The process enabled successful recovery of the dNAM-encoded message from a single super-resolution recording.

Quality control of dNAM

We evaluated all of the origami structures using AFM to confirm that the different designs were successfully synthesized, with their data domains in the correct locations. Automated image processing algorithms were developed to identify, orient, and average multiple images of each origami from the DNA-PAINT recording of the mixture (Fig. 2). Although the edges of the origami were more sensitive to data strand insertion failures (Supplementary Fig.), the results confirmed that all of the data domains, in each of the origami designs, were detectable in each of the three separate experiments. The AFM images further confirmed that the general shapes of all origami designs were as expected, with properly positioned data domains (Fig. 1d, Supplementary Fig.). The results indicate that the extended staple strands do not prevent the synthesis of the unique origami designs.

Fig. 2: DNA-PAINT imaging of dNAM indicates all sites are recovered in a single read. dNAM origami from a DNA-PAINT recording were identified and classified by aligning and template-matching them with the design matrixes (Design), in which all potential docking sites are shown.
Filled circles indicate sites encoded '1' (dark gray) or '0' (white). Colored boxes indicate the regions of the matrixes used for the droplet (green), parity (blue), checksum (yellow), index (red), and orientation (magenta) bits. For clarity, only the first design image includes the colored matrix sites. Averaged images of randomly selected origami, grouped by index, are depicted (DNA-PAINT).

Further AFM analysis of dNAM origami

As an additional quality-control step, we used AFM to examine origami deposited onto a glass coverslip immediately following SRM imaging. We were not able to resolve individual docking sites in these images, most likely due to the increased roughness of glass as compared to mica. However, it was possible to count the number of origami in a field of view for comparison with SRM. Comparing the densities of origami estimated from the AFM and SRM images suggested that a substantial fraction of the total origami deposited on glass have their data domains facing away from the coverslip and available for imager strand binding. To further investigate the variance in error rates between origami designs, we resynthesized the most error-prone origami design. DNA-PAINT imaging indicated that the false-negative error rate per origami of the resynthesized batch was consistent with the original experiment (Fig. 3). This suggests that at least a portion of the variance in error rates is independent of origami design and may be caused by variations in mixing, folding, and purification conditions.

Fig. 3: All dNAM data strings were recovered from a single read. (a) plots the numbers of each origami index observed in a single recording, based on template matching. The mean counts are shown as gray bars, with the percentage of the total origami indicated on the secondary axis.
In (b), the mean number of total errors (top) for each structure is shown, based on template matching. The same errors are also shown after being grouped into false negatives (middle) and false positives (bottom). (c) depicts the percent of origami passed to the decoding algorithm that had both their indexes and data strings correctly identified. In (d), the percentage of each origami decoded is plotted against the mean number of errors for each structure. (e) shows histograms of the total mean numbers of errors found in origami identified by template matching (open bars) and the decoding algorithm (gray bars); the difference between the two is plotted in blue. Mean values for three experiments are depicted in all graphs; error bars indicate ±SD. Individual data points are plotted as small black circles.

Data encoding/decoding strategy for dNAM

Our encoding approach added error-correction bits to every origami structure so that data droplets can be determined from individual origami even when data domains are incorrectly resolved, and the entire message recovered if some droplets are missed entirely. To evaluate the performance of the decoding algorithm, we examined the frequency and types of errors in the DNA-PAINT images and the effect of these errors on our decoding outcomes. We used a template-matching strategy in which each of the origami designs was considered a template, and each individual origami in the field of view was compared to these designs to find the best match. We identified the total number of origami that matched, or did not match, each design (Fig. 3a, b). We then determined the number of each design identified by the decoding algorithm when recovering the message (Fig. 3c): a process independent of template matching and blind to the droplet data contained in the DNA origami.
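The template-matching idea can be sketched as follows (hypothetical code, not the paper's pipeline): compare an observed binary matrix against each design, both as-is and rotated 180 degrees to account for origami landing in either orientation, and return the closest design together with its Hamming distance (the number of mismatched sites, i.e. read errors).

```python
# Orientation-tolerant template matching sketch (hypothetical helpers).

def hamming(a, b):
    """Count mismatched sites between two equal-shape binary matrices."""
    return sum(x != y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def rot180(m):
    """Rotate a matrix by 180 degrees (reverse rows, then each row)."""
    return [row[::-1] for row in m[::-1]]

def best_match(observed, designs):
    """Return (design index, error count) of the closest design/orientation."""
    best = None
    for idx, design in enumerate(designs):
        for candidate in (design, rot180(design)):
            d = hamming(observed, candidate)
            if best is None or d < best[1]:
                best = (idx, d)
    return best

designs = [
    [[1, 0, 1], [0, 1, 0]],
    [[1, 1, 1], [1, 0, 0]],
]
observed = [[0, 0, 1], [1, 1, 0]]  # design 1, rotated, with one flipped bit
match = best_match(observed, designs)  # (1, 1)
```

The returned error count is what is being tallied in the per-design error statistics discussed in the text.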
We observed a clear negative correlation between the number of errors detected in a specific design and the number of corresponding origami that were successfully decoded by the algorithm (Fig. 3d). The results indicate that, even though there was a low relative abundance of several origami in the deposition mixture and a substantial mean number of false-negative errors per origami across the different designs, our error-correction scheme enabled successful message recovery. False positives were much less common in our experiments (Fig. 3b). Furthermore, the mean number of errors overcome by the decoding algorithm was lower than the mean number of errors observed across all the origami, demonstrating the challenge of decoding origami when several fluorescent signals are missing (Fig. 3e). Nevertheless, the ability of our data encoding and decoding strategy to recover the message despite errors in individual origami is promising, and the results provide useful guidelines for evaluating and optimizing origami performance for future dNAM designs.

Sampling analysis of dNAM

Given the observed frequency of missing data points, we then used a random sampling approach to determine the number of origami needed to decode the 'Data is in our DNA!\n' message under our experimental conditions. We started with all the decoded binary output strings that were obtained from the single-field-of-view recordings and took random subsamples of the binary strings. We passed each random subsample of strings through the decoding algorithm and determined the number of droplets that were recovered (Fig. 4). Based on the algorithmic settings used in the experiment, we found that only a small number of successfully decoded origami were needed to recover the message with near-100% probability. This number is largely driven by the presence of origami in our sample that were prone to high error rates and thus rarely decoded correctly.
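The subsampling analysis can be sketched on synthetic data (these are not the paper's measurements): draw random subsamples of per-origami reads and measure how often every droplet index appears at least once, with failed decodes represented as `None`.

```python
import random

# Subsampling sketch on synthetic reads (illustrative only).

def coverage_probability(reads, subsample_size, needed, trials=2000, seed=0):
    """Fraction of random subsamples containing all `needed` droplet indices.

    `reads` lists the droplet index decoded from each origami read,
    with None marking reads that failed to decode.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = rng.sample(reads, subsample_size)
        if needed <= {r for r in sample if r is not None}:
            hits += 1
    return hits / trials

# Synthetic pool: three droplet indices seen 30 times each, plus 10 failures.
pool = [0] * 30 + [1] * 30 + [2] * 30 + [None] * 10
p_small = coverage_probability(pool, 5, {0, 1, 2})
p_large = coverage_probability(pool, 60, {0, 1, 2})
```

Sweeping `subsample_size` traces out the same kind of recovery-probability curve the paper reports from its single-field-of-view recordings.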
Fig. 4: Number of dNAM origami required to recover the message. The mean number of unique dNAM origami correctly decoded for randomly selected subsamples of decoded binary strings is shown. The analysis was broken out by the number of errors corrected for each origami, and three examples are plotted. Black filled circles depict the mean results for nine error corrections, which is the 'maximum allowable number of errors' parameter used in the decoding algorithm for all other analyses reported here. The horizontal lines indicate the probability of recovering the message with different numbers of unique droplets. With fourteen or more droplets, the message should always be recovered (the thick green line and above indicates a 100% chance of recovery), and with nine or fewer droplets the message will never be recovered (the thick red line and below indicates a 0% chance of recovery). Mean values for three experiments are shown; error bars indicate ±SD. Individual data points are plotted behind as smaller gray symbols.

Simulations of dNAM

Simulations were run to determine the size efficiency of the encoding scheme, as well as its ability to recover from errors. As shown in Fig. 5a, the number of origami required to encode a message of length n increases at a roughly linear rate with message size. Larger message sizes require more bits to be devoted to indexing, decreasing the number of available data bits per origami and creating a practical limit on the amount of data for the prototype described in this work. This limit can be increased by increasing the number of bits per origami. To determine the ability of the decoding and error-correction algorithm to recover information in the presence of increasing error rates, in silico origami that encoded randomly generated data were subjected to increasing bit error rates. The decoding algorithm robustly recovers the entire message for all tested message sizes when the average number of errors per origami is low
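The error-rate simulation can be sketched with toy parameters (the bit width, error probabilities, and correction threshold below are illustrative, not the paper's): flip each bit of a design with probability p, then count the reads whose total error count stays within what a decoder could correct.

```python
import random

# Error-injection sketch with toy parameters (not the paper's simulation).

def flip_bits(bits, p, rng):
    """Flip each bit independently with probability p."""
    return [b ^ (1 if rng.random() < p else 0) for b in bits]

def recovery_rate(design, p, max_correctable, n_reads=500, seed=0):
    """Fraction of noisy reads whose error count is still correctable."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_reads):
        read = flip_bits(design, p, rng)
        errors = sum(a != b for a, b in zip(design, read))
        if errors <= max_correctable:
            ok += 1
    return ok / n_reads

design = [1, 0] * 24  # a toy 48-bit origami string
rate_low = recovery_rate(design, p=0.01, max_correctable=4)
rate_high = recovery_rate(design, p=0.20, max_correctable=4)
```

As in the reported simulations, per-origami recovery degrades sharply once the expected number of flipped bits exceeds the correction budget.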
(Fig. 5b). At higher error rates, the message recovery rate drops and, as expected, decreases rapidly as the number of errors per origami grows. An important feature of our algorithm is that the per-origami recovery rate can be low and the entire message still be recovered 100% of the time.

Fig. 5: dNAM origami and message recovery rates in the presence of increasing errors. Simulations were performed to determine the theoretical success rates for correctly decoding individual dNAM origami and recovering encoded messages. In (a), the mean numbers of dNAM origami needed to successfully recover messages of increasing length with (circles) or without (squares) redundant bits are plotted. In (b), the mean success rates for recovering both individual origami (triangles) and the entire message (diamonds) are plotted against the mean number of errors per origami (errors were randomly generated for simulated data). Simulation recovery rates are averages over all message sizes tested. For comparison, the mean success rate for experimental data is also plotted (open circles). For experimental data, the mean success was estimated by comparing the decode algorithm's results with those of the template-matching algorithm. All simulations were repeated multiple times; experimental data were derived from three independent DNA-PAINT recordings.

Discussion

Our results demonstrate a proof of concept for writing and reading digital information encoded in oligonucleotides. Because of the durability of DNA, dNAM has long-term future potential for archival information storage. Currently, the most widely used material for this purpose is magnetic tape. Recent advancements in tape have reported increasing two-dimensional areal information densities, though the current commercially available material typically has lower density.
although relevant only for reading throughput, not storage, the information density of tape can be compared to the dnam origami, which contains data domains spaced at  nm intervals to achieve an areal density of about gbit/cm . after accounting for using ~ / of the bits for indexing and error correction, this results in an areal data density of gbit/cm . it is possible to increase dnam areal density by placing a data domain at every turn in the dna helix (~ .  nm spacing), a distance that has been resolved by srm . other avenues to increasing density are also available, such as previously reported multiplexing techniques with multiple fluorophores and orthogonal binding sequences with different binding kinetics , and incorporation of each of these approaches is expected to impact reading throughput. in terms of durability, typical magnetic tape lasts for – years, while double-stranded dna is estimated to be stable for millions of years under optimal environmental conditions . with our optical microscope setup and origami deposition protocol, we can image the unique origami designs needed to store  kb of data (fig.  ), albeit in several recordings. we conservatively estimate it would take ~ recordings to ensure a % probability of successful data recovery given our current error rates. to efficiently handle larger datasets, it is necessary to improve the data capacity of individual origami, which will allow a larger range of indexing values and increase the proportion of bits dedicated to the data as compared to indexing, error-correction, and orientation. this could be achieved by engineering larger origami or by increasing data density—either by placing data sites closer together or by using multiplexing techniques to augment bit depth at each site (see si, supplemental calculations). 
our results also indicate that advancements in origami-based information storage and reading will require a coordinated effort between improvements in origami synthesis, substrate deposition, dna-paint, and coding algorithms. for example, our subsampling approach (fig.  ) showed that a decoding algorithm that corrected up to nine errors easily recovered our entire message, while algorithms that corrected only five or fewer errors were much less computationally expensive but rarely recovered our full message. this makes sense, given that most of the origami detected had more than five errors (fig.  e). we anticipate that reducing the number of errors by improving origami design and optimizing imager strand performance would allow more efficient algorithms for data recovery, which would, in turn, decrease the number of bits dedicated to error correction and thus increase information density. our fountain code algorithm is robust to randomly lost packets of information, as long as the receiver receives k + ε packets, where k is the minimum number of packets required to encode the file under perfect conditions (i.e., k is equal to the file size) and ε is the number of additional packets received. the probability of being able to decode the file is then (1 − δ), where δ is upper-bounded by 2^(−kε). this equation implies that, all things being equal, the larger the file size the greater the likelihood of successfully recovering the file at the receiver. normally, the transmitter continues to transmit droplets in a fountain code until the receiver acknowledges successful file recovery. in the case of dnam, this is not possible since the number of droplets must be fixed ahead of time to equal the number of origami. reducing the error rates, or improving error correction/detection, would have the added benefit of reducing the number of droplets, and hence origami, discarded by the fountain code.
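To make the droplet-loss robustness concrete, here is a toy peeling decoder in the spirit of the fountain-code scheme described above. This is a sketch, not the authors' released code: the function name `peel` and the small-integer stand-ins for the origami's data segments are ours, and droplets are represented simply as (index-set, xor-value) pairs.

```python
def peel(droplets, k):
    """Recover k segments from (index-set, value) droplets by repeatedly
    solving droplets that reduce to a single unknown segment."""
    recovered = {}
    pool = [[set(idx), val] for idx, val in droplets]
    changed = True
    while changed and len(recovered) < k:
        changed = False
        for item in pool:
            idx, val = item
            for i in list(idx):          # strip segments we already know
                if i in recovered:
                    idx.discard(i)
                    val ^= recovered[i]
            item[1] = val
            if len(idx) == 1:            # degree-one droplet -> new segment
                i = idx.pop()
                if i not in recovered:
                    recovered[i] = val
                    changed = True
    return [recovered[i] for i in range(k)] if len(recovered) == k else None


# redundant droplets mean the message survives losing some of them
segments = [3, 1, 4, 1, 5]
droplets = [(frozenset({i}), segments[i]) for i in range(5)]
droplets += [(frozenset({i, i + 1}), segments[i] ^ segments[i + 1])
             for i in range(4)]
survivors = [d for d in droplets if d[0] != frozenset({2})]  # one droplet lost
```

With the lost droplet, `peel(survivors, 5)` still reconstructs all five segments, because the pair droplet covering segment 2 reduces to degree one once its neighbor is known; with too few droplets, `peel` returns `None`, mirroring the decode failure case discussed above.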
these improvements would make it easier to determine the minimum number of droplets per dna origami needed to ensure robust file recovery while increasing information density even further. the lower abundance and higher error rate of origami index  (fig.  ) indicate that some designs have defects that we could not detect by afm and/or srm. careful defect analysis indicates that incorporated but inactive data domains play a greater role in producing errors than unincorporated staple strands . future dnam research should focus on sequence optimization to minimize variation in hybridization rates and the formation of off-target structures . it should also include the use of larger dna origami and increased bit depth through multiplexing. future work on dnam will also need to address scalability if dnam is to compete with established memory storage systems. in this report, we describe the storage of a small amount of data in order to illustrate the potential of dnam. scaling to much larger data sets requires substantial engineering improvements in both write and read speeds (see fig. s  and supplemental calculations for further comparisons). for writing, the rate-limiting step is the selection of the oligonucleotide data strands. in our lab, we use an epmotion liquid-handling system to pipette oligonucleotides. while this machine can handle thousands of sample transfers per day, it limits the write speed to thousands of bits per day, as each data strand encodes  bit. as far as we are aware, the fastest liquid-transfer system available is the echo® liquid handler, which is reported by the manufacturer to process ~ ,  samples per day, allowing ~ .  mb per day for -bit data strands. for dnam to reach write speeds equivalent to tape (hundreds of mb per second) using laboratory hardware, significant increases either in the number of bits per strand and the rate of sample transfer, or in the rate at which dna oligonucleotides can be synthesized, will be necessary.
while writing information into dna at a competitive rate is a serious challenge facing the entire dna-memory field , and is likely to undergo rapid innovation as the market for synthesized dna increases, the approach we have used here, in which a library of premade oligonucleotides is drawn on, is currently the fastest approach for dnam. due to the inherently parallel nature of dna-paint imaging, scaling up the read speed of dnam to handle large amounts of data is arguably less of a challenge. the rate-limiting factors for dna-paint are the camera integration time needed to collect sufficient photons to resolve an emitter and the number of emitters that can be identified in a single frame of a recording. the latest report on dna-paint by strauss and jungmann describes a -fold speed-up in data collection for origami very similar to those we imaged in dnam . in their experiments,  nm resolution of the binding site was demonstrated with  ms camera integration times. another recent innovation, using deep learning to rapidly identify the centroids of overlapping emitter blink events (deep-storm ), has been shown to be able to process dense srm data (~ emitters/µm ). taken together, we estimate that by using densely deposited dnam origami with data strands placed  nm apart, an emccd camera with a  ×  imaging array, the deep-storm algorithm, and strauss and jungmann's -fold speed-up methodology, we could currently collect data at a rate of ~  mb per day (see si, supplemental calculations). further improvements in reading speed could be achieved by increasing the imaging array area, via larger sensors or multiple cameras, and by using multicolored probes or three-dimensional information to collect multiple bits worth of data simultaneously from one site. our hope is that this dnam prototype will motivate this work and more.
dna is an emerging material for data storage due to its high information density, high durability, low energy of operation, and the declining costs of synthesis . the traditional approach in the field is to design and synthesize unique oligonucleotides that encode data directly into their sequence. these data are then recovered by sequencing the pool of oligonucleotides. in contrast, dnam takes advantage of another property of dna: its programmability. by encoding binary data into dna origami and reading it as spatially and temporally distinct hybridization events, dnam decouples information recovery from sequencing. editing the data is trivial through the inclusion or exclusion of sequence extensions from a library of staple strands. data strands can be stored directly or incorporated into origami and then stored, separating the d storage density from the d reading density. in addition, dnam is a massively parallel process because the large optical field of view allows tens of thousands of origami to be imaged simultaneously, and the number of optical read heads is proportional to the concentration of the imager strands in solution. rather than averaging thousands of dna-paint images together to resolve the digital data , individual origami were read here using custom encoding, decoding, and error-correction algorithms. our algorithms combined fountain codes with bi-level parity codes to significantly enhance data retention, creating a multi-layer error correction scheme that encoded index, orientation, parity, and checksum bits into the origami. as a proof of concept, several bytes of data were recovered in a single dna-paint recording. even when the dna origami recovery rate was poor (as low as %), the message was recovered % of the time.
as an alternative platform for testing dna-memory technology, dnam offers a pathway to explore the advantages and disadvantages of dna as a material for information storage and encryption, as previously demonstrated by zhang et al. . because of the scaling challenges of using dna as a memory material, this is particularly true for applications like barcoding that require a limited amount of data to have high information density, redundancy, and copy number. methods the materials purchased for this study, and their respective vendors, are outlined in table  . all other reagents were obtained from sigma. table  materials. buffers as previously described , two buffers were used to prepare and image dna origami: a deposition buffer and an imaging buffer. the deposition buffer contained . × tbe and  mm mgcl . the imaging buffer was the deposition buffer supplemented with  nm pcd,  mm trolox,  nm imager strands, and  mm pca. pca was added to the imaging buffer immediately before the start of a dna-paint recording. encoding algorithm the encoding algorithm used a multi-layer error correction scheme to encode message data bits along with the index, orientation, and error correction bits onto multiple origami (fig. s ). at the message level, the algorithm used a fountain code to encode the data. let m be a message string composed of a sequence of n bits. the fountain code algorithm first divides m into k equally sized and non-overlapping substrings s1, s2, …, sk, where the concatenation s1s2…sk = m, and then systematically combines one to many segments using the binary xor operation to form multiple data blocks called droplets. the number of segments d used to form each droplet is typically drawn from a distribution based on the soliton distribution: $$p(1) = 1/k$$ ( ) the soliton distribution ensures that the algorithm encodes the optimal number of single-segment droplets necessary for the decode step.
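The degree-sampling and xor steps just described can be sketched as follows. This is an illustrative sketch, not the authors' released implementation: it samples the ideal soliton distribution (p(1) = 1/k, p(d) = 1/(d(d−1)) for d = 2..k) by inverting its cumulative distribution, and uses plain ints as stand-ins for the bit-string segments; the function names are ours.

```python
import math
import random

def soliton_degree(k, rng):
    """Sample a droplet degree d from the ideal soliton distribution:
    p(1) = 1/k and p(d) = 1/(d(d-1)) for d = 2..k, by cdf inversion."""
    u = rng.random()
    if u < 1.0 / k:
        return 1
    # cdf(j) = 1/k + (1 - 1/j) for j >= 2; solve cdf(d) >= u for d
    return max(2, math.ceil(1.0 / (1.0 + 1.0 / k - u)))

def make_droplet(segments, rng):
    """XOR d randomly chosen, unique segments (each picked uniformly)
    into a single droplet; returns (segment indices, xor value)."""
    k = len(segments)
    d = soliton_degree(k, rng)
    idx = sorted(rng.sample(range(k), d))   # which segments went in
    val = 0
    for i in idx:
        val ^= segments[i]
    return idx, val
```

Because xor is its own inverse, a droplet can later be "undone" by xor'ing known segments back out, which is exactly what the fountain-code decode step exploits.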
once the number of segments d for a droplet is determined, the droplet is formed by xor’ing d randomly selected, unique segments from m, with each segment being selected with probability /k. for our experiments, we divided the message ‘data is in our dna!\n’ into segments of bits each. the segments were then combined via an xor in different combinations using the fountain code algorithm to form the droplets. while the theoretical minimum number of -bit droplets required to decode the message is , the redundancy provided by the additional droplets ensured that the message would be recoverable in all cases involving the loss of one droplet, and in some cases with the loss of up to five droplets (fig.  ). after generating the droplets using fountain codes, the encoding algorithm encoded each droplet onto fifteen  ×  matrixes, and sequentially added index and orientation marker bits, computed and added checksum bits, and then added parity bits (fig.  b). these matrixes were used to construct origami structures, with a one-to-one mapping between the matrixes and the origami’s data domains. figure  b shows the layout of how droplet information was encoded onto each origami, composed of bits of droplet data (green coloring in fig.  b), four indexing bits (red), four orientation bits (magenta), four checksum bits (yellow), and twenty parity bits (blue). it is important to note that the layout of the data, orientation, and index bits relative to the corresponding parity and checksum bits is invariant to rotation, which made it possible for the error correction algorithm to perform error detection and recovery before determining the orientation (fig. s ). this led to more robust data recovery. dna origami folding rectangular dna origami structures (~  ×   nm) were designed based on previous work by rafat et al. with potential docking strand sites arranged in a  ×  matrix with  nm spacing. then, using the protocol described by schnitzbauer et al. 
a mixture of extended and unmodified staple strands (si tables s  and s ) was selected to fold the m  scaffold into the designed shape, with extended strands located at the ' ' positions described in the design matrix (si table s ). as described in the introduction, an extended staple strand has a binding site for the m  imager strand, while unmodified strands bind solely to the scaffold dna to induce folding. using this method, origami designs were created that matched the matrixes output by the encoding algorithm. we assembled individual origami designs by combining  nm m mp  with × unmodified strands, × extended strands, × tae and  mm mgcl  (in nuclease-free water;  µl total volume) and folding in a mastercycler nexus thermal cycler (eppendorf) using the following heating cycle: [  min  °c,  min  °c, then from  °c to  °c over  h]. we purified the origami by running them on an ice-cooled . % agarose gel containing . × tbe and  mm mgcl , excising the single sharp band, and collecting the exudate of the crushed gel piece. sharp triangle origami used as fiducial markers were prepared similarly, as previously described (see si table s  for oligonucleotide sequences). all purified origami were stored in the dark at  °c until use. glass coverslip preparation borosilicate glass coverslips (  ×   and  ×   mm, #  gold seal coverglass) were sonicated in . % (v/v) liquinox and nano-pure water (  min in each) to remove contaminants and dried at  °c for at least  min. fiducial markers (  µl of .  pm aunps) were deposited onto the coverslips for  min at room temperature. the labeled coverslips were rinsed with methanol and nano-pure water and stored at  °c prior to use. dna origami deposition onto coverslips the glow discharge technique previously described by green  was used to deposit dna origami onto glass coverslips using an air-plasma vacuum glow-discharge system.
briefly, coverslips that had been cleaned and labeled with fiducial markers were exposed to glow discharge generated using an electrode coupled v electro-technic bd- a high-frequency generator under torr of vacuum for  s. for dna-paint analysis, a sticky-slide flow cell (~  µl channel volume) was glued to the coverslip, dna origami were then deposited by introducing  µl of .  nm origami (a mixture of dnam origami and sharp triangle origami added as additional fiducial markers, in deposition buffer) into the flow chamber and incubated for  min at room temperature. after deposition, the flow chamber was rinsed with ml of deposition buffer (no dna origami) and refilled with imaging buffer. when performing afm measurements on samples previously used for dna-paint, a custom fluid chamber, modified from jungmann et al. , was used. a  ×   mm coverslip was glued to a microscope slide using double-sided sticky tape with the addition of a thin layer of gel sealant—to both seal any gaps and weaken the binding of tape to the glass. once dna-paint imaging had been performed the sealant allowed the coverslip to be easily removed for further afm analysis. fluorescence microscopy dna origami was imaged below the diffraction limit of light via dna-paint using an inverted nikon eclipse ti microscope from nikon instruments in total internal reflectance fluorescence (tirf) mode. the images were acquired using: an optical feedback focal-drift correction system developed in-house or the perfect focus system from nikon instruments; an oil-immersion cfi apochromat × tirf objective with a . numerical aperture, plus an extra × . magnification from nikon instruments; and a / / /  nm laser quad band set tirf filter cube from chroma. a  nm laser source excited fluorescence from the dna-paint imager strands within an evanescent field extending a few hundred nanometers above the surface of the glass coverslip. 
the emitted fluorescence was imaged onto the full chip with  ×  pixels ( pixel =   μm) using a proem emccd camera from princeton instruments at a  ms exposure time (~ frames/s). during an experimental recording, each of the individual data strands within a dnam origami's matrix transiently and repeatedly bound an imager strand, emitting a signal and creating a series of blinks. images with blinking events were recorded into a stack (typically ,  frames per recording) using nikon nis-elements version . .  (nikon instruments) or lightfield version  (princeton instruments) prior to processing and analysis. dna-paint fluorophore localization after recording a dna-paint stack, the center positions of signals (localizations) emitted by imager probes transiently binding to dna origami docking strands were identified using the imagej thunderstorm plugin . the localizations were rendered and then drift corrected using the picasso-render software package, as described by schnitzbauer et al. . data visualization and peak fitting of image data for psf analysis were performed using originpro version b (originlab). localization data processing a custom algorithm was developed for identifying clusters of localizations, determining the maximum likelihood position of the emitters, and generating binary matrix data. the algorithm selected localization clusters at random from the localization list. to do this, it sampled random points in the list, determined the average position of nearby localizations, and counted the localizations within a radius (r) and the localizations within a band r < r <  r. the algorithm accepted clusters if the counts in the inner circle were greater than a threshold and the counts in the outer band were less than % of the counts in the inner circle. this ensured selection of bright clusters that were isolated from other clusters. the algorithm then fit the cluster localizations to a grid of emitters.
an idealized grid was created using the average dna-paint image produced by several thousand individual origami structures of the same architecture used in this work. the algorithm performed fitting using a maximum likelihood estimation for the likelihood function: $$l\left( {i,x_c,y_c,\theta ,{\Delta}x_g^2,b} \right) = \mathop {\prod }\limits_i \left( {\mathop {\sum }\limits_k \frac{{i_k}}{a}\exp \left( { - \frac{{\left( {x_i - x_k\left( {x_c,y_c,\theta } \right)} \right)^2 + \left( {y_i - y_k\left( {x_c,y_c,\theta } \right)} \right)^2}}{{{\Delta}x_i^2 + {\Delta}x_g^2}}} \right)} \right) \ast \frac{b}{a} \ast p(n,i,b)$$ ( ) where ik is the intensity of the kth emitter, (xc, yc) is the center position of the grid, θ is the rotation angle of the grid, Δxg is the global lateral uncertainty caused by error in the drift correction, b is the background, Δxi is the lateral position uncertainty of localization i reported by the thunderstorm analysis described above, (xi, yi) is the position of the ith localization, (xk, yk) is the position of the kth emitter as a function of the center position and rotation of the grid, a is the area of the cluster, and n is the number of localizations found in the cluster. a is a normalization constant given by: $$a = \pi \left( {{\Delta}x_i^2 + {\Delta}x_g^2} \right)$$ ( ) p(n,i,b) is the probability of finding n localizations given the intensity of each grid point and the background intensity, determined from the poisson distribution of mean value n. this likelihood function gives the probability of finding localizations at all of the observed sites given a set of point emitters at the grid sites with intensities ik and background intensity b. the optimization utilized the l-bfgs-b method of the minimize function provided by scipy to minimize −log(l), subject to the constraint that all intensities are positive.
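As a toy illustration of the likelihood above, the sketch below evaluates a simplified version of it in pure Python: the Poisson count term p(n,i,b) is omitted, a single shared intensity is used for all emitters, and the background enters as a flat additive term per localization. All names and numeric values are hypothetical; this is not the analysis code (which is available on the project's github).

```python
import math

def cluster_likelihood(locs, grid, intensity, bkg, area, dx_g=0.0):
    """Simplified grid-fit likelihood: each localization (x, y, dx_i)
    is explained by a mixture of Gaussians centered on the emitter
    grid plus a flat background term (bkg / area)."""
    likelihood = 1.0
    for x, y, dx_i in locs:
        var = dx_i ** 2 + dx_g ** 2
        a = math.pi * var                      # normalization constant
        s = sum(intensity / a *
                math.exp(-((x - xk) ** 2 + (y - yk) ** 2) / var)
                for xk, yk in grid)
        likelihood *= s + bkg / area
    return likelihood
```

Localizations that sit near the grid's emitter sites yield a larger likelihood than the same localizations evaluated against a shifted grid, which is the property the optimizer exploits when fitting the grid's center and rotation.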
signals that did not align to the  ×  grid were filtered out to minimize the effect of fragmented origami and to reduce inadvertent inclusion of the triangular origami fiducial markers in the results. the algorithm then assigned the emitters a binary value ( or ) using an empirically derived threshold value. this binary matrix data was decoded using the decoding algorithm described below. in parallel with this blind cluster analysis, the processing algorithm also carried out a template matching step to more reliably identify individual origami and analyze their errors. this additional step used the known origami designs as templates, matching each observed origami to the best fit based on the total number of errors. this method was more robust to higher error rates than the blind cluster analysis and allowed more origami to be identified for image averaging and error analysis (fig.  ). it should be noted, however, that the template matching method cannot be considered a data reading method because it requires a priori knowledge of the data being analyzed. for this reason, none of the analysis of the recovery rates or data density discussed here used data obtained from template matching. decoding algorithm the decoding algorithm (fig. s ) utilized a multi-layer error correction/encoding scheme to recover the data in the presence of errors. the algorithm first works at the dnam origami level (step , below), using the parity and checksum bits to attempt to identify and correct errors and recover the correct matrix. after recovery, the algorithm uses binary operations to recover the original data segments from the droplets (step , below).
decoding algorithm: step –error correction given raw binary matrix data m for a single dnam origami (the output from the localization data processing step), the matrix decoding algorithm determined which, if any, bits were associated with checksum and parity errors by calculating the bi-level matrix parity and checksum values, as described in fig. s . any discrepancies between the calculated parity and checksum values and the values recovered from the origami were noted, and a weight for each of the bits associated with the errant parity/checksum calculation was deduced. if no parity/checksum errors were detected for a particular matrix, then the data was assumed to be accurate, and the algorithm proceeded to extract the message data. to determine the site(s) of likely errors, the decoding algorithm first determined a weight for every cell in m, beginning with data cells (the cells containing droplet, index, or orientation bits) and proceeding to parity and checksum cells. let \(p_{c_{ij}}\) be the set of parity functions calculated over a given data cell cij. then for each data cell cij: $$x_{ij} = \mathop {\sum }\limits_{f_{c_{pq}} \in p_{c_{ij}}} \left| {c_{pq} - f_{c_{pq}}\left( {\mathbf{m}} \right)} \right|$$ ( ) where cpq is the parity cell where the expected binary value of f is stored. the weight for each parity cell was then calculated based on the number of data cells associated with it that have non-zero weights. more formally, let cij be a parity cell and \(d_{c_{ij}}\) be the set of data cells used in the calculation of cij. then the weight xij for each parity cell cij is: $$x_{ij} = \mathop {\sum }\limits_{c_{pq} \in d_{c_{ij}},\; x_{pq} > 0} {\mathop{\rm{sgn}}} \left( {x_{pq}} \right)$$ ( ) the higher the weight value, the higher the probability that the corresponding cell had an error. an overall score for the matrix was then calculated by summing over all xij and normalizing by the number of correctly matched parity bits.
this value was designated as the overall weight of the matrix. higher values of this weight correspond to matrixes with more errors. $${\mathrm{overall}}\,{\mathrm{matrix}}\,{\mathrm{weight}} = \frac{{\mathop {\sum }\nolimits_{i} \mathop {\sum }\nolimits_{j} x_{ij}}}{{{\mathrm{number}}\,{\mathrm{of}}\,{\mathrm{matched}}\,{\mathrm{parity}}\,{\mathrm{bits}}}}$$ ( ) the algorithm then performed a greedy search to correct the errors using a priority queue ordered by the overall matrix weight (fig. s ). the algorithm began by iteratively altering each of the probable site errors and computing the overall matrix weight of the modified matrix for each, placing each potential bit flip into a priority queue where the flips that produced the lowest overall weights had the highest priority. at each step, the algorithm selected the bit flip associated with the highest priority in the queue and then repeated this process on the resulting matrix. this process continued until the algorithm produced a matrix with no mismatches or until it reached the maximum number of allowed bit flips ( for our simulation/experiment). if it reached the maximum number of flips, it returned to the queue to pursue the next highest priority path. if the algorithm found a matrix with no mismatches, it then checked the orientation bits and oriented the matrix accordingly. the droplet and index data were then extracted and passed to the next step. if the queue was emptied without finding a correct matrix, the algorithm terminated in failure. decoding algorithm: step –fountain code decoding after extracting the droplet and index data from multiple origami, the algorithm attempted to recover the full message (fig. s ). once decoded, each droplet contained one or more segments xor'ed together. using the recovered indexes, the algorithm determined how many and which segments were contained in each droplet.
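The step-1 greedy search described above can be sketched with a toy parity layout. This is a stand-in, not the paper's code: generic row/column parity checks replace the origami's bi-level parity and checksum layout, the matrix weight is simplified to a count of failed checks, and the function names are ours. The nine-flip cap mirrors the 'maximum allowable number of errors' parameter.

```python
import heapq

def mismatches(bits, checks):
    """Return indices of failed checks; checks = [(positions, parity)]."""
    return [c for c, (pos, p) in enumerate(checks)
            if sum(bits[i] for i in pos) % 2 != p]

def greedy_repair(bits, checks, max_flips=9):
    """Best-first search over bit flips, ordered by the number of
    unsatisfied parity checks (a stand-in for the matrix weight)."""
    start = tuple(bits)
    heap = [(len(mismatches(start, checks)), 0, start)]
    seen = {start}
    while heap:
        bad, nflips, state = heapq.heappop(heap)
        if bad == 0:                     # all checks satisfied
            return list(state)
        if nflips == max_flips:          # abandon this path, try the queue
            continue
        # only flip bits that participate in a failed check
        suspects = {i for c in mismatches(state, checks)
                    for i in checks[c][0]}
        for i in suspects:
            nxt = list(state)
            nxt[i] ^= 1
            nxt = tuple(nxt)
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(
                    heap, (len(mismatches(nxt, checks)), nflips + 1, nxt))
    return None                          # queue emptied: decode failure
```

For a 2×4 bit grid with row and column parity checks, flipping a single bit produces exactly one failed row check and one failed column check, so the search finds the true word after one flip, just as intersecting parity failures localize errors in the origami matrix.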
to decode the message, the algorithm maintained a priority queue of droplets based on the number of segments they contained (their degree), with the lowest degree droplets having the highest priority. the algorithm looped through the queue, removing the lowest degree droplet, attempting to use it to reduce the degree of the remaining droplets using xor operations, and re-queuing the resulting droplets. upon finding a droplet of ‘degree one’ it stored it as a segment for the final message. if all segments were recovered, the algorithm terminated successfully. data simulation test to test the robustness of our encoding and decoding algorithms, origami data were simulated with randomly generated messages and errors. first, random binary messages of size m were created (for m =  to , bits, at -bit intervals). these messages were then divided into m/b equally sized segments, where b is the number of data bits to be encoded onto an individual origami. for fixed-size origami, larger messages necessitated a smaller b, as more bits had to be dedicated to the index. in these cases, b varied between eight (for m =  , ) and twelve (for m =  ). after determining message segments, droplets were formed using the fountain code algorithm and encoded onto origami, along with the corresponding index, orientation, and error-correcting bits. ten in silico copies of each unique origami were created, and – bits flipped at random to introduce errors. the origami was decoded as described above. reporting summary further information on research design is available in the nature research reporting summary linked to this article. code availability dna-paint images were analyzed using custom and publicly available codes (as indicated). the encoding/decoding algorithms were written in-house using python, version . . . the source codes for the encoding, decoding, and localization algorithms are available on github at https://github.com/boisestate/nam. the schematic in fig.  
c of digital nucleic acid memory was derived from a model created using nanodesign (www.autodeskresearch.com/projects/nanodesign). data availability the original dna-paint recordings and drift-corrected centroid localization data that support the findings of this study have been deposited in the zenodo repository with the identifier “https://doi.org/ . /zenodo. ”. source data are provided with this paper. any other relevant data are available from the authors upon reasonable request. references .victor, z. semiconductor synthetic biology roadmap. – https://doi.org/ . /rg. . . . . ( ). .itrs. international technology roadmap for semiconductors, results. itrpv vol. – https://www.semiconductors.org/wp-content/uploads/ / / _ -itrs- . -executive-report- .pdf. accessed st march . ( ). .reinsel, d., gantz, j. & rydning, j. the digitization of the world-from edge to core. idc white paper us https://www.seagate.com/files/www-content/our-story/trends/files/idc-seagate-dataage-whitepaper.pdf. accessed st march . ( ). .zhirnov, v., zadegan, r. m., sandhu, g. s., church, g. m. & hughes, w. l. nucleic acid memory. nat. mater. , – ( ). ads  cas  article  google scholar  .organick, l. et al. random access in large-scale dna data storage. nat. biotechnol. , – ( ). cas  article  google scholar  .goldman, n. et al. towards practical, high-capacity, low-maintenance information storage in synthesized dna. nature , – ( ). ads  cas  article  google scholar  .grass, r. n., heckel, r., puddu, m., paunescu, d. & stark, w. j. robust chemical preservation of digital information on dna in silica with error-correcting codes. angew. chem. int. ed. , – ( ). cas  article  google scholar  .bornholt, j. et al. a dna-based archival storage system. acm sigarch comput. archit. n. , – ( ). article  google scholar  .shipman, s. l., nivala, j., macklis, j. d. & church, g. m. molecular recordings by directed crispr spacer acquisition. science.
acknowledgements
this research was funded in part by the national science foundation (eccs ), the semiconductor research corporation, and the state of idaho through the idaho global entrepreneurial mission and higher education research council.

author information
author notes: christopher m. green, present address: center for bio/molecular science and engineering, u.s. naval research laboratory, washington, dc, usa. reza zadegan, present address: department of nanoengineering, joint school of nanoscience and nanoengineering, north carolina a&t state university, greensboro, nc, usa. these authors contributed equally: george d. dickinson, golam md mortuza, william clay, luca piantanida.

affiliations:
micron school of materials science and engineering, boise state university, boise, id, usa: george d. dickinson, william clay, luca piantanida, christopher m. green, chad watson, elton graugnard, reza zadegan & william l. hughes
department of computer science, boise state university, boise, id, usa: golam md mortuza & tim andersen
department of biological sciences, boise state university, boise, id, usa: eric j. hayden
department of electrical and computer engineering, boise state university, boise, id, usa: wan kuang

contributions
w.l.h. conceived the concept. e.j.h., t.a., w.k., e.g., r.z., and w.l.h. designed the study. c.w., e.j.h., t.a., w.k., e.g., and w.l.h. supervised the work. c.w. managed the research project. g.d.d. and l.p. synthesized the dna origami and performed dna-paint imaging. l.p. carried out afm imaging and analysis. t.a. and g.m.m. developed the encoding-decoding algorithms and necessary software, performed data processing, and generated the simulations. g.d.d. and w.c. developed the image-analysis software and analyzed the dna-paint recordings. c.m.g. performed preliminary experiments and contributed critical suggestions to experimental design. all authors prepared the manuscript.

corresponding author
correspondence to william l. hughes.

ethics declarations
competing interests: the authors declare no competing interests.

additional information
peer review information: nature communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. peer reviewer reports are available.
publisher’s note: springer nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

supplementary information: supplementary information, peer review file, reporting summary, source data.

rights and permissions
open access: this article is licensed under a creative commons attribution international license, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the creative commons license, and indicate if changes were made. the images or other third party material in this article are included in the article’s creative commons license, unless indicated otherwise in a credit line to the material. if material is not included in the article’s creative commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. to view a copy of this license, visit http://creativecommons.org/licenses/by/ . /.

cite this article
dickinson, g.d., mortuza, g.m., clay, w. et al. an alternative approach to nucleic acid memory. nat commun ( ). https://doi.org/ . /s - - -y
received: february; accepted: march; published: april
news: el salvador, binance vs malaysia, goldman sachs non-blockchain etf, virgil griffith
th july - by david gerard

el salvador calls cardano

el faro got hold of a presentation from the cardano foundation and whizgrid — a white-label crypto exchange provider — to president bukele’s bitcoin team on june , the day before bukele announced el salvador’s embrace of bitcoin. the zoom videos leaked last week revealed the plan to release a new electronic colón-dollar. this fresh leak outlines how to introduce the colón-dollar, pay welfare subsidies with the system, and use qr codes for daily transactions in shops.
that is: they presented a completely generic electronic payment system, that doesn’t benefit from cryptocurrency in any way. (reader tv comments: “the first slide in that presentation is so generic and slapped together that they neglected to replace the euro sign with a dollar sign.”) the story is in spanish, but the leaked slides are in english. the presentation is nine slides at the bottom of the page on el faro — you need to swipe sideways to go to the next slide. [el faro] “cryptoassets as national currency? a step too far” by tobias adrian and rhoda weeks-brown, imf. it’s clear from how adrian and weeks-brown talk about “cryptoassets,” and that the one they name is “bitcoin,” that this is about el salvador and nothing else. this is a perfect response from the people who know about these things, and adrian in particular — who wrote a paper in may that pretty closely anticipated facebook’s june plan for libra. i don’t expect president bukele to pay very much attention. [imf] everybody still hates binance how cryptocurrency exchanges work: withdrawals are temporarily unavailable due to unscheduled scheduled maintenance. funds are absolutely safe. scurrilous rumours that we have been hacked, or “hacked,” are entirely untrue, and we will sue anyone spreading such. to file as an unsecured creditor, please contact our receivers, grabbit, skimm & runne accountants, at their mail boxes etc in the caymans. good news for binance — the securities commission of malaysia officially recognises binance! and cz personally! the securities commission has announced actions against binance for “illegally operating a digital asset exchange (dax)” — binance is not registered in malaysia. [press release] the action is against binance holdings limited (cayman islands), binance digital limited (uk), binance uab (lithuania), binance asia services pte ltd (singapore), and changpeng zhao himself. 
binance must disable binance.com and its mobile app in malaysia by august, stop all marketing and email to malaysians, and restrict malaysians from binance’s telegram group. cz is personally ordered to make sure these all happen. binance was included on the securities commission’s investor alert list in july . the list also contains a pile of “potential clone entities” using “binance” in their names. [sc, archive] the securities commission finishes: “those who currently have accounts with binance are strongly urged to immediately cease trading through its platforms and to withdraw all their investments immediately.” couldn’t put it better myself. binance is stopping margin trading against euros, pounds or australian dollars from august, and has stopped all margin trading in any currency in germany, italy and the netherlands. [reuters; reuters] binance is totally not insolvent! they just won’t give anyone their cryptos back because they’re being super-compliant. kyc/aml laws are very important to binance, especially if you want to get your money back after suspicious activity on your account — such as pressing the “withdraw” button. please send more kyc. [binance] cz is looking for a patsy to stand up front while he keeps collecting the money — a replacement ceo for binance, with a “strong regulatory background,” so cz can “contribute to binance and the bnb ecosystem. i don’t have to be ceo to do that.” [coindesk] the long blockchain of bitcoin etfs goldman sachs proposes a “defi and blockchain equity” exchange-traded fund! [sec filing] the goldman sachs innovate defi and blockchain equity etf (the “fund”) seeks to provide investment results that closely correspond, before fees and expenses, to the performance of the solactive decentralized finance and blockchain index (the “index”). solactive doesn’t have an index of that name. 
solactive told cryptonews that goldman would be using their “solactive blockchain technology performance index” — which is a list of global tech companies that have, at most, put out a blockchain-themed press release. no riot blockchain, no microstrategy, no coinbase — instead, we have nokia (who?) and accenture. when a client thinks they want “blockchain” exposure, the advisor could just recommend this. [cryptonews] solactive later told cryptonews that they were putting together a new index for goldman’s etf. though the sec filing says that the etf is “not sponsored, promoted, sold or supported in any other manner by solactive ag.” we’ll see what the eventual list looks like.

regulatory clarity

the new jersey cease and desist order against blockfi offering its blockfi interest accounts has been delayed, and will not go into effect before september . [twitter] vermont’s department of financial regulation has also asked blockfi to show cause within days why the company should not be issued a cease and desist on its blockfi interest accounts. vermont asked blockfi about this back in january , and considered blockfi’s response “unsatisfactory.” [coindesk] south korea changes its tax rules to make it easier to seize crypto to pay back taxes. [reuters] the new australian securities and investments commission chair, joseph longo, warns about unregulated cryptocurrency trading “and other economic threats to pandemic recovery.” he means investment scams. [abc]

i fought the law

for tether’s criminal investigation by the department of justice, and the coincidental spike in the price of bitcoin, see yesterday’s post. coiners’ continuing assertion that anyone should assume good faith in bitfinex or tether, when they’re known bad actors in extensively documented, and indeed legally binding, detail, is upton sinclair’s law — “it is difficult to get a man to understand something, when his salary depends on his not understanding it” — on pcp.
here’s me on a turkish crypto blog talking about tether and usdc. google translate gives a reasonable rendition of my original answers. [pa sosyal] virgil griffith is the ethereum foundation developer who evangelised ethereum to north korea and got himself arrested in . griffith was out on bail, but violated his bail conditions by trying to access his coinbase account — he got his mother to log in, which totally doesn’t count, right? “though the defendant is a bright well-educated man, his method of circumvention of the order was neither clever nor effective,” said the judge. griffith is back behind bars until his hearing in september. [amy castor; order, pdf] na-no some nano fan on twitter wanted me to listen to a podcast featuring nano foundation director george coxon. i said i’d listen for $ — very cheap consulting indeed. after one guy tried paying me in nano, and multiple people attempted to explain to him in small words why digital pogs do not in fact constitute dollars, another guy sent $ in actual money via paypal. as a man of my word, i live-tweeted the ordeal. you’ll be unsurprised to hear that this podcast did not, in fact, sell me on nano. nano is an ambitious but insignificant research coin. and not even a very fast one — at transactions per second (for comparison, a private ethereum instance can do tps), it’s not quite taking over the world any time soon. and even without proof-of-work, nano still wants to use bitcoin’s broken and incompetent conspiracy theory economics — it’s all austrian economics, bitcoin variant. if this podcast is the sort of marketing that convinces nano’s bloody awful twitter pumpers, i can see why they’re like they are. the nicest thing i can say is they’re not as bad as the xrp or iota pumpers from back in the day. the next time someone wants me to listen to their crypto podcast, it’ll be $ . that’s actual-money usd, not your altcoin. a third podcast will probably be $ . 
i’ll double the price from there until the requests stop. [twitter] things happen no, amazon is not accepting bitcoin, you idiots. this rumour was entirely based on a single blue-sky job advertisement for a “digital currency and blockchain product lead” to “develop the case for the capabilities which should be developed” — which crypto promoters then pushed as far as they thought they could. [reuters; amazon, archive] microstrategy has announced its q earnings. businesses still buy their software — the useful thing that mstr does — giving the company “one of our best operational quarters in our software business in years, highlighted by % revenue growth” to $ . million. however, “digital asset impairment losses” are $ . million — it seems that bitcoin went down. but that’s fine — “going forward, we intend to continue to deploy additional capital into our digital asset strategy.” good luck with that. [press release] the huobi and okcoin crypto exchanges are closing their mainland chinese subsidiaries — both companies’ operations are now thoroughly hong kong-based. [scmp] not your json, not your coins. but the ethereum foundation turns out to be a touchable — and suable — entity. someone bought , eth in the original ethereum crowdfunding in , the download of the private key failed, and the foundation can’t give them a backup copy. the foundation offered the buyers , eth in a settlement, but they want all , eth. [wjla] hot takes bruce schneier: bitcoin isn’t usable as a currency. so if you’re getting bitcoins in bulk from anywhere other than buying them, you’re likely a criminal. [blog post] notes on mondex, an early stored-value card, and how a lot of what they did in the s is relevant to cbdcs today. [chyp] samantha keeper: the nft rube goldberg machine, or, why is nft art so lazy? “art and automation’s merger long predates cryptoart’s use of procedural generation. 
you’ll never hear nft sellers talk seriously about that history, though, cause it reveals not just nft art’s contradictions, but also its cynical laziness.” [storming the ivory tower]

even darkweb black markets don't want to deal with antivaxxers and covid deniers. — malwaretech (@malwaretechblog), july

hard to believe it's only been six months since the gamestop short squeeze kicked off a populist uprising that will fundamentally change capitalism forever. — osita nwanevu (@ositanwanevu), july

comments

tv says: the first slide in that presentation is so generic and slapped together that they neglected to replace the euro sign with a dollar sign.

david gerard says: lol, well spotted!

elsie h. says: you’re not familiar with nokia, the world-famous manufacturer of rubber boots?
i’m not sure what rubber boots have to do with the blockchain, but they’d probably be a good investment for goldman sachs regardless.
oh, how i miss you tonight

single by jeanne black
b-side: "a little bit lonely"
released: december
genre: pop
label: capitol
songwriter(s): benny davis, joe burke, mark fisher
jeanne black singles chronology: "you'll find out" / "oh, how i miss you tonight" / "don't speak to me"

"oh, how i miss you tonight" is a popular song, published in , written by benny davis, joe burke, and mark fisher. popular recordings of the song in were by ben selvin, benson orchestra of chicago, lewis james and irving kaufman.[ ]

other notable recordings
bing crosby - recorded july , for decca records with john scott trotter and his orchestra.[ ]
perry como - recorded on november , for rca victor with russ case and his orchestra.[ ]
jeanne black released a version of the song as a single which reached # on the u.s.
pop chart.[ ]
jim reeves[ ]
glenda collins - decca f
frank sinatra - included in his album all alone.
nat king cole - for his album dear lonely hearts.
frank fontaine - abc-paramount rpm single
burl ives - for the album my gal sal[ ]
doris day - included in her album the love album.

references
^ whitburn, joel. joel whitburn's pop memories. wisconsin, usa: record research inc. isbn  - - - .
^ "a bing crosby discography". bing magazine. international club crosby.
^ "perry como discography". kokomo.ca.
^ "jeanne black, 'oh, how i miss you tonight' chart position".
^ "cat.com". cat.com.
^ "discogs.com". discogs.com.
lovers in quarantine

directed by: frank tuttle
written by: townsend martin (scenario), luther reed (scenario)
based on: quarantine by f. tennyson jesse
produced by: adolph zukor, jesse lasky
starring: bebe daniels
cinematography: j. roy hunt
distributed by: paramount pictures
release date: october ,
running time: reels; , feet
country: united states
language: silent (english intertitles)

lovers in quarantine is an extant american silent comedy film starring bebe daniels and directed by frank tuttle. it was produced by famous players-lasky and distributed by paramount pictures. the film is based on a broadway play, quarantine, by f. tennyson jesse.[ ][ ][ ] the film entered the public domain on january , .[ ]

cast
bebe daniels as diana
harrison ford as anthony blunt
alfred lunt as mackintosh josephs
eden gray as pamela gordon
edna may oliver as amelia pincent
diana kane as lola
ivan f. simpson as the silent passenger
marie shotwell as mrs.
borroughs
gunboat smith as minor role (uncredited)

preservation
a print of lovers in quarantine is preserved at the library of congress.[ ][ ]

references
^ the american film institute catalog feature films, by the american film institute.
^ the afi catalog of feature films: lovers in quarantine.
^ progressive silent film list: lovers in quarantine at silentera.com.
^ duke university public domain day.
^ catalog of holdings: the american film institute collection and the united artists collection at the library of congress, by the american film institute.
^ the library of congress american silent feature film survival catalog: lovers in quarantine.

external links
lovers in quarantine at imdb
synopsis at allmovie
Catalog record: The Golden Cocoon; a novel | HathiTrust Digital Library

The Golden Cocoon; a novel, by Ruth Cross.
Main author: Cross, Ruth.
Language(s): English.
Published: New York, Harper & Brothers.
Edition: first edition.
Locate a print version: find in a library.
Original source: University of Michigan (full view).

Similar items:
- Golden Cocoon, by Lindstrom, Virginia K.
- Enchantment, a Novel, by Cross, Ruth
- The Steel Cocoon, a Novel, by Plagemann, Bentz
- Soldier of Good Fortune: An Historical Novel, by Cross, Ruth
- The Golden Poppy; a Novel, by Deprend, Jeffrey
- The Golden House. A Novel, by Warner, Charles Dudley
- The Golden Ones: A Novel, by Slaughter, Frank G. (Frank Gill)
- The Golden Bowl; a Novel, by Manfred, Frederick Feikema
- The Golden Honeycomb: A Novel, by Markandaya, Kamala

Data Unbound: helping organizations access and share data effectively. Special focus on web APIs for data integration.
Some of what I missed from the CMD-D automation conference

The CMD-D | Masters of Automation one-day conference in early August would have been right up my alley: "it'll be a full day of exploring the current state of automation technology on both Apple platforms, sharing ideas and concepts, and showing what's possible—all with the goal of inspiring and furthering development of your own automation projects." Fortunately, those of us who missed it can still get a meaty summary of the meeting by listening to the podcast segment Upgrade: "Masters of Automation" on Relay FM. I've been keen on automation for a long time now and was delighted to hear the panelists express their own enthusiasm for customizing their Macs, iPhones, and iPads to make repetitive tasks easier and less time-consuming. Noteworthy takeaways from the podcast include:

- Something that I hear and believe but have yet to experience in person: non-programmers can make use of automation through applications such as Automator (for macOS) and Workflow (for iOS). Also mentioned often as tools that are accessible to non-geeks: Hazel and Alfred, a productivity app for Mac OS X.
- Automation can make the lives of computer users easier, but it's not immediately obvious to many people exactly how. To make a lot of headway in automating your workflow, you need a problem that you are motivated to solve.
- Many people use AppleScript by borrowing from others, just as many learn HTML and CSS by copying, pasting, and adapting source on the web.
- Once you get a taste for automation, you will seek out applications that are scriptable and avoid those that are not. My question is how to make it easier for developers to make their applications scriptable without incurring onerous development or maintenance costs.
- E-book production is an interesting use case for automation.
- People have built businesses around scripting Photoshop. [Is there really a large enough market?]
- OmniGroup's automation model is well worth studying and using.

I hope there will be a conference next year to continue fostering this community of automation enthusiasts and professionals.

Raymond Yee | automation, macOS

Fine-tuning a Python wrapper for the Hypothes.is web API and other #ianno followup

In anticipation of the #ianno hack day, I wrote about my plans for the event, one of which was to revisit my own Python wrapper for the nascent Hypothes.is web API. Instead of spending much time on my own wrapper, I spent most of the day working with Jon Udell's wrapper for the API. I've been working on my own revisions of the library but haven't yet incorporated Jon's latest changes. One nice little piece of the puzzle is that I learned how to introduce retries and exponential backoff into the library, thanks to a hint from Nick Stenning and a nice answer on Stack Overflow.

Other matters

In addition to the Python wrapper, there are other pieces of follow-up for me. I hope to write more extensively on those matters down the road but will simply note the topics for the moment.

Videos from the conference

I might start by watching videos from the #ianno conference: I Annotate on YouTube. Because I didn't attend the conference per se, I might glean insight into two particular topics of interest to me: the role of the page owner in annotations, and the intermingling of annotations in ebooks.

An extension for embedding selectors in the URL

I will study and try treora/precise-links, a browser extension to support web annotation selectors in URIs.
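Stepping back to the retries and exponential backoff mentioned above: the pattern is simple enough to sketch with the standard library alone. This is a minimal illustration, not the actual code from either wrapper; the retry count, delay values, and choice of exception are assumptions.

```python
import time

def with_backoff(fn, retries=5, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying on failure with exponentially growing delays:
    base_delay, 2*base_delay, 4*base_delay, ... between attempts."""
    for attempt in range(retries):
        try:
            return fn()
        except OSError:              # e.g. a transient network error
            if attempt == retries - 1:
                raise                # out of retries: propagate the error
            sleep(base_delay * (2 ** attempt))

# Example: a flaky call that succeeds on the third try.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("temporary failure")
    return "ok"

delays = []  # inject a fake sleep so the example runs instantly
result = with_backoff(flaky, retries=5, base_delay=1.0, sleep=delays.append)
print(result)   # -> ok
print(delays)   # -> [1.0, 2.0]
```

Injecting `sleep` as a parameter keeps the function testable; production code would just use the default `time.sleep`.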
I've noticed that the same annotation is shown in two related forms:

https://hyp.is/zj dyi teeetmxvupjlhsw/blog.dataunbound.com/ / / /revisiting-hypothes-is-at-i-annotate- /
https://blog.dataunbound.com/ / / /revisiting-hypothes-is-at-i-annotate- /#annotations:zj dyi teeetmxvupjlhsw

Does the precise-links extension let me write the selectors into the URL?

Raymond Yee | annotation

Revisiting Hypothes.is at I Annotate

I'm looking forward to hacking on web and EPUB annotation at the #ianno hack day. I won't be at the I Annotate conference per se but will be curious to see what comes out of the annual conference. I continue to have high hopes for digital annotation, both on the web and in non-web digital contexts. I have used Hypothesis on and off since October. My experiences so far:

- I like the ability to highlight and comment on very granular sections of articles, something the Hypothes.is annotation tool makes easy to do.
- I appreciate being able to share an annotation or highlight with others (on Twitter or Facebook), though I'm pretty sure most people who bother to click on the links wonder "what's this?" when they do. A small user request: Hypothes.is should allow a user to better customize the Facebook preview image for an annotation.
- I've enjoyed using Hypothes.is for code review on top of GitHub. (Exactly how Hypothes.is complements the extensive code-commenting functionality in GitHub might be worth a future blog post.)

My plans for hack day

Python wrapper for Hypothes.is

This week, I plan to revisit rdhyee/hypothesisapi, a Python wrapper for the nascent Hypothes.is web API, to update it or abandon it in favor of new developments. (For example, I should look at kshaffer/pypothesis: Python scripts for interacting with the Hypothes.is API.)

EPUBs + annotations

I want to figure out the state of the art for EPUBs and annotations. I'm happy to see the announcement of a partnership to bring open annotation to ebooks from March.
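As background for the wrapper work mentioned above: the core of the public Hypothes.is web API is a search endpoint that takes simple query parameters. Here is a stdlib-only sketch of building such a request; the parameter values are illustrative, and a real wrapper would also perform the HTTP call, handle authentication, and paginate.

```python
from urllib.parse import urlencode

API_ROOT = "https://api.hypothes.is/api"

def search_url(uri=None, user=None, limit=20):
    """Build a URL for the Hypothes.is /search endpoint.
    Only the parameters that are set are included in the query string."""
    params = {"limit": limit}
    if uri:
        params["uri"] = uri
    if user:
        params["user"] = user
    return API_ROOT + "/search?" + urlencode(params)

print(search_url(uri="https://example.com/", limit=5))
# -> https://api.hypothes.is/api/search?limit=5&uri=https%3A%2F%2Fexample.com%2F
```

Keeping URL construction separate from the network call makes the wrapper easy to unit-test without hitting the live service.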
I'd definitely like to figure out how to annotate EPUBs (e.g., Oral Literature in Africa at unglue.it, or Moby Dick). The best approach is probably for me to wait until summer, at which time we'll see the fruits of the partnership: "Together, our goal is to complete a working integration of Hypothesis with both EPUB frameworks by summer. NYU plans to deploy the ReadiumJS implementation in the NYU Press Enhanced Networked Monographs site as a first use case. Based on lessons learned in the NYU deployment, we expect to see wider integration of annotation capabilities in ebooks as EPUB uptake continues to grow." In the meantime, I can catch up on the current state of futurepress/epub.js (enhanced ebooks in the browser), grok EPUB CFI updates, and relearn how to parse EPUBs using Python (e.g., rdhyee/epub_avant_garde, an experiment to apply ideas from https://github.com/sandersk/ebook_avant_garde to arbitrary EPUBs).

Role of page owners

I plan to check in on what's going on with efforts at Hypothes.is to involve owners in page annotations: "In the past months we launched a small research initiative to gather different points of view about website publishers' and authors' consent to annotation. Our goal was to identify different paths forward taking into account the perspectives of publishers, engineers, developers and people working on abuse and harassment issues. We have published a first summary of our discussion" on the Hypothes.is blog post about involving page owners in annotation. I was reminded of these efforts after reading that Audrey Watters had blocked annotation services like Hypothes.is and Genius from her domains: Un-Annotated: Marginalia. In the spirit of communal conversation, I threw in my two cents: have there been any serious explorations of easy opt-out mechanisms for domain owners? Something like robots.txt for annotation tools?
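On parsing EPUBs with Python, mentioned above: an EPUB is a zip archive whose META-INF/container.xml points at the package (.opf) file, so locating the package is the natural first step. A minimal stdlib sketch of that step follows; the sample XML and the OEBPS path are illustrative (in real use you would read the file with `zipfile.ZipFile(path).read("META-INF/container.xml")`).

```python
import xml.etree.ElementTree as ET

# The OCF container namespace used by META-INF/container.xml.
NS = {"c": "urn:oasis:names:tc:opendocument:xmlns:container"}

def opf_path(container_xml):
    """Given the text of META-INF/container.xml from an EPUB,
    return the path of the first package (.opf) rootfile."""
    root = ET.fromstring(container_xml)
    rootfile = root.find(".//c:rootfile", NS)
    return rootfile.get("full-path")

# Illustrative container.xml, as found inside a typical EPUB zip:
sample = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

print(opf_path(sample))  # -> OEBPS/content.opf
```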
Raymond Yee | annotation

My thoughts about fargo.io

Using fargo.io.

Raymond Yee | uncategorized

Organizing your life with Python: a submission for PyCon?

I have penciled into my calendar a trip to Montreal to attend PyCon. In my moments of suboptimal planning, I wrote an overly ambitious abstract for a talk or poster session I was planning to submit. As I sat down this morning to meet the deadline for submitting a proposal for a poster session, I once again encountered the ominous (but, for me, definitive) admonition: "Avoid presenting a proposal for code that is far from completion. The program committee is very skeptical of 'conference-driven development'." It's true: my efforts to organize my life with Python are in the early stages. I hope that I'll be able to write something like the following for PyCon.

Organizing your life with Python

David Allen's Getting Things Done (GTD) system is a popular system for personal productivity. Although GTD can be implemented without any computer technology, I have pursued two different digital implementations, including my current implementation using Evernote, the popular note-taking program. This talk explores using Python in conjunction with the Evernote API to implement GTD on top of Evernote. I have found that a major practical hindrance to using GTD is that it is way too easy to commit to too many projects. I will discuss how to combine Evernote, Python, and GTD with concepts from Personal Kanban to solve this problem.

Addendum: whoops… I find it embarrassing that I had already quoted my abstract in a previous blog post in September and had forgotten about it. Oh well. Where's my fully functioning organization system when I need it?

Tagged pycon, python | Raymond Yee | evernote, gtd

Current status of Data Unbound LLC in Pennsylvania

I'm currently in the process of closing down Data Unbound LLC in Pennsylvania.
I submitted the paperwork to dissolve the legal entity in April and have been amazed to learn that it may take up to a year to get the final approval. In the meantime, as I establish a similar California legal entity, I will certainly continue to write on this blog about APIs, mashups, and open data.

Raymond Yee | data unbound llc

Must get cracking on "Organizing your life with Python"

Talk and tutorial proposals for PyCon are due tomorrow. I was considering submitting a proposal until I took to heart the program committee's admonition against "conference-driven development." I will nonetheless use the October and November deadlines for lightning talks and proposals, respectively, to judge whether to submit a refinement of the following proposal idea:

Organizing your life with Python

David Allen's Getting Things Done (GTD) system is a popular system for personal productivity. Although GTD can be implemented without any computer technology, I have pursued two different digital implementations, including my current implementation using Evernote, the popular note-taking program. This talk explores using Python in conjunction with the Evernote API to implement GTD on top of Evernote. I have found that a major practical hindrance to using GTD is that it is way too easy to commit to too many projects. I will discuss how to combine Evernote, Python, and GTD with concepts from Personal Kanban to solve this problem.

Raymond Yee | getting things done, python

Embedding GitHub gists in WordPress

As I gear up to write more about programming, I have installed the Embed GitHub Gist plugin.
So by writing [gist id= ] in the text of this post, I can embed https://gist.github.com/rdhyee/ into the post to get:

from itertools import islice

def triangular():
    # generate the triangular numbers 1, 3, 6, 10, ...
    n = 1
    i = 1
    while True:
        yield n
        i += 1
        n += i

for i, n in enumerate(islice(triangular(), 10)):
    print i + 1, n

Tagged gist, github | Raymond Yee | wordpress

Working with open data

I'm very excited to be teaching a new course, Working with Open Data, at the UC Berkeley School of Information in the spring semester: "Open data — data that is free for use, reuse, and redistribution — is an intellectual treasure-trove that has given rise to many unexpected and often fruitful applications. In this course, students will 1) learn how to access, visualize, clean, interpret, and share data, especially open data, using Python, Python-based libraries, and supplementary computational frameworks and 2) understand the theoretical underpinnings of open data and their connections to implementations in the physical and life sciences, government, social sciences, and journalism."

Raymond Yee | uncategorized

A mundane task: updating a config file to retain old settings

I want to have a hand in creating an excellent personal information manager (PIM) that can be a worthy successor to Ecco Pro. So far, running EccoExt (a clever and expansive hack of Ecco Pro) has been an eminently practical solution. You can download the most recent version of this actively developed extension from the files section of the ecco_pro Yahoo! group. I would do so regularly, but one of the painful problems with unpacking (using unrar) the new files was that there wasn't an updater that would retain the configuration options of the existing setup. So a mundane but happy-making programming task for this afternoon was to write a Python script to do exactly that, making use of the built-in ConfigParser library.
""" compare eccoext.ini files my goal is to edit the new file so that any overlapping values take on the current value """ current_file_path = "/private/tmp/ /c/program files/ecco/eccoext.ini" new_file_path = "/private/tmp/ /c/utils/eccoext.ini" updated_file = "/private/tmp/ /c/utils/updated_eccoext.ini" # extract the key value pairs in both files to compare the two # http://docs.python.org/library/configparser.html import configparser def extract_values(fname): # generate a parsed configuration object, set of (section, options) config = configparser.safeconfigparser() options_set = set() config.read(fname) sections = config.sections() for section in sections: options = config.options(section) for option in options: #value = config.get(section,option) options_set.add((section,option)) return (config, options_set) # process current file and new file (current_config, current_options) = extract_values(current_file_path) (new_config, new_options) = extract_values(new_file_path) # what are the overlapping options overlapping_options = current_options & new_options # figure out which of the overlapping options are the values different for (section,option) in overlapping_options: current_value = current_config.get(section,option) new_value = new_config.get(section,option) if current_value != new_value: print section, option, current_value, new_value new_config.set(section,option,current_value) # write the updated config file with open(updated_file, 'wb') as configfile: new_config.write(configfile) raymond yee ecco pro python comments ( ) permalink « older posts pages about categories amazon annotation announcments apis architecture art history automation bibliographics bioinformatics bplan chickenfoot citizendium collaboration consulting copyright creative commons data mining data unbound llc digital scholarship ecco pro education evernote firefox flickr freebase getting things done google government gtd hardware hci higher education humanities imaging ischool journalism 
When will Ethereum 2.0 fully launch? Roadmap promises speed, but history says otherwise
Julia Magas | December

The new Ethereum 2.0 roadmap review: what updates have been added, and how soon can they be implemented?

Analysis

In December, shortly after the long-awaited release of Ethereum 2.0, platform founder Vitalik Buterin announced an updated roadmap. At first glance, it does not differ much from the previous version from March. However, it brought some clarity on current progress and further stages, giving grounds for estimating how soon a full-fledged transition to proof-of-stake and the launch of sharding can be expected. Just a spoiler: the full implementation of Ethereum 2.0 will not be coming soon.

Formally Ethereum 2.0, but not yet

Dec. 1 marked a pivotal event for the entire crypto industry, as the first block of the new Ethereum network was generated, the one developers had been preparing to see through for the past few years. Ethereum 2.0 is expected to become a super-fast, reliable version of the previous blockchain, all thanks to so-called sharding and the transition to the PoS consensus algorithm. In fact, the update that came out under the name Ethereum 2.0 is not entirely what its namesake claims to be, and the Beacon Chain, its first phase, is actually phase 0.
The Beacon Chain is needed exclusively for the development and testing of innovations that, if successful, will be introduced into the main Ethereum 2.0 network. Thus, the second upgrade is more fundamental, as the platform will finally let go of proof-of-work and will be fully supported by the stakers. Simply put, phase 0, aka the Beacon Chain, lays a basis for implementing staking and sharding in the next upgrade, or, as figuratively explained by the Ethereum team, serves as "a new engine" for the future spacecraft. Even though Ethereum formally switched to version 2.0, the network still depends on the computing power of miners. The developers also launched PoS in parallel, gradually recruiting the stakers necessary to ensure the stable operation of the network. Praneeth Srikanti, investment principal at ConsenSys Ventures, discussed with Cointelegraph the structure and functionality of the Beacon Chain: "The new Beacon Chain runs on Casper PoS for itself and the shard chains — and would ultimately be managing validators, choosing a block proposer for each shard and organizing validator groups (in the form of committees) for voting on the proposed blocks and managing consensus rules." Srikanti added that the PoS mechanism is already live on the Beacon Chain and that it requires attestations for shard blocks and PoS votes for the Beacon Chain blocks. The network is now ready enough for users to join and become validators. To do so, they need to have 32 Ether (ETH) in their accounts, locked against transfer and exchange until the network fully transitions to its new characteristics. The rewards that validators receive for supporting the new blockchain will also be locked until the release of the next phase, meaning that stakers will probably not be able to access their funds until that phase ships. Commenting on how the changes in the Ethereum 2.0 roadmap can affect stakers, Jay Hao, CEO of OKEx, told Cointelegraph: "While it does most likely mean that users will have to wait longer until they can withdraw their ETH from staking, there are still many advantages to staking ETH. To start with, stakers are supporting the move to Eth2 and the ETH community. They will earn generous rewards when they do withdraw and, it is always possible (especially in this fast-paced industry) that other solutions will appear that expedite this new timeline." The implementation of shards, another unique invention of Ethereum, thanks to which the network will be able to provide services to hundreds of millions of users, will also be available only in future versions of the blockchain. It's expected that there will be 64 of them in the new Ethereum network, with the Beacon Chain acting as a control blockchain. The paradox is that sharding is not applied to the Beacon Chain, which will actually be the focal point of the network.

Current progress

The Ethereum development team has been repeatedly criticized for missing deadlines and constantly delaying updates. So, what is the real state of affairs at this time? Judging by the progress bar that the developers of Ethereum have added to the new roadmap, the implementation of the second update is not expected anytime soon. Work on the most important task necessary for a full transition to the new network, namely the Eth1/Eth2 merge, is in its early stages, with only a small share of the work completed. Things are more positive on the sharding front, with about half of the work already done, judging by the progress bar. The good news is that the new roadmap is missing the later numbered phases that were present in previous versions of the document. This means that a full-fledged transition to the new network can be expected sooner and that the next phase will be the final one, combining all of the most important updates.
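The committee mechanism Srikanti describes can be pictured with a toy partition of validators into per-shard committees. This is purely illustrative: the real Beacon Chain uses a seeded shuffle defined in the consensus spec, and the counts below are made up, not protocol parameters.

```python
import random

def assign_committees(validators, num_shards, seed):
    """Toy model: shuffle validator indices deterministically,
    then deal them round-robin into one committee per shard."""
    rng = random.Random(seed)      # deterministic, seed-driven shuffle
    shuffled = list(validators)
    rng.shuffle(shuffled)
    committees = [[] for _ in range(num_shards)]
    for i, v in enumerate(shuffled):
        committees[i % num_shards].append(v)
    return committees

# 128 hypothetical validators dealt into 4 committees of 32 each.
committees = assign_committees(range(128), num_shards=4, seed=42)
print(len(committees))                   # -> 4
print(sum(len(c) for c in committees))   # -> 128
```

The deterministic seed mirrors the idea that every node must compute the same committee assignment from shared randomness.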
Earlier, it was expected that shard chains would appear in phase 1, and only after that, in the second phase, would SNARK/STARK transactions become possible. Now, all of these updates are expected to be launched under the next phase, and some progress has already been made toward that end. The organization of the teams' work has also changed from step-by-step to parallel. The new roadmap suggests that the execution of each task is organized autonomously and is not disrupted in the event of difficulties with the other segments. In other words, different teams can work on different tasks at the same time, which may speed up the transition to the new network. Some of the tasks can be expected soon, as indicated by the roadmap. In particular, the developers have already done the bulk of the work on implementing EIP-1559, aimed at stabilizing the cost of commissions on the network and repricing gas. In addition, the Ethereum Virtual Machine, which will allow for faster operations, is in the process of transitioning to a more advanced version called Ethereum-flavored WebAssembly, or eWASM. Interestingly, eWASM is the only major implementation missing from the new roadmap. It will probably come as part of the upgrade called "VM upgrades," and its implementation will not be carried out in the next phase. It's expected that eWASM will manage the work of smart contracts and make the network more decentralized. Layer-two solutions advancing scalability and security, such as SNARK/STARK operations, post-quantum crypto and the launch of CBC Casper, an improved version of the protocol that will mark the final transition of the network to the staking model, remain among the solutions that are likely to appear much later on.

When will Eth2 fully launch?

Looking at how fast relevant updates were implemented against previous versions of the Ethereum roadmaps, it turns out that the planned and actual release dates are about a year apart, at the very minimum.
Thus, for example, according to estimates made in May by developers at ConsenSys, the large blockchain software company, the release of the Beacon Chain blockchain was supposed to have happened well before it actually did. As for the eWASM release, the full-scale launch of the machine is planned for the years ahead, a time frame that, once the typical slippage is added, coincides with the one set by the Ethereum developer team for the Ethereum 2.0 mainnet release. Still, the full scope of work that needs to be done before the Ethereum 2.0 blockchain becomes fully complete makes predictions challenging. Meanwhile, some suggest that the upgrade releases could be delayed for an even longer period of time. YouTube crypto blogger Boxmining recommended adding one to two years to the previous estimates, suggesting that the market will see Casper and sharding in full glory only years from now. A more pessimistic forecast suggests that it might take even longer before the market sees the final version of Ethereum 2.0. Himanshu Bisht, marketing head at Razor Network, which operates on a PoS consensus algorithm, told Cointelegraph that such a timeframe is realistic: "Mainnet Ethereum will need to 'merge' with the Beacon Chain at some point. This will be the start of a new phase of the Ethereum ecosystem in a true sense. However, we might not be able to see this before February." Nir Kshetri, a professor at the University of North Carolina at Greensboro and a research fellow at Kobe University, agreed that the Ethereum 2.0 transition is likely to take a fair bit of time. According to him, the EVM upgrade is a challenging process, as he further told Cointelegraph: "Organizations are likely to be effectively locked in EVM and it is difficult to break the self-reinforcing mechanism. There are already millions of existing smart contracts and enormous amounts of tools and languages, optimizations.
On top of that, convincing Ethereum users that the PoS is safe and secure is a challenge of another magnitude." Paolo Ardoino, chief technology officer of crypto exchange Bitfinex, told Cointelegraph that the full transition to Ethereum 2.0 could take three years, although he doesn't rule out faster development: "I think that after this initial phase, it is likely that the pace of Ethereum 2.0 development will improve over the coming year. We wonder if the full Ethereum 2.0 transition will be complete up to three years from now, but we expect token transfers will likely be available earlier than that." On the other hand, a streamlined organization of Ethereum client operations and the work of developers, as well as immense assistance from the community, could significantly shorten the roadmap's time frame. In general, as the Beacon Chain explorer shows, the deployment of the new PoS network is proceeding successfully. At the moment, thousands of users have become stakers, locking up a substantial amount of ETH so far.

#Proof-of-Stake #Ethereum
The Dying Swan

From Wikipedia, the free encyclopedia.

The Dying Swan (originally The Swan) is a solo dance choreographed by Mikhail Fokine to Camille Saint-Saëns's Le Cygne from Le Carnaval des animaux as a pièce d'occasion for the ballerina Anna Pavlova, who performed it about 4,000 times. The short ballet follows the last moments in the life of a swan, and was first presented in St. Petersburg, Russia. It has since influenced modern interpretations of Odette in Tchaikovsky's Swan Lake and has inspired non-traditional interpretations as well as various adaptations. (Pictured: Anna Pavlova in costume for The Dying Swan, Buenos Aires, Argentina. Genre: romantic; type: classical ballet.)

Background

Inspired by swans that she had seen in public parks and by Lord Tennyson's poem "The Dying Swan," Anna Pavlova, who had just become a ballerina at the Mariinsky Theatre, asked Michel Fokine to create a solo dance for her for a gala concert being given by artists from the chorus of the Imperial Mariinsky Opera. Fokine suggested Saint-Saëns's cello solo, Le Cygne, which Fokine had been playing at home on a mandolin to a friend's piano accompaniment, and Pavlova agreed. A rehearsal was arranged and the short dance was completed quickly. Fokine remarked in Dance Magazine (August): "It was almost an improvisation. I danced in front of her, she directly behind me. Then she danced and I walked alongside her, curving her arms and correcting details of poses."
"Prior to this composition, I was accused of barefooted tendencies and of rejecting toe dancing in general. The Dying Swan was my answer to such criticism. This dance became the symbol of the new Russian Ballet. It was a combination of masterful technique with expressiveness. It was like a proof that the dance could and should satisfy not only the eye, but through the medium of the eye should penetrate the soul."

Fokine later told dance critic Arnold Haskell: "Small work as it is, [...] it was 'revolutionary' then, and illustrated admirably the transition between the old and the new, for here I make use of the technique of the old dance and the traditional costume, and a highly developed technique is necessary, but the purpose of the dance is not to display that technique but to create the symbol of the everlasting struggle in this life and all that is mortal. It is a dance of the whole body and not of the limbs only; it appeals not merely to the eye but to the emotions and the imagination."

Plot summary

The ballet was first titled The Swan but acquired its current title following Pavlova's interpretation of the work's dramatic arc as the end of life. The dance is composed principally of upper-body and arm movements and tiny steps called pas de bourrée suivi. French critic André Levinson wrote: "Arms folded, on tiptoe, she dreamily and slowly circles the stage. By even, gliding motions of the hands, returning to the background from whence she emerged, she seems to strive toward the horizon, as though a moment more and she will fly—exploring the confines of space with her soul. The tension gradually relaxes and she sinks to earth, arms waving faintly as in pain."
Then faltering with irregular steps toward the edge of the stage—leg bones quiver like the strings of a harp—by one swift forward-gliding motion of the right foot to earth, she sinks on the left knee—the aerial creature struggling against earthly bonds; and there, transfixed by pain, she dies.[ ]

Performances and critical commentary

The Dying Swan was first performed by Pavlova at a gala at the Noblemen's Hall in St. Petersburg, Russia on Friday, December , .[ ] It was first performed in the United States at the Metropolitan Opera House in New York City on March , . American dance critic and photographer Carl Van Vechten noted that the ballet was "the most exquisite specimen of [Pavlova's] art which she has yet given to the public."[ ] Pavlova performed the piece approximately , times,[ ] and on her deathbed in The Hague reportedly cried, "Prepare my swan costume."[ ][ ]

Fokine's granddaughter, Isabelle, notes that the ballet does not make "enormous technical demands" on the dancer, but it does make "enormous artistic ones, because every movement and every gesture should signify a different experience," which is "emerging from someone who is attempting to escape death." She notes that modern performances are significantly different from her grandfather's original conception, and that the dance today is often made to appear to be a variation of Swan Lake, which she describes as "Odette at death's door." Isabelle says that the ballet is not about a ballerina being able to transform herself into a swan, but about death, with the swan as a metaphor.[ ]

Legacy

[Video clips: The Dying Swan danced by Anna Pavlova, Yvette Chauviré, and Natalia Makarova.]

Pavlova was recorded dancing The Dying Swan in a silent film, to which sound is often added. The short ballet has influenced interpretations of Odette in Tchaikovsky's Swan Lake, particularly during the parting of the lovers in the first lakeside scene.[ ] The dance was almost immediately adapted by various ballerinas internationally. As a result, Fokine published an official version of the choreography in , highlighted with photographs of his wife, Vera Fokina, demonstrating the ballet's sequential poses. At a later date, the Kirov-trained Natalia Makarova commented:

Of Fokine's original choreography [...] only scattered fragments remain [...] He created only the bourrées [a walking or running ballet step usually executed on the points of the toes] for Pavlova. Subsequently, every performer [...] has used the piece at her own taste and at her own risk [...] In Russia I had danced Dudinskaya's version and [...] experienced a certain discomfort [...] from all the sentimental stuff—the rushing around the stage, the flailing of the arms [...] To the contemporary eye, its conventions look almost ludicrous [...] The dance needs total emotional abandon, conveying the image of a struggle with death or a surrender to it [...] As for the emotional content, I was helped by Pavlova, whose film of the work I saw. Even today, her swan is striking—the flawless feeling for style, the animated face—although certain melodramatic details seem superfluous.[ ]

The ballet has been variously interpreted and adapted. The Russian film The Dying Swan by director Yevgeni Bauer is the story of an artist who strangles a ballerina.[ ] Maya Plisetskaya interpreted the swan as elderly and stubbornly resisting the effects of aging, much like herself. Eventually, the piece came to be considered one of Pavlova's trademarks.[ ] More recently, Les Ballets Trockadero de Monte Carlo has performed a parody version that emphasizes every excess dormant in the choreography.[ ] In , street-theatre artist Judith Lanigan created a hula-hoop adaptation that has been performed at international street theatre festivals, comedy and burlesque events, and in traditional and contemporary circuses.[ ]

Several figure skaters have performed The Dying Swan with skate choreography inspired by the ballet. Olympic bronze medallist Maribel Vinson reviewed Sonja Henie's professional debut for the New York Times, noting:

The crowd settled quickly into a receptive mood for Sonja's famous interpretation of the dying swan of Saint-Saëns. With spotlights giving the ice the effect of water at night, Miss Henie, outlined in a blue light, performed the dance made immortal by Pavlova. Whether one agrees that such posturing is suited to the medium of ice, there is no doubt that Miss Henie's rendition is a lovely thing. Too much toe work at the start leaves the feeling that this does not belong to skating, but when she glides effortlessly back and forth, she is free as a disembodied spirit and there is an ease of movement that ballet never can produce.[ ]

Some ballerinas, including Ashley Bouder of New York City Ballet and Nina Ananiashvili, formerly of American Ballet Theatre and the Bolshoi Ballet, have used Dying Swan arms in Swan Lake when making Odette's exit at the end of Act II (the first lakeside scene).[ ] Ogden Nash, in his "Verses for Camille Saint-Saëns' 'Carnival of the Animals'", mentions Pavlova:

The swan can swim while sitting down,
For pure conceit he takes the crown,
He looks in the mirror over and over,
And claims to have never heard of Pavlova.
In response to the impact of the – coronavirus pandemic on the performing arts, Carlos Acosta, artistic director of the Birmingham Royal Ballet, adapted Fokine's choreography with the ballerina raising her head at the end instead, with Céline Gittens, principal dancer of the company, and the musicians performing in their respective homes.[ ] Misty Copeland, principal dancer with American Ballet Theatre, invited other dancers to dance the Swan to raise funds for the relief funds of the participating dancers' companies and other related funds.[ ]

See also

Alicia Markova
"The Dying Swan" (painting)

References

Notes

^ Balanchine & Mason, p. .
^ Balanchine & Mason, pp. –.
^ Balanchine & Mason, p. .
^ Gerskovic, p. .
^ Oxford Dictionary of Dance.
^ McCauley, p. .
^ Gerskovic, p. .
^ Carter, p. .
^ Aloff, pp. –.
^ Youngblood, p. .
^ Garafola, pp. –.
^ Les Ballets Trockadero de Monte Carlo.
^ Judith Lanigan.
^ SkateWeb's historical skating pictures.
^ Smodyrev biography.
^ Winship, Lyndsey ( April). "The swan: three minutes of dance to soothe the soul in lockdown". The Guardian.
^ Stahl, Jennifer ( May). " Ballerinas from Around the World Perform 'The Dying Swan' for COVID-19 Relief". Dance Magazine.

References

Aloff, Mindy. Dance Anecdotes: Stories from the World of Ballet, Broadway, the Ballroom, and Modern Dance. Oxford: Oxford University Press.
Balanchine, George; Mason, Francis. Stories of the Great Ballets. New York: Anchor Books.
Carter, Alexandra. Rethinking Dance History: A Reader. London: Routledge.
Garafola, Lynn. Legacies of Twentieth-Century Dance. New York: Wesleyan University Press.
Gerskovic, Robert. Ballet : A Complete Guide to Learning and Loving the Ballet. Pompton Plains, NJ: Limelight Editions.
McCauley, Martin. Who's Who in Russia Since . London and New York: Routledge.
Youngblood, Denise Jeanne. The Magic Mirror: Moviemaking in Russia, –. Madison, WI: University of Wisconsin Press.

Further reading

"The Dying Swan". Oxford Dictionary of Dance. Oxford University Press.
"Les Ballets Trockadero de Monte Carlo". glbtq, Inc. Archived from the original.
"Judith Lanigan".
"Sonja Henie Does the Swan Thing". SkateWeb's Historical Skating Pictures.
Smodyrev, Mikhail; translated by Anna Korisch. "Nina Ananiashvili's Biography and Repertory".

External links

Wikimedia Commons has media related to Le Cygne.
"Pas de bourrée". Merriam-Webster Online.
"The Dying Swan" by Tennyson (complete text).

This page was last edited on April , at : (UTC). Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply.
Low-code development platform
From Wikipedia, the free encyclopedia

A low-code development platform (LCDP) provides a development environment used to create application software through a graphical user interface instead of traditional hand-coded computer programming. A low-code platform may produce entirely operational applications, or require additional coding for specific situations. Low-code development platforms reduce the amount of traditional hand coding, enabling accelerated delivery of business applications. A common benefit is that a wider range of people can contribute to the application's development, not only those with coding skills. LCDPs can also lower the initial cost of setup, training, deployment, and maintenance.[ ]

Low-code development platforms trace their roots back to fourth-generation programming languages and the rapid application development tools of the s and early s. Like these predecessor development environments, LCDPs are based on the principles of model-driven design, automatic code generation, and visual programming.[ ] The concept of end-user development also existed previously, although LCDPs brought some new ways of approaching it. The low-code development platform market traces its origins back to .[ ] The specific name "low-code" was not put forward until June ,[ ] when it was used by the industry analyst firm Forrester Research. Along with no-code development platforms, low-code was described as "extraordinarily disruptive" in Forbes magazine in .[ ]

Use

As a result of the microcomputer revolution, businesses have deployed computers widely across their employee bases, enabling widespread automation of business processes using software. The need for software automation and new applications for business processes places demands on software developers to create custom applications in volume, tailoring them to organizations' unique needs.[ ] Low-code development platforms were developed as a means to allow for the quick creation and use of working applications that can address the specific process and data needs of the organization.[ ]

Reception

Research firm Forrester estimated in that the total market for low-code development platforms would grow to $ . billion by .[ ] Segments in the market include database, request handling, mobile, process, and general-purpose low-code platforms.[ ] Low-code development's market growth can be attributed to its flexibility and ease.[ ] Low-code development platforms are shifting focus toward general-purpose applications, with the ability to add custom code when needed or desired.[ ] Mobile accessibility is one of the driving factors of using low-code development platforms:[ ] instead of developers having to spend time creating multi-device software, low-code packages typically come with that feature standard.[ ] Because they require less coding knowledge, nearly anyone in a software development environment can learn to use a low-code development platform.
Features like drag-and-drop interfaces help users visualize and build the application.[ ]

Security and compliance concerns

Concerns over low-code development platform security and compliance are growing, especially for apps that use consumer data. There can be concerns over the security of apps built so quickly, and a possible lack of due governance can lead to compliance issues.[ ] However, low-code apps also fuel security innovations: with continuous app development in mind, it becomes easier to create secure data workflows. Still, low-code development platforms that do not strictly adhere to normalized systems theory[ ] do not solve the challenge of increasing complexity due to change.[ ]

Analyst coverage and crowd evaluation

A Forrester report about low-code development platforms ("The Forrester Wave™: Low-Code Development Platforms, Q ") featured a -criteria evaluation of low-code development platform providers.[ ] An updated Forrester report charting the growth of the low-code market was published in July (Vendor Landscape: A Fork in the Road for Low-Code Development Platforms), highlighting industry trends:[ ]

Growth: the low-code market is forecast to increase to over $ billion over the next five years.
Diversification: two major developing market segments focus on the needs of business ("citizen") developers and of AD&D (app dev) professionals.
Integration: as adoption of low-code expands and businesses look toward technologies like AI, robotics, and machine learning, solutions must grow to offer these capabilities.

A G Crowd report about low-code development platforms evaluated market share and user reviews for products.[ ] Forrester published an updated report in August . The report covers key trends, including the continuing adoption of low-code platforms by enterprise companies and the merging of low-code platforms with existing developer tools into a broader application-development ecosystem.[ ]

Criticisms

Some IT professionals question whether low-code development platforms are suitable for large-scale and mission-critical enterprise applications.[ ] Others have questioned whether these platforms actually make development cheaper or easier.[ ] Additionally, some CIOs have expressed concern that adopting low-code development platforms internally could lead to an increase in unsupported applications built by shadow IT.[ ]

Low-code vs. no-code

Main article: No-code development platform

No-code development platforms are similar to low-code development platforms but require no coding at all.[ ] The line between the two is not sharp; however, there are a number of key differences:

App creator: no-code platforms are accessible to any end business user, while low-code platforms require professional developers who can work within the platform's constraints.
Core design: no-code platforms tend to function off a model-driven, declarative approach, where the end user dictates an app's design through drag-and-drop manipulation or simple expressions. Low-code platforms depend more on hand coding to specify an application's core architecture.[ ]
User interface: no-code platforms most often rely on a preset user-interface layer, which simplifies and streamlines an app's design. Low-code platforms may provide greater flexibility in UI options at the cost of additional coding and complexity requirements.[ ]

See also

DRAKON; end-user computing; end-user development; flow-based programming; list of online database creator apps; list of low-code development platforms; visual programming language

References

^ Richardson, Clay (June , ). "New development platforms emerge for customer-facing applications". www.forrester.com.
^ Lonergan, Kevin ( July). "On the down low: why CIOs should care about low-code". Information Age.
^ Marvin, Rob ( August). "How low-code development seeks to accelerate software delivery". SD Times.
^ Bloomberg, Jason. "The low-code/no-code movement: more disruptive than you realize". www.forbes.com.
^ Marvin, Rob. "Building an app with no coding: myth or reality?". PC Magazine.
^ http://www.zdnet.com/article/developers-were-on-board-with-low-code-tools/
^ Richardson, Clay. "Vendor landscape: the fractured, fertile terrain of low-code application platforms" (PDF). Forrester Research.
^ Hammond, Jeffrey. "The Forrester Wave™: Mobile Low-Code Platforms for Business Developers, Q ". Forrester Research.
^ Rubens, Paul. "Use low-code platforms to develop the apps customers want". CIO Magazine.
^ Mannaert, Herwig; Verelst, Jan; De Bruyn, Peter. Normalized Systems Theory: From Foundations for Evolvable Software Toward a General Theory for Evolvable Design.
^ Richardson, Clay. "The Forrester Wave™: Low-Code Development Platforms, Q ". Forrester Research.
^ Rymer, John ( July). "Vendor landscape: a fork in the road for low-code development platforms". Forrester Research.
^ "Archived copy" (archived).
^ Hammond, Jeffrey. "The Forrester Wave™: Mobile Low-Code Platforms for Business Developers, Q ". Forrester Research.
^ Rymer, John. "Low-code platforms deliver customer-facing apps fast, but can they scale up?". Forrester Research.
^ Reselman, Bob. "Why the promise of low-code software platforms is deceiving". TechTarget.
^ Shore, Joel ( July). "How no-code development tools can benefit IT". TechTarget.
^ Rouse, Margaret. "Low-code/no-code development platform (LCNC platform)". www.techtarget.com.
^ Woods, Dan. "When no code makes sense for legacy app migration". www.forbes.com.

External links

Pattani, Aneri ( November). "A coding revolution in the office cube sends message of change to IT". CNBC.
"The CRAAP Test" by Sarah Blakeslee
LOEX Quarterly, Vol. , No. (Fall ).

Recommended citation: Blakeslee, Sarah ( ). "The CRAAP Test". LOEX Quarterly: Vol. , No. , Article . Available at: https://commons.emich.edu/loexquarterly/vol /iss /
The Thingology Blog

Monday, April th,
New Syndetics Unbound feature: Mark and Boost Electronic Resources

ProQuest and LibraryThing have just introduced a major new feature to our catalog-enrichment suite, Syndetics Unbound, to meet the needs of libraries during the COVID-19 crisis. Our friends at ProQuest blogged about it briefly on the ProQuest blog. This blog post goes into greater detail about what we did, how we did it, and what efforts like this may mean for library catalogs in the future.

What it does

The feature, "Mark and Boost Electronic Resources," turns Syndetics Unbound from a general catalog-enrichment tool into one focused on your library's electronic resources—the resources patrons can access during a library shutdown. We hope it encourages libraries to continue to promote their catalog, the library's own and most complete collection repository, instead of sending patrons to a host of partial, third-party e-resource platforms.

The new feature marks the library's electronic resources and "boosts," or promotes, them in Syndetics Unbound's discovery enhancements, such as "You May Also Like," "Other Editions," "Tags," and "Reading Levels." Here's a screenshot showing the feature in action.

How it works

The feature is composed of three settings. By default, they all turn on together, but they can be independently turned off and on:

Boost Electronic Resources chooses to show electronic editions of an item where they exist, and boosts such items within discovery elements.
Mark Electronic Resources marks all electronic resources—ebooks, e-audio, and streaming video—with an "E" icon.
Add Electronic Resources Message adds a customizable message to the top of the Syndetics Unbound area.
"Mark and Boost Electronic Resources" works across all enrichments. It is particularly important for "Also Available As," which lists all the other formats for a given title: enabling the feature sorts electronic resources to the front of that list. We also suggest that, for now, libraries may want to put "Also Available As" at the top of their enrichment order.

Why we did it

Your catalog is only as good as your holdings. Faced with a world in which physical holdings are off-limits and electronic resources essential, many libraries have discouraged use of the catalog, which is dominated by non-digital resources, in favor of linking directly to OverDrive, Hoopla, Freegal, and so forth. Unfortunately, these services are silos, containing only what you bought from that particular vendor. "Mark and Boost Electronic Resources" turns your catalog toward digital resources while preserving what makes a catalog important: a single point of access to all library resources, not a vendor silo.

Maximizing your electronic holdings

To make the best use of "Mark and Boost Electronic Resources," we need to know about all your electronic resources. Unfortunately, some systems separate MARC holdings and electronic holdings; all resources appear in the catalog, but only some are available for export to Syndetics Unbound. Other libraries send us holdings files with everything, but are unable to send us updates every time new electronic resources are added. To address this, we have also introduced a new feature, "Auto-Discover Electronic Holdings." Turn it on and we build up an accurate representation of your library's electronic resource holdings, without requiring any effort on your part.

Adapting to change

"Mark and Boost Electronic Resources" is our first feature change to address the current crisis, but we plan to do more, and to adapt the feature over time as the situation develops. We are eager to get feedback from librarians and patrons!
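To picture how the three settings and the "boost" behavior fit together, here is a minimal sketch. Everything in it is invented for illustration: the setting names, the `orderEditions` helper, and the edition records are not the actual Syndetics Unbound configuration or API (the real feature is toggled in the Syndetics Unbound admin, not in code).

```javascript
// Hypothetical names for the three toggles described above.
const settings = {
  boostElectronicResources: true, // sort e-editions to the front of lists
  markElectronicResources: true,  // show an "E" icon on e-resources
  eResourcesMessage: "Looking for e-books? They're one click away." // banner text
};

// Sketch of "boosting" in an enrichment like "Also Available As":
// electronic formats sort to the front; otherwise the order is unchanged.
// Array.prototype.sort is stable, so tied items keep their original order.
function orderEditions(editions, opts) {
  if (!opts.boostElectronicResources) return editions.slice();
  return editions
    .slice()
    .sort((a, b) => Number(b.electronic) - Number(a.electronic));
}

const editions = [
  { format: "Hardcover", electronic: false },
  { format: "eBook", electronic: true },
  { format: "Audio CD", electronic: false },
  { format: "eAudio", electronic: true }
];

console.log(orderEditions(editions, settings).map(e => e.format));
// → [ "eBook", "eAudio", "Hardcover", "Audio CD" ]
```

The design point is that boosting reorders rather than hides: print holdings stay in the list, so the catalog remains a single point of access to everything the library owns.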
— The ProQuest and LibraryThing teams

Labels: new features, new product, Syndetics Unbound
Posted by Tim

Thursday, October th,
Introducing Syndetics Unbound

Short version

Today we're going public with a new product for libraries, jointly developed by LibraryThing and ProQuest. It's called Syndetics Unbound, and it makes library catalogs better, with catalog enrichments that provide information about each item and jumping-off points for exploring the catalog. To see it in action, check out the Hartford Public Library in Hartford, CT. Here are some sample links: The Raven Boys by Maggie Stiefvater, Alexander Hamilton by Ron Chernow, Faithful Place by Tana French. We've also got a press release and a nifty marketing site.

Update: Webinars every week! We're now holding weekly webinars, in which you can learn all about Syndetics Unbound and ask us questions. Visit ProQuest's WebEx portal to see the schedule and sign up.

Long version

The basic idea

Syndetics Unbound aims to make patrons happier and increase circulation. It works by enhancing discovery within your OPAC, giving patrons useful information about books, movies, music, and video games, and helping them find other things they like. This means adding elements like cover images, summaries, recommendations, series, tags, and both professional and user reviews.

In one sense, Syndetics Unbound combines products—the ProQuest product Syndetics Plus and the LibraryThing products LibraryThing for Libraries and Book Display Widgets. In a more important sense, however, it leaps forward from these products to something new, simple, and powerful. New elements were invented. Static elements have become newly dynamic. Buttons provide deep dives into your library's collection. And—we think—everything looks better than anything Syndetics or LibraryThing have done before! (That's one of only two exclamation points in this blog post, so we mean it.)
Simplicity

Syndetics Unbound is a complete and unified solution, not a menu of options spread across one or even multiple vendors. This simplicity starts with the design, which is made to look good out of the box, already configured for your OPAC and look. The installation requirements are minimal: if you already have Syndetics Plus or LibraryThing for Libraries, you're all set. If you've never been a customer, you only need to add a line of HTML to your OPAC and upload your holdings.

Although it's simple, we didn't neglect options. Libraries can reorder elements, or drop them entirely. We expect libraries will pick and choose, and evaluate elements according to patron needs or feedback from our detailed usage stats. Libraries can also tweak the look and feel with custom CSS stylesheets. And simplicity is cheap: to assemble a not-quite-equivalent bundle from ProQuest's and LibraryThing's separate offerings would cost far more. We want everyone who has Syndetics Unbound to have it in its full glory.

Comprehensiveness and enrichments

Syndetics Unbound enriches your catalog with some sixteen enrichments, but the number is less important than the options they encompass. These include both professional and user-generated content, information about the item you're looking at, and jumping-off points to explore similar items. Quick descriptions of the enrichments:

Covers. A premium cover service: Syndetics offers the most comprehensive cover database in existence for libraries—over million full-color cover images for books, videos, DVDs, and CDs, with thousands of new covers added every week. For Syndetics Unbound, we added boilerplate covers for items that don't have one, showing the title, author, and media type.

Summaries. Over million essential summaries and annotations, so patrons know what the book's about.

About the Author.
This section includes the author biography and a small shelf of other items by the author. The section is also adorned by a small author photo—a first in the catalog, although familiar elsewhere on the web.

Look Inside. Includes three previous Syndetics enrichments—first chapters or excerpts, tables of contents, and large-size covers—newly presented as a "peek inside the book" feature.

Series. Shows a book's series, including reading order. If the library is missing part of the series, those covers are shown but grayed out.

You May Also Like. Provides sharp, on-the-spot readers' advisory in your catalog, with the option to browse a larger world of suggestions, drawn from LibraryThing members and big-data algorithms. In this and other enrichments, Syndetics Unbound only recommends items that your library owns. The Syndetics Unbound recommendations cover far more of your collection than any similar service; for example, statistics from the Hartford Public Library show this feature on % of items viewed.

Professional Reviews. Includes more than . million reviews from Library Journal, School Library Journal, the New York Times, the Guardian, the Horn Book, Booklist, Bookseller + Publisher Magazine, Choice, Publishers Weekly, and Kirkus. À la carte review sources include Voice of Youth Advocates (VOYA), Doody's Medical Reviews, and Quill and Quire.

Reader Reviews. Includes more than . million vetted reader reviews from LibraryThing members. It also allows patrons and librarians to add their own ratings and reviews, right in your catalog, and then showcase them on a library's home page and social media.

Also Available As. Helps patrons find other available formats and versions of a title in your collection, including paper, audio, ebook, and translations.

Tags. Rethinks LibraryThing's celebrated tag clouds—redesigning them toward simplicity and consistency, and away from the "ransom note" look of most clouds.
As data, tags are based on over million tags created by LibraryThing members, hand-vetted by our staff librarians for quality. A new exploration interface allows patrons to explore what LibraryThing calls "tag mashes"—finding books by combinations of tags—in a simple, faceted way. I'm going to be blogging about the redesign of tag clouds in the near future; considering dozens of designs, we decided on a clean break with the past. (I expect it will get some reactions.)

Book Profile. A newly dynamic version of what Bowker has done for years—analyzing thousands of new works of fiction, short-story collections, biographies, autobiographies, and memoirs annually. Now every term is clickable, and patrons can search and browse over one million profiles.

Reading Levels. A newly dynamic way to see and explore other books in the same age and grade range. Reading Levels also includes MetaMetrics' Lexile® Framework for Reading. Click the "More" button to get a new, super-powered reading-level explorer. This is one of my favorite features! (Second and last exclamation point.)

Awards. Highlights the awards a title has won, and helps patrons find highly awarded books in your collection. Includes biggies like the National Book Award and the Booker Prize, but also smaller awards like the Bram Stoker Award and Oklahoma's Sequoyah Book Award.

Browse Shelf. Gives your patrons the context and serendipity of browsing a physical shelf, using your call numbers. Includes a mini shelf-browser that sits on your detail pages, and a full-screen version launched from the detail page.

Video and Music. Adds summaries and other information for more than four million video and music titles, including annotations, performers, track listings, release dates, genres, keywords, and themes.

Video Games. Provides game descriptions, ESRB ratings, star ratings, system requirements, and even screenshots.

Book Display Widgets.
finally, syndetics unbound isn’t limited to the catalog, but includes the librarything product book display widgets—virtual book displays that go on your library’s homepage, blog, libguides, facebook, twitter, pinterest, or even in email newsletters. display widgets can be filled with preset content, such as popular titles, new titles, dvds, journals, series, awards, tags, and more. or you can point them at a web page, rss feed, or list of isbns, upcs, or issns. if your data is dynamic, the widget updates automatically. here’s a page of book display widget examples.

find out more. made it this far? you really need to see syndetics unbound in action. check it out. again, here are some sample links of syndetics unbound at hartford public library in hartford, ct: the raven boys by maggie stiefvater, alexander hamilton by ron chernow, faithful place by tana french.

webinars. we hold webinars every tuesday and walk you through the different elements and answer questions. to sign up for a webinar, visit this webex page and search for “syndetics unbound.”

interested in syndetics unbound at your library? go here to contact a representative at proquest, read more at the syndetics unbound website, or email us at ltflsupport@librarything.com and we’ll help you find the right person or resource.

labels: librarything for libraries, new feature, new features, new product posted by tim @ : am comments » share

thursday, january th, alamw in boston (and free passes)!

abby and kj will be at ala midwinter in boston this weekend, showing off librarything for libraries. since the conference is so close to librarything headquarters, chances are good that a few other lt staff members may appear as well!

visit us. stop by booth # to meet abby & kj (and potential mystery guests!), get a demo, and learn about all the new and fun things we’re up to with librarything for libraries, tinycat, and librarything.

get in free. are you in the boston area and want to go to alamw?
we have free exhibit-only passes. click here to sign up and get one! note: it will get you just into the exhibit hall, not the conference sessions themselves.

labels: uncategorized posted by kate @ : pm comments » share

thursday, june th, for ala : three free opac enhancements

for a limited time, librarything for libraries (ltfl) is offering three of its signature enhancements for free! there are no strings attached. we want people to see how librarything for libraries can improve your catalog.

check library. the check library button is a “bookmarklet” that allows patrons to check if your library has a book while on amazon and most other book websites. unlike other options, librarything knows all of the editions out there, so it finds the edition your library has. learn more about check library.

other editions. let your users know everything you have. don’t let users leave empty-handed when the record that came up is checked out. other editions links all your holdings together in a frbr model—paper, audiobook, ebook, even translations.

lexile measures. put metametrics’ lexile framework® for reading in your catalog, to help librarians and patrons find material based on reading level. in addition to showing the lexile numbers, we also include an interactive browser.

easy to add. ltfl enhancements are easy to install and can be added to every major ils/opac system and most of the minor ones. enrichments can be customized and styled to fit your catalog, and detailed usage reporting lets you know how they’re doing.

see us at ala. stop by booth at ala annual this weekend in san francisco to talk to tim and abby and see how these enhancements work. if you need a free pass to the exhibit hall, details are in this blog post.

sign up. we’re offering these three enhancements free, for at least two years. we’ll probably send you links showing you how awesome other enhancements would look in your catalog, but that’s it.
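the “other editions” enhancement described above clusters every format of a title under one work, so a checked-out paper copy still surfaces the ebook or audiobook. here is a toy sketch of that grouping idea in python; the data, the `work_id` cluster key, and the `other_editions` function are all my own inventions for illustration, not librarything’s actual implementation:

```python
from collections import defaultdict

# each holding is (work_id, format, title); work_id is a hypothetical
# cluster key standing in for whatever frbr-style work identifier the
# catalog actually uses to link editions together.
holdings = [
    ("w1", "paper", "The Martian"),
    ("w1", "ebook", "The Martian"),
    ("w1", "audiobook", "The Martian"),
    ("w2", "paper", "Station Eleven"),
]

def other_editions(holdings, work_id, exclude_format=None):
    """Return all holdings for a work, optionally hiding one format."""
    by_work = defaultdict(list)
    for work, fmt, title in holdings:
        by_work[work].append((fmt, title))
    return [h for h in by_work[work_id] if h[0] != exclude_format]

# the paper copy is checked out: show the other formats of the same work
print(other_editions(holdings, "w1", exclude_format="paper"))
# → [('ebook', 'The Martian'), ('audiobook', 'The Martian')]
```

the point of the sketch is that the hard part is not the grouping, which is trivial, but building the work-level identifier that says two records are “the same book.”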
find out more at http://www.librarything.com/forlibraries or email abby blachly at abby@librarything.com.

labels: alaac , lexile measures, librarything for libraries, ltfl posted by abby @ : pm comments » share

tuesday, june rd, ala in san francisco (free passes)

our booth. but this is kate, not tim or abby. she had the baby.

tim and i are headed to san francisco this weekend for the ala annual conference.

visit us. stop by booth # to talk to us, get a demo, and learn about all the new and fun things we’re up to with librarything for libraries! stay tuned this week for more announcements of what we’ll be showing off. no, really. it’s going to be awesome.

get in free. in the sf area and want to go to ala? we have free exhibit-only passes. click here to sign up and get one. it will get you just into the exhibit hall, not the conference sessions themselves.

labels: ala, alaac posted by abby @ : pm comments » share

monday, february th, new “more like this” for librarything for libraries

we’ve just released “more like this,” a major upgrade to librarything for libraries’ “similar items” recommendations. the upgrade is free and automatic for all current subscribers to the librarything for libraries catalog enhancement package. it adds several new categories of recommendations, as well as new features. we’ve got text about it below, but here’s a short ( : ) video:

what’s new. similar items now has a see more link, which opens more like this. browse through different types of recommendations, including: similar items, more by author, similar authors, by readers, same series, by tags, and by genre. you can also choose to show one or several of the new categories directly on the catalog page. click a book in the lightbox to learn more about it—a summary when available, and a link to go directly to that item in the catalog. rate the usefulness of each recommended item right in your catalog—hovering over a cover gives you buttons that let you mark whether it’s a good or bad recommendation.
try it out! click “see more” to open the more like this browser in one of these libraries: spokane county library district, arapahoe public library, waukegan public library, cape may public library, sails library network.

find out more. find more details for current customers on what’s changing and what customizations are available on our help pages. for more information on librarything for libraries or if you’re interested in a free trial, email abby@librarything.com, visit http://www.librarything.com/forlibraries, or register for a webinar.

labels: librarything for libraries, ltfl, recommendations, similar books posted by abby @ : pm comments » share

thursday, february th, subjects and the ship of theseus

i thought i might take a break to post an amusing photo of something i wrote out today: the photo is a first draft of a database schema for a revamp of how librarything will do library subjects. all told, it has tables. gulp.

about eight of the tables do what a good cataloging system would do: distinguish the various subject systems (lcsh, medical subjects, etc.); preserve the semantic richness of subject cataloging, including the stuff that never makes it into library systems; and break subjects into their facets (e.g., “man-woman relationships — fiction” has two subject facets).

most of the tables, however, satisfy librarything’s unusual core commitments: to let users do their own thing, like their own little library, but also to let them benefit from and participate in the data and contributions of others.( ) so it:

links to subjects from various “levels,” including book-level, edition-level, isbn-level and work-level.
allows members to use their own data, or “inherit” subjects from other levels.
allows members to “play librarian,” improving good data and suppressing bad data.( )
allows for real-time, fully reversible aliasing of subjects and subject facets.

the last is perhaps the hardest. nine years ago (!)
i compared librarything to the “ship of theseus,” a ship which is “preserved” although its components are continually changed. the same goes for much of its data, although “shifting sands” might be a better analogy. accounting for this makes for some interesting database structures, and interesting programming. not every system at librarything does this perfectly, but i hope this structure will help us do that better for subjects.( )

weird as all this is, i think it’s the way things are going. at present most libraries maintain their own data, which, while generally copied from another library, is fundamentally siloed. like an evolving species, library records descend from each other; they aren’t dynamically linked. the data inside the records are siloed as well, trapped in a non-relational model. the profession that invented metadata, and indeed invented sharing metadata, is, at least as far as its catalogs go, far behind.

eventually that will end. it may end in a “library goodreads,” every library sharing the same data, with global changes possible, but reserved for special catalogers. but my bet is on a more librarything-like future, where library systems will both respect local cataloging choices and, if they like, benefit instantly from improvements made elsewhere in the system. when that future arrives, we’ve got the schema!

( ) i’m betting another ten tables are added before the system is complete.
( ) the system doesn’t presume whether changes will be made unilaterally, or voted on. voting, like much else, exists in a separate system, even if it ends up looking like part of the subject system.
( ) this is a long-term project. our first steps are much more modest–the tables have an order-of-use, not shown. first off we’re going to duplicate the current system, but with appropriate character sets and segmentation by thesaurus and language.
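the post’s hardest requirement, real-time and fully reversible aliasing of subjects, can be sketched in miniature. this is a guess at the shape of one small piece, with invented table and column names; it is not the actual many-table librarything schema:

```python
import sqlite3

# a minimal sketch of reversible subject aliasing, with invented
# table/column names -- not librarything's real schema.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE subject (id INTEGER PRIMARY KEY, label TEXT);
-- an alias row redirects one subject to another; deleting the row
-- fully reverses the merge, because no source data is rewritten.
CREATE TABLE subject_alias (from_id INTEGER UNIQUE, to_id INTEGER);
""")
db.executemany("INSERT INTO subject VALUES (?, ?)",
               [(1, "man-woman relationships"), (2, "male-female relations")])

def resolve(subject_id):
    """Follow alias redirects until reaching a canonical subject id."""
    while True:
        row = db.execute("SELECT to_id FROM subject_alias WHERE from_id=?",
                         (subject_id,)).fetchone()
        if row is None:
            return subject_id
        subject_id = row[0]

db.execute("INSERT INTO subject_alias VALUES (2, 1)")  # merge 2 into 1
print(resolve(2))  # → 1
db.execute("DELETE FROM subject_alias WHERE from_id=2")  # reverse the merge
print(resolve(2))  # → 2
```

the design choice matches the post’s point: because the alias is a redirect rather than a rewrite of members’ data, the merge is cheap, takes effect in real time, and can be undone without loss.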
labels: cataloging, subjects posted by tim @ : pm comments » share

tuesday, january th, librarything recommends in bibliocommons

does your library use bibliocommons as its catalog? librarything and bibliocommons now work together to give you high-quality reading recommendations in your bibliocommons catalog. you can see some examples here—look for “librarything recommends” on the right side: not that kind of girl (daniel boone regional library), carthage must be destroyed (ottawa public library), the martian (edmonton public library), little bear (west vancouver memorial library), station eleven (chapel hill public library), the brothers karamazov (calgary public library).

quick facts:

as with all librarything for libraries products, librarything recommends only recommends other books within a library’s catalog.
librarything recommends stretches across media, providing recommendations not just for print titles, but also for ebooks, audiobooks, and other media.
librarything recommends shows up to two titles up front, with up to three displayed under “show more.”
recommendations come from librarything’s recommendations system, which draws on hundreds of millions of data points in readership patterns, tags, series, popularity, and other data.

not using bibliocommons? well, you can get librarything recommendations—and much more—integrated in almost every catalog (opac and ils) on earth, with all the same basic functionality, like recommending only books in your catalog, as well as other librarything for libraries features, like reviews, series and tags. check out some examples on different systems here: sirsidynix enterprise (saint louis public library), sirsidynix horizon information portal (hume libraries), sirsidynix elibrary (spokane county public library), iii encore (arapahoe public library), iii webpac pro (waukegan public library), polaris (cape may county library), ex libris voyager (university of wisconsin-eau claire). interested?
bibliocommons: email info@bibliocommons.com or visit http://www.bibliocommons.com/augmentedcontent. see the full specifics here. other systems: email abby@librarything.com or visit http://www.librarything.com/forlibraries.

labels: uncategorized posted by tim @ : pm comments » share

thursday, october th, new: annotations for book display widgets

our book display widgets product is getting adopted by more and more libraries, and we’re busy making it better and better. last week we introduced easy share. this week we’re rolling out another improvement—annotations! book display widgets is the ultimate tool for libraries to create automatic or hand-picked virtual book displays for their home page, blog, facebook or elsewhere. annotations allows libraries to add explanations for their picks.

some ways to use annotations:

1. explain staff picks right on your homepage.
2. let students know if a book is reserved for a particular class.
3. add context for special collections displays.

how it works. check out the librarything for libraries wiki for instructions on how to add annotations to your book display widgets. it’s pretty easy.

interested? watch a quick screencast explaining book display widgets and how you can use them. find out more about librarything for libraries and book display widgets, and sign up for a free trial of either by contacting ltflsupport@librarything.com.

labels: book display widgets, librarything for libraries, new feature, new features, widgets posted by kj @ : am comments » share

tuesday, october th, send us a programmer, win $ , in books

we just posted a new job: library developer at librarything (telecommute). to sweeten the deal, we are offering $ , worth of books to the person who finds them. that’s a lot of books.

rules! you get a $ , gift certificate to the local, chain or online bookseller of your choice. to qualify, you need to connect us to someone.
either you introduce them to us—and they follow up by applying themselves—or they mention your name in their email (“so-and-so told me about this”). you can recommend yourself, but if you found out about it from someone else, we hope you’ll do the right thing and make them the beneficiary.

small print: our decision is final, incontestable, irreversible and completely dictatorial. it only applies when an employee is hired full-time, not part-time, contract or for a trial period. if we don’t hire someone for the job, we don’t pay. the contact must happen in the next month. if we’ve already been in touch with the candidate, it doesn’t count. void where prohibited. you pay taxes, and the insidious hidden tax of shelving. employees and their families are eligible to win, provided they aren’t work contacts. tim is not.

» job: library developer at librarything (telecommute)

labels: jobs posted by tim @ : am comment » share

thingology is librarything’s ideas blog, on the philosophy and methods of tags, libraries and suchnot.
the clash of the wolves (from wikipedia, the free encyclopedia)

directed by: noel m. smith
written by: charles logue
story by: charles logue
starring: rin tin tin, charles farrell, june marlowe
cinematography: edwin b. dupar, allen thompson, joseph walker
edited by: clarence kolster
production company: warner bros.
distributed by: warner bros.
release date: november ,
running time: minutes
country: united states
language: silent (english intertitles)
budget: $ , [ ]
box office: $ , [ ]

the clash of the wolves is an american silent western/adventure film produced and distributed by warner bros. directed by noel m. smith, the film stars canine actor rin tin tin, charles farrell and june marlowe. it was filmed on location in chatsworth, california, and at what would later become joshua tree national park.[ ] it was transferred onto mm film by associated artists productions[ ] in the s and shown on television. a mm print of the film was discovered in south africa and restored in . in , the clash of the wolves was deemed “culturally, historically, or aesthetically significant” by the united states library of congress and selected for preservation in the national film registry.[ ][ ]
plot

lobo, wolfdog leader of a wolf pack, has a price on his head. one day, suffering from a thorn in his paw, he is found and befriended by dave weston, a borax prospector. the animal returns love and loyalty. later lobo saves dave from attacks by the scheming villain william ‘borax’ horton, who has designs on dave’s claim. once again the villain attacks the young prospector and leaves him for dead on the site of the claim. lobo arrives, and dave sends him with a message to town for help. in the meantime a posse is hunting lobo, but he manages to escape them and, at the same time, decoy them to dave. there, they learn that lobo is man’s best friend.

cast

rin tin tin as lobo
nanette as lobo’s mate
charles farrell as dave weston
june marlowe as may barstowe
heinie conklin as alkali bill
will walling as sam barstowe
pat hartigan as william ‘borax’ horton

reviews and reception

michael l. simmons wrote in the exhibitors trade review that “he (rin-tin-tin) brings to the role of leader of a wolf-pack, an intelligence, a beauty of motion, an impressive cleverness that should find wide favor. he is a spectacle, in my opinion, well worth the price of admission.” simmons went on to say that “it is obvious throughout; every time the human cast stacks up alongside the exploits of the animal players, the latter stands out far ahead in the ability to compel interest.”[ ] motion picture news reviewer george t.
pardy praised the performance of rin-tin-tin, saying: “his work all through is extraordinary and far above that of his average doggish contemporaries in filmland...the thrills are many and pungent, mostly arising from the endeavors to trap or shoot lobo of folks who know that there is a price set on the head of the kingly wolf.”[ ] a review in the film daily was critical of the film, stating, “no doubt the author is chiefly to blame for furnishing a script that is a mixture of dizzy melodrama, burlesque, caricature - anything in fact far removed from reality. director noel smith struggled bravely with it. he deserves credit for getting over the dog sequences with a snap and a punch. the rest of the weak story seemed to have him licked.”[ ]

box office

according to warner bros records, the film earned $ , domestically and $ , foreign.[ ]

preservation status

a mm projection print of the clash of the wolves was found in south africa and returned to the united states. it underwent restoration and preservation in .[ ][ ] abridged and full versions survive in the library of congress packard campus for audio-visual conservation.[ ]

accolades

in , the clash of the wolves was deemed “culturally, historically, or aesthetically significant” by the united states library of congress and selected for preservation in the national film registry.[ ][ ]

references

warner bros financial information in the william shaefer ledger. see appendix , historical journal of film, radio and television.
“clash of the wolves”. silentera.com. retrieved april , .
movies from aap warner bros features & cartoons sales book directed at tv.
“librarian of congress adds films to national film registry”. library of congress, washington, d.c. retrieved april , .
“complete national film registry listing”. national film preservation board,
library of congress, washington, d.c. usa. retrieved june , .
michael l. simmons (november ). “the clash of the wolves”. exhibitors trade review. ( ): .
george t. pardy (november ). “the clash of the wolves”. motion picture news. ( ): .
“the clash of the wolves”. the film daily. ( ): . november .
“clash of the wolves (motion picture)”. library of congress. retrieved november , .
“clash of the wolves”. ucla film and television archive. retrieved november , .
catalog of holdings: the american film institute collection and the united artists collection at the library of congress, p. , the american film institute.
“news from the library of congress”. loc.gov. retrieved july , .

external links

the clash of the wolves essay by susan orlean at national film registry
the clash of the wolves essay by daniel eagan in america’s film legacy: the authoritative guide to the landmark movies in the national film registry, a&c black, isbn  , pages -
the clash of the wolves at imdb
the story of rin-tin-tin is available for free download at the internet archive
the clash of the wolves at the american film institute catalog
the clash of the wolves at the tcm movie database
lobby cards and still at silenthollywood.com
films s rediscovered films rin tin tin rediscovered american films surviving american silent films hidden categories: use mdy dates from september articles with short description short description is different from wikidata template film date with release date commons category link is on wikidata articles with internet archive links navigation menu personal tools not logged in talk contributions create account log in namespaces article talk variants views read edit view history more search navigation main page contents current events random article about wikipedia contact us donate contribute help learn to edit community portal recent changes upload file tools what links here related changes upload file special pages permanent link page information cite this page wikidata item print/export download as pdf printable version in other projects wikimedia commons languages bahasa indonesia italiano bahasa melayu nederlands polski edit links this page was last edited on april , at :  (utc). text is available under the creative commons attribution-sharealike license; additional terms may apply. by using this site, you agree to the terms of use and privacy policy. wikipedia® is a registered trademark of the wikimedia foundation, inc., a non-profit organization. privacy policy about wikipedia disclaimers contact wikipedia mobile view developers statistics cookie statement ft alphaville’s electric vehicle bubble watch v - google sheets javascript isn't enabled in your browser, so this file can't be opened. enable and reload. ft alphaville’s electric vehicle bubble watch v        share sign in the version of the browser you are using is no longer supported. 
[spreadsheet: ft alphaville's "electric vehicle bubble watch: once you pop, you can't stop", a view-only google sheet. latest update added evgo and earnings from ride, arvl, ptra, blnk, solo, nio, beem, xl, hyzn, and gpv. rows cover ev manufacturers (tesla, nio, nikola, xpeng, arrival, arcimoto, li auto, canoo, fisker, lordstown, workhorse group, electrameccanica, lion electric, electric last mile, lightning emotors, kandi technologies, greenpower motor, proterra, byd, niu, faraday future, hyzon, lucid motors), charging companies (blink charging, evbox group, nuvve, chargepoint, beam global, fastned, compleo charging, volta, alfen, evgo), battery/cell companies (plug power, quantumscape, romeo power, cbak energy technology, fuelcell energy, ballard power systems, flux power, freyr, microvast), others (hyliion holdings, xl fleet), and traditional oems (general motors, ford, stellantis, volkswagen, bmw, toyota, daimler, honda, nissan, renault). columns include ticker, spac status, price, distance from high, trailing returns, shares outstanding, market cap, net debt, enterprise value, ttm revenues and ebitda, ev/sales, revenue growth, gross margin, capex, r&d, and peak market cap; the numeric values did not survive extraction. sources: google finance, capiq, refinitiv, company documents. notes: for spacs that have yet to merge, the figures presented are calculated from the pro forma figures in each spac's investor presentation deck and will change once the mergers are completed; for the oems, net debt is calculated as net debt + financing debt - financing receivables; for japanese oems, google keeps switching the market cap between jpy and usd values.]
no-code development platform - wikipedia

no-code development platforms (ncdps) allow programmers and non-programmers to create application software through graphical user interfaces and configuration instead of traditional computer programming. no-code development platforms are closely related to low-code development platforms, as both are designed to expedite the application development process. both have increased in popularity as companies deal with the parallel trends of an increasingly mobile workforce and a limited supply of competent software developers.[ ]

a typical low-code development environment has these characteristics:

- drag-and-drop interfaces that allow for easy development.
- a visual modelling tool for creating the uis, data models, and functionality, with the option to add hand-written code when needed.
- connectors that handle data structures, retrieval, and storage.
- out-of-the-box functionality that lets you skip building core modules from scratch and instead focus on building new code.
- an automated application lifecycle manager for building, deploying, debugging, and the staging and production process.

while low-code tools generally follow these guidelines, no two low-code tools are alike, and each is designed to cater to specific functionality. platforms vary widely in their functionality, integrations, and market niche.
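the "visual modelling tool" idea above — an app defined as a declarative data model rather than procedural code — can be sketched in a few lines. this is a minimal illustration of the general technique, not any real platform's api; the names (`app_model`, `render_form`) are hypothetical.

```python
# a sketch of the model-driven approach behind no-code platforms:
# the creator supplies only a declarative model, and the platform
# derives a working form description from it.

app_model = {
    "entity": "expense_report",
    "fields": [
        {"name": "employee", "type": "text", "required": True},
        {"name": "amount", "type": "number", "required": True},
        {"name": "receipt", "type": "file", "required": False},
    ],
}

def render_form(model):
    """derive a form (ui description) from the declarative model."""
    widgets = {"text": "TextInput", "number": "NumberInput", "file": "FileUpload"}
    form = []
    for field in model["fields"]:
        form.append({
            "label": field["name"].replace("_", " ").title(),
            "widget": widgets[field["type"]],
            "required": field["required"],
        })
    return form

for widget in render_form(app_model):
    print(widget)
```

the point of the sketch is that changing the model (adding a field, say) changes the generated ui with no new code written — which is why such platforms can be handed to non-programmers.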
some applications may focus solely on a specific business function, such as data capture or workflow, while others may seek to integrate entire enterprise resource planning tools into a mobile form factor.[ ] no-code development platforms are closely related to visual programming languages.[ ]

use

ncdps are used to meet the needs of companies seeking to digitize processes through cloud-based mobile applications. no-code tools are often designed with line-of-business users in mind, as opposed to traditional it. this shift in focus is meant to accelerate the development cycle by bypassing the traditional it development constraints of time, money, and scarce software-development human capital, allowing teams to align their business strategy with a rapid development process.[ ] ncdps also often leverage enterprise-scale apis and web-service catalogs, open data sets, and tested and proven template galleries to help integrate existing business systems while adding a practical layer of user functionality.[ ] the transition from traditional enterprise software to a lean development methodology is also changing the role of traditional it leaders and departments. whereas it once provided not only approval of new technology but also procurement and development of new tools, its role is now increasingly one of governance over line-of-business staff who develop niche tools for their work stream.[ ] the potential benefits of using an ncdp include:

access - by , it has been estimated that more than half of all b2e (business-to-employee) mobile apps would be created by enterprise business analysts using codeless tools. this ongoing shift is expanding the pool of potential app creators from individuals with coding skills to anyone with internet access and functional business acumen.
[ ]
agility - ncdps typically provide some degree of templated user-interface and user-experience functionality for common needs such as forms, workflows, and data display, allowing creators to expedite parts of the app-creation process.[ ]
richness - ncdps, which at one point were limited to more basic application functions, increasingly provide a level of feature richness and integrations that allows users to design, develop, and deploy apps that meet specific business needs.[ ]

the common worker is becoming busier and working longer hours on average,[ ] and with the proliferation of low-code software tools and broader access to business apis, there is a clear opportunity for workers to automate their current tasks using these new no-code development platforms. there are thousands of platforms out there, with enterprises, on average, having deployed over different saas apps in their business.

no-code vs. low-code

the distinction between no-code and low-code development platforms can seem blurry depending on the nature of a platform's full set of functionality. however, a number of key distinctions set apart the design and use cases of each type of platform:

app creator - no-code platforms are accessible to any end business user, while low-code platforms require developers with knowledge of coding languages who can work within a platform's constraints to streamline the development process.
core design - no-code platforms tend to work from a model-driven, declarative approach in which the end user dictates an app's design through drag-and-drop manipulation or simple logic. low-code platforms often employ a similar development model, with a greater dependence on hand coding for an application's core architecture.
user interface - no-code platforms most often rely on a preset user-interface layer that simplifies and streamlines an app's design.
low-code platforms may provide greater flexibility in ui options at the cost of additional coding requirements.[ ]

security concerns

as no-code development platforms continue to gain popularity, concerns over platform security have risen as well, particularly for apps that handle consumer data. a common assumption is that ncdps are more vulnerable to security threats because these apps are often built by nontechnical business users. in reality, custom code is often a greater security risk than platform code, which has been validated by consistent use across multiple applications.[ ] no-code solutions let platforms hide what happens behind the scenes from users, so that end users can change or modify a field without manipulating the functionality of the app and compromising security.[ ] features to check on a low-code platform's website include:

platform security audits and compliance.
single sign-on and authentication.
platform access control.
application access control and audits.
secure code using plugins.
secure api endpoints.

low-code platforms keep security at the forefront of their design process. they are able to consider all avenues of protection, including application protection, data protection, policies and procedures, and infrastructure protection. this covers web and mobile apps as well as crm and erp systems. additionally, built-in security controls can automatically mitigate risks that derive from common vulnerabilities such as sql injection or cross-site scripting (xss). vulnerabilities can open up with the addition of custom code, so custom code must always be considered carefully before it is added.

criticisms

skills gap - some it professionals have questioned whether empowering ordinary business users who cannot debug code is a sustainable endeavor.
trend vs. fad - ncdps have also been compared to earlier coding waves, such as fourth-generation programming languages and rapid application development, which promised to revolutionize software development.[ ]

notable no-code development platforms

airtable, appery.io, appsheet, google brandcast, bubble, clappia, creatio, dadabik, dronahq studio, filemaker, hypercard, kintone, monday.com, podio, pwct, quickbase, inc., aquafadas, salesforce.com lightning platform, shopify, silex website builder, unqork, webflow, wix.com, wordpress, zapier

see also

flow-based programming; list of online database creator apps; low-code development platforms; rapid application development; lean software development; platform as a service

references

^ rouse, margaret. "low-code/no-code development platform (lcnc platform)". www.techtarget.com. retrieved august .
^ a b ciot, thierry. "what is a low-code/no-code platform?". www.cioreview.com. retrieved august .
^ https://spectrum.ieee.org/tech-talk/computing/software/programming-without-code-no-code-software-development
^ satell, greg. "the future of software is no-code". www.inc.com. retrieved august .
^ tolido, ron. "app maker movement". capgemini. retrieved december .
^ weiss, todd. "no-code, low-code development platforms help organizations meet growing app demand". www.itprotoday.com. retrieved august .
^ rivera, janessa. "gartner says by , more than percent of users will use a tablet or smartphone first for all online activities". gartner. retrieved january .
^ harris, richard. "low code and no code app development benefits". app developer magazine. retrieved january .
^ shrivastava, anubhuti. "how zero-code platforms are becoming a boon for enterprises". trend in tech. retrieved january .
^ shore, joel. "how no-code development tools can benefit it". www.techtarget.com. retrieved august .
^ rubinstein, david. "industry spotlight: no-code solutions help developers help themselves". sd times. retrieved december .
^ reselman, bob.
"why the promise of low-code software platforms is deceiving". www.techtarget.com. forrester research. archived from the original on may . retrieved august .

external links

pattani, aneri ( november ). "a coding revolution in the office cube sends message of change to it". cnbc. retrieved november .
dan cohen – vice provost, dean, and professor at northeastern university

data unbound
helping organizations access and share data effectively. special focus on web apis for data integration.

some of what i missed from the cmd-d automation conference
the cmd-d | masters of automation one-day conference in early august would have been right up my alley: "it'll be a full day of exploring the current state of automation technology on both apple platforms, sharing ideas and concepts, and showing what's possible—all with the goal of inspiring and furthering development of your own automation projects." fortunately, […]

fine-tuning a python wrapper for the hypothes.is web api and other #ianno followup
in anticipation of #ianno hack day, i wrote about my plans for the event, one of which was to revisit my own python wrapper for the nascent hypothes.is web api. instead of spending much time on my own wrapper, i spent most of the day working with jon udell's wrapper for the api. i've been […]

revisiting hypothes.is at i annotate
i'm looking forward to hacking on web and epub annotation at the #ianno hack day. i won't be at the i annotate conference per se but will be curious to see what comes out of the annual conference. i continue to have high hopes for digital annotations, both on the web and in non-web […]

my thoughts about fargo.io
using fargo.io

organizing your life with python: a submission for pycon ?
i have penciled into my calendar a trip to montreal to attend pycon . in my moments of suboptimal planning, i wrote an overly ambitious abstract for a talk or poster session i was planning to submit. as i sat down this morning to meet the deadline for submitting a proposal for a poster […]

current status of data unbound llc in pennsylvania
i'm currently in the process of closing down data unbound llc in pennsylvania. i submitted the paperwork to dissolve the legal entity in april and have been amazed to learn that it may take up to a year to get the final approval done. in the meantime, as i am establishing a similar california legal […]

must get cracking on organizing your life with python
talk and tutorial proposals for pycon are due tomorrow ( / ). i was considering submitting a proposal until i took to heart the program committee's admonition against "conference-driven" development. i will nonetheless use the oct and nov deadlines for lightning talks and proposals respectively to judge whether to […]

embedding github gists in wordpress
as i gear up to write more about programming, i have installed the embed github gist plugin. so by writing [gist id= ] in the text of this post, i can embed https://gist.github.com/rdhyee/ into the post to get:

working with open data
i'm very excited to be teaching a new course, working with open data, at the uc berkeley school of information in the spring semester: "open data — data that is free for use, reuse, and redistribution — is an intellectual treasure-trove that has given rise to many unexpected and often fruitful applications." in this […]

a mundane task: updating a config file to retain old settings
i want to have a hand in creating an excellent personal information manager (pim) that can be a worthy successor to ecco pro. so far, running eccoext (a clever and expansive hack of ecco pro) has been an eminently practical solution.
you can download the most recent version of this actively developed extension from […]

a modest proposal - wikipedia

a modest proposal. author: jonathan swift. genre: satirical essay.

a modest proposal for preventing the children of poor people from being a burthen to their parents or country, and for making them beneficial to the publick,[ ] commonly referred to as a modest proposal, is a juvenalian satirical essay written and published anonymously by jonathan swift in . the essay suggests that the impoverished irish might ease their economic troubles by selling their children as food to rich gentlemen and ladies. this satirical hyperbole mocked heartless attitudes towards the poor, as well as british policy toward the irish in general. in english writing, the phrase "a modest proposal" is now conventionally an allusion to this style of straight-faced satire.

synopsis

swift's essay is widely held to be one of the greatest examples of sustained irony in the history of the english language.
much of its shock value derives from the fact that the first portion of the essay describes the plight of starving beggars in ireland, so that the reader is unprepared for the surprise of swift's solution when he states: "a young healthy child well nursed, is, at a year old, a most delicious nourishing and wholesome food, whether stewed, roasted, baked, or boiled; and i make no doubt that it will equally serve in a fricassee, or a ragout."[ ] swift goes to great lengths to support his argument, including a list of possible preparation styles for the children, and calculations showing the financial benefits of his suggestion. he uses methods of argument throughout his essay which lampoon the then-influential william petty and the social engineering popular among followers of francis bacon. these lampoons include appealing to the authority of "a very knowing american of my acquaintance in london" and "the famous psalmanazar, a native of the island formosa" (who had already confessed to not being from formosa in ). in the tradition of roman satire, swift introduces the reforms he is actually suggesting by paralipsis: therefore let no man talk to me of other expedients: of taxing our absentees at five shillings a pound: of using neither clothes, nor household furniture, except what is of our own growth and manufacture: of utterly rejecting the materials and instruments that promote foreign luxury: of curing the expensiveness of pride, vanity, idleness, and gaming in our women: of introducing a vein of parsimony, prudence and temperance: of learning to love our country, wherein we differ even from laplanders, and the inhabitants of topinamboo: of quitting our animosities and factions, nor acting any longer like the jews, who were murdering one another at the very moment their city was taken: of being a little cautious not to sell our country and consciences for nothing: of teaching landlords to have at least one degree of mercy towards their tenants. 
lastly, of putting a spirit of honesty, industry, and skill into our shop-keepers, who, if a resolution could now be taken to buy only our native goods, would immediately unite to cheat and exact upon us in the price, the measure, and the goodness, nor could ever yet be brought to make one fair proposal of just dealing, though often and earnestly invited to it. therefore i repeat, let no man talk to me of these and the like expedients, 'till he hath at least some glympse of hope, that there will ever be some hearty and sincere attempt to put them into practice.

population solutions

george wittkowsky argued that swift's main target in a modest proposal was not the conditions in ireland, but rather the can-do spirit of the times that led people to devise a number of illogical schemes that would purportedly solve social and economic ills.[ ] swift was especially attacking projects that tried to fix population and labour issues with a simple cure-all solution.[ ] a memorable example of these sorts of schemes "involved the idea of running the poor through a joint-stock company".[ ] in response, swift's modest proposal was "a burlesque of projects concerning the poor"[ ] that were in vogue during the early th century. a modest proposal also targets the calculating way people perceived the poor in designing their projects. the pamphlet targets reformers who "regard people as commodities".[ ] in the piece, swift adopts the "technique of a political arithmetician"[ ] to show the utter ridiculousness of trying to prove any proposal with dispassionate statistics. critics differ about swift's intentions in using this faux-mathematical philosophy.
edmund wilson argues that statistically "the logic of the 'modest proposal' can be compared with defence of crime (arrogated to marx) in which he argues that crime takes care of the superfluous population".[ ] wittkowsky counters that swift's satiric use of statistical analysis is an effort to enhance his satire that "springs from a spirit of bitter mockery, not from the delight in calculations for their own sake".[ ]

rhetoric

author charles k. smith argues that swift's rhetorical style persuades the reader to detest the speaker and pity the irish. swift's specific strategy is twofold, using a "trap"[ ] to create sympathy for the irish and a dislike of the narrator who, in the span of one sentence, "details vividly and with rhetorical emphasis the grinding poverty" but feels emotion solely for members of his own class.[ ] swift's use of gripping details of poverty and his narrator's cool approach towards them create "two opposing points of view" that "alienate the reader, perhaps unconsciously, from a narrator who can view with 'melancholy' detachment a subject that swift has directed us, rhetorically, to see in a much less detached way."[ ] swift has his proposer further degrade the irish by using language ordinarily reserved for animals. lewis argues that the speaker uses "the vocabulary of animal husbandry"[ ] to describe the irish. once the children have been commodified, swift's rhetoric can easily turn "people into animals, then meat, and from meat, logically, into tonnage worth a price per pound".[ ] swift uses the proposer's serious tone to highlight the absurdity of his proposal.
in making his argument, the speaker uses the conventional, textbook-approved order of argument from swift's time (which was derived from the latin rhetorician quintilian).[ ] the contrast between the "careful control against the almost inconceivable perversion of his scheme" and "the ridiculousness of the proposal" creates a situation in which the reader has "to consider just what perverted values and assumptions would allow such a diligent, thoughtful, and conventional man to propose so perverse a plan".[ ]

influences

scholars have speculated about which earlier works swift may have had in mind when he wrote a modest proposal.

tertullian's apology

james william johnson argues that a modest proposal was largely influenced and inspired by tertullian's apology: a satirical attack against early roman persecution of christianity. johnson believes that swift saw major similarities between the two situations.[ ] johnson notes swift's obvious affinity for tertullian and the bold stylistic and structural similarities between the works a modest proposal and apology.[ ] in structure, johnson points out the same central theme, that of cannibalism and the eating of babies, as well as the same final argument, that "human depravity is such that men will attempt to justify their own cruelty by accusing their victims of being lower than human".[ ] stylistically, swift and tertullian share the same command of sarcasm and language.[ ] in agreement with johnson, donald c. baker points out the similarity between both authors' tones and use of irony.
baker notes the uncanny way that both authors imply an ironic "justification by ownership" over the subject of sacrificing children—tertullian while attacking pagan parents, and swift while attacking the english mistreatment of the irish poor.[ ]

defoe's the generous projector

it has also been argued that a modest proposal was, at least in part, a response to the essay the generous projector or, a friendly proposal to prevent murder and other enormous abuses, by erecting an hospital for foundlings and bastard children by swift's rival daniel defoe.[ ]

mandeville's modest defence of publick stews

bernard mandeville's modest defence of publick stews proposed the introduction of public, state-controlled bordellos. the paper acknowledges women's interests and – while not being a completely satirical text – has also been discussed as an inspiration for jonathan swift's title.[ ][ ] mandeville had by already become famous for the fable of the bees and deliberations on private vices and public benefits.

john locke's first treatise of government

john locke commented: "be it then as sir robert says, that anciently, it was usual for men to sell and castrate their children. let it be, that they exposed them; add to it, if you please, for this is still greater power, that they begat them for their tables to fat and eat them: if this proves a right to do so, we may, by the same argument, justifie adultery, incest and sodomy, for there are examples of these too, both ancient and modern; sins, which i suppose, have the principle aggravation from this, that they cross the main intention of nature, which willeth the increase of mankind, and the continuation of the species in the highest perfection, and the distinction of families, with the security of the marriage bed, as necessary thereunto". (first treatise, sec. ).

economic themes

robert phiddian's article "have you eaten yet?
the reader in a modest proposal" focuses on two aspects of a modest proposal: the voice of swift and the voice of the proposer. phiddian stresses that a reader of the pamphlet must learn to distinguish between the satirical voice of jonathan swift and the apparent economic projections of the proposer. he reminds readers that "there is a gap between the narrator's meaning and the text's, and that a moral-political argument is being carried out by means of parody".[ ] while swift's proposal is obviously not a serious economic proposal, george wittkowsky, author of "swift's modest proposal: the biography of an early georgian pamphlet", argues that to understand the piece fully it is important to understand the economics of swift's time. wittkowsky argues that not enough critics have taken the time to focus directly on the mercantilism and theories of labour in th century england. "if one regards the modest proposal simply as a criticism of condition, about all one can say is that conditions were bad and that swift's irony brilliantly underscored this fact".[ ]

"people are the riches of a nation"

at the start of a new industrial age in the th century, it was believed that "people are the riches of the nation", and there was a general faith in an economy that paid its workers low wages because high wages meant workers would work less.[ ] furthermore, "in the mercantilist view no child was too young to go into industry". in those times, the "somewhat more humane attitudes of an earlier day had all but disappeared and the laborer had come to be regarded as a commodity".[ ] louis a.
landa composed a conducive analysis when he noted that it would have been healthier for the irish economy to more appropriately utilize their human assets by giving the people an opportunity to "become a source of wealth to the nation" or else they "must turn to begging and thievery".[ ] this opportunity may have included giving the farmers more coin to work for, diversifying their professions, or even considering enslaving their people to lower coin usage and build up financial stock in ireland. landa wrote that, "swift is maintaining that the maxim—people are the riches of a nation—applies to ireland only if ireland is permitted slavery or cannibalism".[ ] landa presents swift's a modest proposal as a critique of the popular and unjustified maxim of mercantilism in the th century that "people are the riches of a nation".[ ] swift presents the dire state of ireland and shows that mere population itself, in ireland's case, did not always mean greater wealth and economy.[ ] the uncontrolled maxim fails to take into account that a person who does not produce in an economic or political way makes a country poorer, not richer.[ ] swift also recognises the implications of this fact in making mercantilist philosophy a paradox: the wealth of a country is based on the poverty of the majority of its citizens.[ ] swift, however, landa argues, is not merely criticising economic maxims but also addressing the fact that england was denying irish citizens their natural rights and dehumanising them by viewing them as a mere commodity.[ ]

the public's reaction

swift's essay created a backlash within the community after its publication. the work was aimed at the aristocracy, and they responded in turn. several members of society wrote to swift regarding the work.
lord bathurst's letter intimated that he certainly understood the message, and interpreted it as a work of comedy: february – : "i did immediately propose it to lady bathurst, as your advice, particularly for her last boy, which was born the plumpest, finest thing, that could be seen; but she fell in a passion, and bid me send you word, that she would not follow your direction, but that she would breed him up to be a parson, and he should live upon the fat of the land; or a lawyer, and then, instead of being eat himself, he should devour others. you know women in passion never mind what they say; but, as she is a very reasonable woman, i have almost brought her over now to your opinion; and having convinced her, that as matters stood, we could not possibly maintain all the nine, she does begin to think it reasonable the youngest should raise fortunes for the eldest: and upon that foot a man may perform family duty with more courage and zeal; for, if he should happen to get twins, the selling of one might provide for the other. or if, by any accident, while his wife lies in with one child, he should get a second upon the body of another woman, he might dispose of the fattest of the two, and that would help to breed up the other. the more i think upon this scheme, the more reasonable it appears to me; and it ought by no means to be confined to ireland; for, in all probability, we shall, in a very little time, be altogether as poor here as you are there. i believe, indeed, we shall carry it farther, and not confine our luxury only to the eating of children; for i happened to peep the other day into a large assembly [parliament] not far from westminster-hall, and i found them roasting a great fat fellow, [walpole again] for my own part, i had not the least inclination to a slice of him; but, if i guessed right, four or five of the company had a devilish mind to be at him. 
Well, adieu, you begin now to wish I had ended, when I might have done it so conveniently".[ ]

Modern usage

A Modest Proposal is included in many literature courses as an example of early modern Western satire. It also serves as an introduction to the concept and use of argumentative language, lending itself to secondary and post-secondary essay courses. Outside the realm of English studies, A Modest Proposal is included in many comparative and global literature and history courses, as well as those of numerous other disciplines in the arts, humanities, and even the social sciences. The essay's approach has been copied many times. In his book A Modest Proposal ( ), the evangelical author Francis Schaeffer emulated Swift's work in a social-conservative polemic against abortion and euthanasia, imagining a future dystopia that advocates recycling of aborted embryos, fetuses, and some disabled infants with compound intellectual, physical and physiological difficulties. (Such Baby Doe Rules cases were then a major concern of the US anti-abortion movement of the early s, which viewed selective treatment of those infants as disability discrimination.) In his book A Modest Proposal for America ( ), statistician Howard Friedman opens with a satirical reflection of the extreme drive to fiscal stability by ultra-conservatives. In the edition of The Handmaid's Tale by Margaret Atwood there is a quote from A Modest Proposal before the introduction.[ ] A Modest Video Game Proposal is the title of an open letter sent by activist and former attorney Jack Thompson on October . He proposed that someone should "create, manufacture, distribute, and sell a video game" that would allow players to act out a scenario in which the game character kills video game developers.[ ][ ] Hunter S. Thompson's Fear and Loathing in America: The Brutal Odyssey of an Outlaw Journalist includes a letter in which he uses Swift's approach in connection with the Vietnam War.
Thompson writes a letter to a local Aspen newspaper informing them that, on Christmas Eve, he is going to use napalm to burn a number of dogs and hopefully any humans they find. The letter protests against the burning of Vietnamese people occurring overseas. The horror film Butcher Boys, written by the original The Texas Chain Saw Massacre scribe Kim Henkel, is said to be an updating of Jonathan Swift's A Modest Proposal. Henkel imagined the descendants of folks who actually took Swift up on his proposal.[ ] The film opens with a quote from J. Swift.[ ] On November , Jonathan Swift's th birthday, The Washington Post published a column entitled "Why Alabamians should consider eating Democrats' babies", by Alexandra Petri.[ ] In July , E. Jean Carroll published a book titled What Do We Need Men For?: A Modest Proposal, discussing problematic behaviour of male humans.[ ][ ] On October , a satirist spoke up at an event for Alexandria Ocasio-Cortez, claiming that a solution to the climate crisis was "we need to eat the babies".[ ] The individual also wore a T-shirt saying "Save the planet, eat the children". This stunt was understood by many[ ] as a modern application of A Modest Proposal.

Notes

^ a b A Modest Proposal, by Dr. Jonathan Swift. Project Gutenberg. July . Retrieved January .
^ Wittkowsky, Swift's Modest Proposal, p.
^ a b Wittkowsky, Swift's Modest Proposal, p.
^ Wittkowsky, Swift's Modest Proposal, p.
^ Wittkowsky, Swift's Modest Proposal, p.
^ a b Wittkowsky, Swift's Modest Proposal, p.
^ Wittkowsky, Swift's Modest Proposal, p.
^ Smith, Toward a Participatory Rhetoric, p.
^ a b Smith, Toward a Participatory Rhetoric, p.
^ a b Smith, Toward a Participatory Rhetoric, p.
^ a b Smith, Toward a Participatory Rhetoric, p.
^ a b c Johnson, Tertullian and A Modest Proposal, p.
^ Johnson, Tertullian and A Modest Proposal, p.
^ Baker, Tertullian and Swift's A Modest Proposal, p.
^ Waters, Juliet ( February ). "A modest but failed proposal". Montreal Mirror.
Archived from the original on July . Retrieved January .
^ Eine Streitschrift…, essay by Ursula Pia Jauch. Carl Hanser Verlag, Munich .
^ Primer, I. ( March ). Bernard Mandeville's "A Modest Defence of Publick Stews": Prostitution and Its Discontents in Early Georgian England. Springer. ISBN .
^ a b Phiddian, Have You Eaten Yet?, p.
^ Phiddian, Have You Eaten Yet?, p.
^ Phiddian, Have You Eaten Yet?, p.
^ a b Landa, A Modest Proposal and Populousness, p.
^ a b c d e Landa, A Modest Proposal and Populousness, p.
^ Swift, Jonathan; Scott, Sir Walter ( ). The Works of Jonathan Swift: Containing Additional Letters, Tracts, and Poems Not Hitherto Published; With Notes and a Life of the Author. A. Constable.
^ Atwood, Margaret. "The Handmaid's Tale". www.goodreads.com.
^ Saunderson, Matt ( October ). "Attorney proposes violent game". GameCube Advanced. Advanced Media Network. Archived from the original on October .
^ Gibson, Ellie ( October ). "Thompson refuses to make $ k donation to charity". Eurogamer. Retrieved March .
^ O'Connell, Joe. "A 'Texas Chain Saw' pedigree". www.austinchronicle.com.
^ Barton, Steve. "Exclusive: Kim Henkel talks Butcher Boys". www.dreadcentral.com.
^ Petri, Alexandra ( November ). "Why Alabamians should consider eating Democrats' babies". The Washington Post. Retrieved November .
^ Carroll, E. Jean ( June ). "Donald Trump assaulted me, but he's not alone on my list of hideous men". The Cut. Retrieved October .
^ "What Do We Need Men For?: A Modest Proposal | IndieBound.org". www.indiebound.org. Retrieved October .
^ 'We need to eat the babies!' Climate activist confronts AOC at New York town hall, retrieved October .
^ Malaea, Marika ( October ). "'Eat the babies!': Twitter reacts to a surprise ending to the Alexandria Ocasio-Cortez town hall meeting". Newsweek. Retrieved October .
References

Baker, Donald C ( ), "Tertullian and Swift's A Modest Proposal", The Classical Journal, : –
Johnson, James William ( ), "Tertullian and A Modest Proposal", Modern Language Notes, The Johns Hopkins University Press, ( ): – , doi: . / , JSTOR (subscription needed)
Landa, Louis A ( ), "A Modest Proposal and Populousness", Modern Philology, ( ): – , doi: . /
Phiddian, Robert ( ), "Have You Eaten Yet? The Reader in A Modest Proposal", SEL: Studies in English Literature – , Rice University, ( ): – , doi: . / , hdl: / , JSTOR
Smith, Charles Kay ( ), "Toward a Participatory Rhetoric: Teaching Swift's Modest Proposal", College English, National Council of Teachers of English, ( ): – , doi: . / , JSTOR
Wittkowsky, George ( ), "Swift's Modest Proposal: The Biography of an Early Georgian Pamphlet", Journal of the History of Ideas, University of Pennsylvania Press, ( ): – , doi: . / , JSTOR

External links

Wikisource has original text related to this article: A Modest Proposal
A Modest Proposal (CELT)
A Modest Proposal (Gutenberg)
A Modest Proposal – annotated text aligned to Common Core Standards
A Modest Proposal public domain audiobook at LibriVox
A Modest Proposal, BBC Radio In Our Time with Melvyn Bragg
"A Modest Proposal for preventing the children of poor people from being a burthen to their parents or the country, and for making them beneficial to the publick." The third edition, Dublin, printed: and reprinted at London, for Weaver Bickerton, in Devereux-Court near the Middle-Temple.
Proposal to Eat the Children – a short film based upon Swift's essay.
Reflections on Archival User Studies | Rhee | Reference & User Services Quarterly

Reflections on Archival User Studies

Hea Lim Rhee

Abstract

This study is the first to focus on how developments in research trends, technology, and other factors have changed archival user studies. How have they changed in the past thirty years? How have they been conducted? This study examines and analyzes the US and Canadian literature on archival user studies to trace their past, characterize their present, and uncover the issues and challenges facing the archival community in conducting user studies. It discusses findings and gives suggestions for further archival user studies.
DOI: https://doi.org/ . /rusq. n .

News: Everybody hates Chia, DeFi rugpull, China versus mining, China versus crypto – Attack of the 50 Foot Blockchain

May – by David Gerard

You can support my work by signing up for the Patreon — $ or $ a month is like a few drinks down the pub while we rant about cryptos once a month. It really does help. [Patreon]

The Patreon also has a $ /month corporate tier — the number is bigger on this tier, and will look more impressive on your analyst newsletter expense account. [Patreon]

And tell your friends and colleagues to sign up for this newsletter by email!
[scroll down, or click here]

Below: a screenshot of me in Dead Man's Switch, the documentary about the collapse of Quadriga, looking like a skinhead thug crammed into a suit. "Ew see guv oi fink we need a wurd abaht dese 'criptose' yuse gasbaggin on abaht"

Chia petard

This blog lives on Hetzner Cloud. I was already a happy customer, and now I'm an even happier one — Hetzner's told crypto miners and Chia farmers to just bugger off. [Twitter; Twitter] Rough translation:

Yes, it's true, we have extended the terms and conditions and banned crypto mining. We have received many orders for our servers with large hard drives. However, large storage servers are increasingly being rented for this [mining]. This leads to problems with bandwidth on the storage systems. With Chia mining, there is also the problem that the hard drives are extremely stressed by the many read and write cycles, and will break.

The aggrieved replies from never-customers are the usual coinerism. Particular points to the guy who thought that renting something meant he had the absolute right to thrash it to death. Meanwhile, I'm told that bids for bulk storage on Hetzner are already over three times what they were just a few months ago. [Twitter]

To "farm" Chia, you plot and save as many bingo cards as you can. The system calls a number; if you have the right bingo card, you win. You compete by holding petabytes of bingo cards, and writing more as fast as possible. Holding the bingo cards triples the price of large hard disks, and writing the bingo cards burns out an SSD in six weeks instead of ten years.

Chia was known to be ridiculous in , because it was stupidly obvious that Amazon Web Services would beat all comers at masses of raw storage space. [DSHR, ] In , Amazon Web Services beats all comers at masses of storage space. So AWS China had, for a short time, a page offering to rent you space for Chia farming. The enterprising marketer responsible for this suggested using an "i .xlarge" high-I/O server ( CPUs, GB RAM, Gbit networking) with a . TB SSD for plotting, and bulk storage on S3 to store your completed plots. They also gave basic set-up instructions for Chia farming. The page disappeared in short order — but an archive exists. [The Block; AWS, archive, in Chinese; CoinDesk]

Bram Cohen, the inventor of Chia, is striking back at the "fashionable FUD" that Chia trashes hard drives! He starts a Twitter thread by saying that the claim is false — though by the end of the thread, he admits it's true. But it's your own fault for using consumer SSDs, because the rule in crypto is always to blame the user. [Twitter thread] Never mind that the Chia FAQ still tells people they can farm Chia on a desktop computer (likely SSD), laptop (likely SSD) or a mobile phone. [Chia, archive]

Decentralisation by "proof of X" will always be an engine of pointless destruction. It's pointless because decentralisation always recentralises — because centralisation is more financially efficient. "Decentralisation" only exists as a legal construct — "can't sue me bro" — and not at all as a functional description. Bitcoin mining is completely centralised. Ethereum mining is completely centralised, and the network has a hard dependency on ConsenSys. Chia was launched centralised, because Chia Network recruited large crypto mining companies before launch. "Decentralisation" is chasing a phantom. There is not a country's worth of CO2 value in "decentralisation" as a legal construct. Saying that you can see hypothetical value in "decentralisation" is a grossly insufficient excuse for the destruction it observably results in.

Update: my piece on Chia for Foreign Policy is just out!

The blockchain is just a human centipede for math — rstevens 🐳💨 (@rstevens) May

Q. What do you call unsmokeable mushrooms? A. Non-tokeable fungi

NFTs are the same grift as ICO tokens, altcoins and Bitcoin before them: invent a new form of magic bean, sell it for actual money.
Frequently the same grifters, too. If anyone ever tells you NFTs are new: in , the French neo-avant-garde artist Yves Klein began dealing in what he declared to be Zones of Immaterial Pictorial Sensibility. In exchange for a sum of solid gold, Klein would imbue a patch of thin air with his artistic aura and provide a receipt. One such "zone" was bequeathed to the Los Angeles County Museum of Art, where it exists only as a photograph of the transaction, taken as the receipt was set ablaze and half the gold tossed into the Seine. [Guardian]

There's more detail on musician Imogen Heap's planned NFT collection. She's going to use a ton of electricity, and pump out kilograms of carbon dioxide, to … buy carbon credits. [Dezeen] The article says "this helped to remove a total of tonnes of carbon dioxide from the atmosphere." This is true only for redefinitions of "remove" that are so weasely, you'd think they were on a blockchain. Carbon credits don't remove a damn thing, and it's a lie to claim they do. It's a bad excuse for doing a bad thing that shouldn't be done in the first place. [Greenpeace]

Dapper Labs has been sued by Rosen Law Firm on behalf of plaintiff Jeeun Friel and others, alleging that NBA Top Shot Moments are unregistered securities. Dapper issued Top Shots, Dapper runs the marketplace, Dapper controls everything about the market, and the company is notoriously slow at giving people their payouts. CoinDesk has the complaint. [CoinDesk]

Dominic Cummings, the poundshop Rasputin at the heart of British politics for most of and , is offering to do an NFT of documents he has to submit to a parliamentary committee. I'm honestly surprised he wasn't deep in crypto already, and editing past blog posts to say how he gave Satoshi the idea for BitGold. [FT, paywalled]

Newsy: NFT art auctions have a piracy problem. Includes bits from me. [WPTV; YouTube]

MetaKovan's B. token scheme sold shares in a pile of Beeple NFTs, including the $ million JPEG.
it’s not going well so far. [artnet] (“non-tokeable fungi” joke courtesy daniel dern, via ingvar.) [file ]   this is just to say i have attached an nft to the plums that were in the ice box (and which in many senses, still are) forgive me can you help me pay the electric bill — crowsa luxemburg (@quendergeer) april ,   it has been [ ] days since the last defi rugpull. defi is a decentralised finance protocol running on the binance smart chain — a private permissioned ethereum instance run by popular crypto casino binance to attract defi to a blockchain that isn’t clogged to uselessness, the way the main ethereum public chain is. today, we are all defi , as the site puts up an important customer service message: “we scammed you guys and you cant do s—t about it  ha ha. all you moon bois have been scammed and you cant do s—t about it.” a heartwarming step up from just changing the site to the word “penis.” they took $ million in users’ cryptos with them. [defi , archive] reddit users on /r/defi warned about the protocol previously — “they are either a scam, or they are completely incompetent” — but the subreddit moderator took care to remove it. the subreddit has seen no posts in the past two months — or none that were left up, at least. [twitter; reddit] defi lender blockfi has self-rugpulled. in blockfi’s march promotion, the company ran a giveaway where they would give customers a few dollars’ bonus for sufficient trading volumes — but accidentally gave them a few bitcoins’ worth instead. blockfi reversed the transactions, but sent legal threats to customers who had already withdrawn the bonus. you don’t get that sort of customer service for free. [coindesk]   the business model of crypto is to provide a platform for crooks to scam muppets without running the risk of jail time. few understand this. https://t.co/vfeyosrkpe — trolly🐴 mctrollface 🌷🥀💩 (@tr llytr llface) may ,   bitcoin in the enterprise the health service executive in ireland has been ransomwared.
[cnbc; rte] the same attack hit tusla, the irish child and family agency, whose systems are linked to the hse’s. [rte] toshiba tec, meanwhile, refuses to accept that crypto is the future of payments! the toshiba subsidiary told dark side, the ransomware gang that shut down colonial pipeline, to just bugger off. [cnbc] luddites in hospitals in waikato, new zealand also refuse to embrace bitcoin as a payment platform — they won’t be paying the ransomware either. [stuff] insurer axa halts new policies to reimburse ransomware payments in france — because the authorities told them to stop directly motivating ransomware. [abc news] how a cybernews reporter applied for a job with a ransomware gang. [cybernews] sophos has published its report on the state of ransomware. [sophos]   art. pic.twitter.com/ q urxwddv — kenneth finnegan (@kwf) may ,   i heard it on the blockchain good news for ethereum in the enterprise — it’s evolved beyond the capabilities of microsoft azure! azure blockchain as a service (baas) is shutting down in september ; any customers it still has have been invited to move to consensys. [zdnet; microsoft] is blockchain ready for the enterprise? lol, no. the bit on the decentralized identifiers standard is hilarious — they wrote the standard, but fobbed the hard bit off onto another standards committee. [blog post] enterprise blockchain was always a proxy for the price of bitcoin. it was a way of saying “bitcoin” without touching a bitcoin. this was cool when number was going up in , and not so cool when it wasn’t. not a single one of these projects ever had a use case. not a single one, ever, did any job better than existing technologies, rather than worse. “smart contracts” are far less impressive when you realise they’re literally just “database triggers” or “stored procedures,” and that there are excellent reasons we try not to write code right there in the database in actual enterprise computing.   
using a blockchain when a database would work pic.twitter.com/uswvom p o — nathaniel whittemore (@nlw) may ,   baby’s on fire china has announced, yet again, that it’s kicking out the crypto miners — to “crack down on bitcoin mining and trading behavior, and resolutely prevent the transmission of individual risks to the social field,” according to the financial stability and development committee of china’s state council. the people’s bank of china hates crypto a whole lot, and the chinese government’s been trying to push the crypto miners out since — but perhaps they’ll make it stick this time. [twitter; reuters] inner mongolia has set up a hotline to report suspected crypto mining. this is the sort of regulation the world needs. inner mongolia has been told by beijing to cut its co production considerably, and they’d rather use their quota for steel and similar useful things. [twitter; sixth tone; ft, paywalled] i am told that miners are calling around trying to get the hell out of china asap. mining facilities are set up in containers to be moved around quickly, so this isn’t hard. the hard part is that bitcoin now uses as much electricity as the netherlands. where do you put a medium-sized country’s worth of electricity usage at short notice? bitcoin is green, hypothetically! in real life, here’s a gas-powered plant that restarted just to mine bitcoins. [architect’s newspaper, archive] the state of new york is suggesting banning bitcoin mining — new operations would be frozen, pending environmental review. [coindesk] bankers have heard bitcoiners’ concerns about all the energy banks use — the bank of italy notes specifically that the target instant payment settlement (tips) system uses a tiny fraction of the energy of bitcoin. [bank of italy] the european central bank describes bitcoin’s “exorbitant carbon footprint” as “grounds for concern.” [ecb] the financial times has a detailed writeup of bitcoin’s dirty energy problem.
[ft, paywalled] nicehash suspends all withdrawals due to a “security incident”. funds are safe, i’m sure. [nicehash] the wyoming blockchain grifters are at it again. david dodson says: burn electricity for wyoming’s prosperity! he quotes me saying “bitcoin is literally anti-efficient,” but he puts this forward as a point in its favour. if coiners understood externalities, they wouldn’t be coiners. at least he spelt attack of the foot blockchain  right. [wyofile]   any defense of bitcoin inevitably involves an argument where no sentence has anything to do with the proceeding sentence. at its most advanced levels, even clauses stop agreeing with each other https://t.co/x zvcpppye — gorilla warfare (again) (@menshevikm) april ,   lie dream of a casino soul crypto exchanges are already forbidden in china. so instead they run “over the counter” desks, matching buyers and sellers, which is different because, uh, reasons. there’s a fresh rumour that china is closing down, or has closed down, okex and huobi’s otc desks. chinese usdt holders are also rumoured to be dumping their tethers. in probably-related news, china has also placed even tighter restrictions on dealing in cryptos for actual money. [reuters] turkey has put crypto exchanges under masak, its financial regulator. the exchanges are daunted at the work required not to be incompetent money-laundering chop shops. turkey needed to put in the new rules to follow fatf guidance on virtual asset service providers, but having not one but two exchanges messily implode just recently was probably quite motivating. [resmî gazete, pdf, in turkish; decrypt; reuters] thailand mandates improved customer service for crypto traders! in-person id checks will now be required to get a new account at an exchange. [bangkok post] the irs’ plans to subpoena the kraken crypto exchange about its users have been approved by the court. 
[department of justice] the irs and the department of justice are just seeking information from binance holdings ltd, you understand. binance has not, as yet, been specifically accused of wrongdoing. [bloomberg] the us treasury has called for crypto transfers of value greater than $ , to be reported to the irs. [bloomberg]   decided to put my savings into cryptcurrency pic.twitter.com/ awicacpn — matt round (@mattround) may ,   things happen you can browse in absolute privacy on the tor network! except that last year, one in four tor exit nodes was compromised — the attackers were replacing bitcoin addresses in bitcoin mixer web pages with the attackers’ address. the attacker group is still at it — last year they had % of nodes, this year they have %. [the record] two years after singulardtv shut down breaker mag, the site archive has finally rugpulled — all article urls give “ not found” or just don’t load. i’m pretty sure most of it is on the internet archive. [breaker, archive of may ] the suit in which dave kleiman’s estate is suing craig wright over the bitcoins that wright claimed he mined with kleiman is finally going to trial, starting on november . [order, pdf] goldman sachs has finally reopened a very limited crypto trading desk, apparently due to client demand. the desk is dealing only in futures and non-deliverable trades. it has executed at least two trades. [ft, paywalled] institutional investors speak on bitcoin! “our stance with clients is the -foot pole rule: stay away from it.” [ft, paywalled] the people’s bank of china has run another dc/ep trial in shenzhen, and the users respond! with a resounding “meh.” again. it still offers nothing to end-users over alipay or wechatpay, as they already found in shenzhen in november . [bloomberg] robinhood appears to procure its dogecoin from binance, for what that’s worth. [twitter] is there anything bitcoin can’t do?
in knoxville, tennessee, a user applies bitcoin to marriage counselling — and it’s transparent and cryptographically verifiable! specifically, nelson paul replogle paid a hitman in bitcoin to kill his wife, and the fbi traced the transaction through coinbase to his ip. replogle is now facing murder-for-hire charges. [wate tv, archive] bitcoins only get you out of divorce settlements if you never spend them: “courts have the power to re-open divorce settlements years afterwards if non-disclosure can be proven; if you find out five years down the line that your ex has made a big purchase from unknown funds, it may be possible to re-open a previous divorce settlement.” [ft, paywalled]   i don’t think it’s enough of a pyramid scheme, which is why i encourage crypto enthusiasts to buy my “grow your coin” intro course $ . . if you bring in three friends we’ll throw in our intermediate course for % off—then, may we also interest you in our compound just by alb https://t.co/gcczcbnvdi — hannah gais (@hannahgais) may ,   hot takes frances coppola on tether’s smoke and mirrors: “it’s not reserves we should be worrying about, it’s capital. and the total lack of transparency.” [blog post] charlie munger and warren buffett are not fans of bitcoin. “i don’t welcome a currency that’s so useful to kidnappers and extortionists and so forth, nor do i like just shuffling out of your extra billions of billions of dollars to somebody who just invented a new financial product out of thin air. i think i should say modestly that the whole damn development is disgusting and contrary to the interests of civilization.” [cnbc]   instead of enrolling in the u of m's online cryptocurrency course, try this: • send me $ k of your parents money. • i'll send them an email saying you're very smart and totally weren't playing video games the whole time.
pic.twitter.com/qrr il v p — lemon 🍋 (@ahoylemon) may ,   living on video expert on facebook’s latest digital currency attempt — me on ntd, in my capacity as the guy who literally wrote the book on libra. [ntd] i spoke to radio free asia about the bitcoin crash. the google translation is usable. [radio free asia, translation] the daily mail quotes me, accurately, on bitcoin’s environmental impact. [daily mail, archive] libra shrugged, reviewed by bill ryan: battling megaliths: facebook and the american government battle over digital currency and the future of money. [medium] this harry & paul sketch from is not about crypto and tether, except it totally is: [youtube]     the underside of the washington st bridge has taken a strong anti-nft stance pic.twitter.com/nsrjpllxg — karl stomberg (@kfosterstomberg) april ,   anyone: *mentions financial derivatives* me, nodding: d$/dx — dr. anna hughes (@annaghughes) april ,   so finally @saifedean's book landed on my desk. and i have to say i'm now a convert. any asset with a scarce supply, not controlled by humans, must inevitably become the new world reserve currency. if you don't get this please learn monetary economics. pic.twitter.com/xsik ehrb — bernhard 'game theory optimal trading' mueller (@muellerberndt) may ,   meanwhile, the iraqi dinar had a bad , but has been steady this whole year. pic.twitter.com/sswm m pz — travis view (@travis_view) may ,   tell me elon's tweeting again without telling me elon's tweeting again pic.twitter.com/femro gsm — kyla (@kylascan) may ,   #dogelon 😂 pic.twitter.com/wlu b gdh — king bach (@kingbach) may ,   your subscriptions keep this site going. sign up today! 
comments on “news: everybody hates chia, defi rugpull, china versus mining, china versus crypto”

austin george loomis says: “in exchange for a sum of solid gold, klein would imbue a patch of thin air with his artistic aura and provide a receipt.” for everyone who thought michael craig-martin’s an oak tree was too tangible.

david gerard says: notice he only threw half the gold into the seine

ingvar says: original source (or, at least, where i saw it) was on file , attributed to daniel dern. but, as you were writing about nfts at the time, it was imperative to pass it on.

david gerard says: credit updated!
papertrail documentation: ruby on rails

to send ruby on rails request logs, either: use papertrail’s tiny remote_syslog daemon to read an existing log file (like production.log), or change rails’ environment config to use the remote_syslog_logger gem. we recommend remote_syslog because it works for other text files (like nginx and mysql), has no impact on the rails app, and is easy to set up. also see controlling verbosity.

send log file with remote_syslog

install remote_syslog: download the current release. to extract it and copy the binary into a system path, run:

    $ tar xzf ./remote_syslog*.tar.gz
    $ cd remote_syslog
    $ sudo cp ./remote_syslog /usr/local/bin

rpm and debian packages are also available.

configure: paths to log file(s) can be specified on the command line, or save log_files.yml.example as /etc/log_files.yml and edit it to define: the path to your rails log file (such as production.log) and any other log file(s) that remote_syslog should watch, and the destination host and port provided under log destinations. if no destination port was provided, set the host to logs.papertrailapp.com and remove the port config line to use the default port. the remote_syslog readme has complete documentation and more examples.

start: start the daemon:

    $ sudo remote_syslog

logs should appear in papertrail within a few seconds of being written to the on-disk log file. problem? see troubleshooting.
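as an illustration, a minimal /etc/log_files.yml might look like the following. this is a sketch following the remote_syslog readme's schema; the paths, host, and port shown here are placeholders, so substitute your own log files and the destination shown under log destinations:

```yaml
files:
  - /var/www/example_app/log/production.log   # placeholder path to your rails log
  - /var/log/nginx/error.log                  # any other text file to watch
destination:
  host: logs.papertrailapp.com                # host from log destinations
  port: 12345                                 # placeholder; port from log destinations
  protocol: udp
```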
remote_syslog requires read permission on the log files it is monitoring.

auto-start: remote_syslog can be automated to start at boot using init scripts (examples) or your preferred daemon invocation method, such as monit or god. see remote_syslog --help or the full readme on github.

troubleshooting: see remote_syslog troubleshooting.

send events with the remote_syslog_logger gem

install remote_syslog_logger: the easiest way to install remote_syslog_logger is with bundler. add remote_syslog_logger to your gemfile. if you are not using a gemfile, run:

    $ gem install remote_syslog_logger

configure rails environment: change the environment configuration file to log via remote_syslog_logger. this is almost always in config/environment.rb (to affect all environments) or config/environments/<environment>.rb, such as config/environments/production.rb (to affect only a specific environment). add this line:

    config.logger = RemoteSyslogLogger.new('logsN.papertrailapp.com', XXXXX)

you can also specify a program name other than the default rails:

    config.logger = RemoteSyslogLogger.new('logsN.papertrailapp.com', XXXXX, :program => "rails-#{RAILS_ENV}")

where logsN and XXXXX are the name and port number shown under log destinations. alternatively, to point the logs to your local system, use localhost instead of logsN.papertrailapp.com, use the local syslog port, and ensure that the system’s syslog daemon is listening for udp on the loopback address. a basic rsyslog config would consist of the following lines in /etc/rsyslog.conf (514 is the standard syslog udp port):

    $ModLoad imudp
    $UDPServerRun 514

verify configuration: to send a test message, start script/console in an environment which has the syslog config above (for example, RAILS_ENV=production script/console). run:

    RAILS_DEFAULT_LOGGER.error "Salutations!"

the message should appear in the system’s message history within a minute.

verbosity: for more information on improving the signal:noise ratio, see the dedicated help article.
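for intuition, here is a rough pure-ruby sketch of what a remote syslog logger does under the hood. this is illustrative only, not the remote_syslog_logger gem's actual code, and the function name send_syslog is made up for this example: it formats an rfc 3164-style line and sends it over udp.

```ruby
require 'socket'

# illustrative sketch only -- not the remote_syslog_logger gem's code.
# formats an rfc 3164-style syslog line and sends it over udp.
def send_syslog(host, port, program, message, severity = 3) # 3 = "err"
  pri = (1 * 8) + severity # PRI = facility 1 ("user") * 8 + severity
  timestamp = Time.now.strftime('%b %e %H:%M:%S')
  line = "<#{pri}>#{timestamp} #{Socket.gethostname} #{program}: #{message}"
  UDPSocket.new.send(line, 0, host, port)
  line
end
```

pointing this at a papertrail log destination (or at localhost with the rsyslog config shown in this article) would deliver the message the same way the gem does for every logger call.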
Lograge

We recommend using lograge in lieu of Rails' standard logging. Add lograge to your Gemfile and smile.

Log user ID, customer ID, and more: use lograge to include other attributes in log messages, like a user ID or request ID. The README has more. Here's a simple example which captures a few request attributes:

    class ApplicationController < ActionController::Base
      before_filter :append_info_to_payload

      def append_info_to_payload(payload)
        super
        payload[:user_id] = current_user.try(:id)
        payload[:host] = request.host
        payload[:source_ip] = request.remote_ip
      end
    end

The attributes are logged in production.rb with this block:

    config.lograge.custom_options = lambda do |event|
      event.payload
    end

The payload hash populated during the request above is automatically available as event.payload, and the payload automatically contains the params hash as params. Here's another production.rb example which only logs the request params:

    config.lograge.custom_options = lambda do |event|
      params = event.payload[:params].reject do |k|
        ['controller', 'action'].include? k
      end
      { "params" => params }
    end

Troubleshooting: colors and/or ANSI character codes appear in my log messages. By default, Rails generates colorized log messages for non-production environments and monochromatic logs in production. Papertrail renders any ANSI color codes it receives (see More colorful logging with ANSI color codes), so you can decide whether to enable colorization for any environment. To enable or disable ANSI logging, change this option in your environment configuration file (such as config/environment.rb or config/environments/staging.rb). The example below disables colorized logging.

Newer Rails versions:

    config.colorize_logging = false

Older Rails versions:

    config.active_record.colorize_logging = false

See: http://guides.rubyonrails.org/configuring.html#rails-general-configuration
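The params-filtering lambda above can be exercised outside Rails. Here is a plain-Ruby sketch in which the payload is a stand-in hash rather than a real ActiveSupport::Notifications event, so the behavior can be seen in isolation:

```ruby
# Stand-in for lograge's custom_options hook: reject Rails routing keys
# ('controller' and 'action') from the params hash and log only what remains.
custom_options = lambda do |payload|
  params = payload[:params].reject { |k| ['controller', 'action'].include?(k) }
  { 'params' => params }
end

payload = { params: { 'controller' => 'users', 'action' => 'show', 'id' => '42' } }
custom_options.call(payload)  # => { 'params' => { 'id' => '42' } }
```

The same reject-and-wrap logic is what the production.rb block performs on each request's event.payload.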
Kaseya VSA Limited Disclosure | DIVD CSIRT

DIVD CSIRT: making the internet safer through coordinated vulnerability disclosure
Kaseya VSA Limited Disclosure

Jul, by Frank Breedijk

Why we are only disclosing limited details on the Kaseya vulnerabilities

Last weekend we found ourselves in the middle of a storm: a storm created by the ransomware attacks executed via Kaseya VSA, using a vulnerability which we had confidentially disclosed to Kaseya, together with six other vulnerabilities. Ever since we released the news that we had indeed notified Kaseya of a vulnerability used in the ransomware attack, we have been getting requests to release details about these vulnerabilities and the disclosure timeline. In line with the guidelines for coordinated vulnerability disclosure, we have not disclosed any details so far. And while we feel it is time to be more open about this process and our decisions regarding this matter, we will still not release the full details.

Why the secrecy?

As the ransomware attack using Kaseya VSA software has shown, the effects of a malicious actor knowing the full details of a vulnerability can be devastating. This immediately poses a dilemma to anybody who discovers a critical vulnerability in a critical piece of software: do we disclose the details or not? Let's use an analogy. Say a security researcher discovers a vulnerability in a high-end car: when you kick the left rear bumper in just the right way, the car doors open and the engine starts. What should the researcher do? Tell everybody, tell all the owners of this type of car, or inform the manufacturer so it can recall and fix the cars? If the full details are made public, it is evident that many cars will soon be stolen. If you inform the owners, this will likely happen too; the chances of the details remaining secret are slim if you inform a broad audience. Even if you limit the details to "a security issue involving the bumper", you might tip off the wrong people.
If you tell the manufacturer, there is a good chance that it comes up with a fix before large-scale car thefts start happening, and you can consider whether the owners need to be told to keep their cars behind closed doors in the meantime.

How does this relate to Kaseya VSA?

When we discovered the vulnerabilities in early April, it was evident to us that we could not let these vulnerabilities fall into the wrong hands. After some deliberation, we decided that informing the vendor and awaiting the delivery of a patch was the right thing to do. We hypothesized that, in the wrong hands, these vulnerabilities could lead to the compromise of large numbers of computers managed by Kaseya VSA. As we stated before, Kaseya's response to our disclosure has been on point and timely, unlike that of other vendors we have previously disclosed vulnerabilities to. They listened to our findings and addressed some of them by releasing a patch resolving a number of these vulnerabilities, followed by a second patch resolving even more. We were in contact with Kaseya ahead of the release of both these patches, allowing us to validate that these vulnerabilities had indeed been resolved by the patch in development. Unfortunately, the worst-case scenario came true on Friday the 2nd of July. Kaseya VSA was used in an attack to spread ransomware, and Kaseya was compelled to use the nuclear option: shutting down their Kaseya Cloud and advising customers to turn off their on-premise Kaseya VSA servers, a message that unfortunately arrived too late for some of their customers. We later learned that one of the two vulnerabilities used in the attack was one we had previously disclosed to Kaseya.

What can we tell?

In this blog post and the corresponding DIVD case we publish the timeline and limited details of the vulnerabilities we notified Kaseya of.

Full disclosure?
Given the serious nature of these vulnerabilities and the obvious consequences of abuse of Kaseya VSA, we will not disclose the full details of the vulnerabilities until such time as Kaseya has released a patch and this patch has been installed on a sufficient number of systems, something for which we have the monitoring scripts. In the past few days we have been working with Kaseya to make sure customers turn off their systems, by tipping them off about customers that still have systems online, and we hope to be able to continue working together to ensure that their patch is installed everywhere. We have no indication that Kaseya is hesitant to release a patch; instead, they are still working hard to make sure that, after their patch, the system is as secure as possible, to avoid a repeat of this scenario. Therefore we do not feel the need to set any kind of deadline for full disclosure at this point in time. A properly patched and secure Kaseya VSA is in the best interest of the security of Kaseya's customers and the internet at large.

The vulnerabilities

We notified Kaseya of the following vulnerabilities:

- CVE- - : a credentials leak and business logic flaw; resolution in progress.
- CVE- - : an SQL injection vulnerability; resolved in the May patch.
- CVE- - : a remote code execution vulnerability; resolved in the April patch.
- CVE- - : a cross-site scripting vulnerability; resolution in progress.
- CVE- - : a 2FA bypass; resolution in progress.
- CVE- - : a local file inclusion vulnerability; resolved in the May patch.
- CVE- - : an XML external entity vulnerability; resolved in the May patch.

Timeline

- Apr: research starts.
- Apr: DIVD starts scanning internet-facing implementations.
- Apr: start of the identification of possible victims (with internet-facing systems).
- Apr: Kaseya informed.
- Apr: vendor starts issuing patches, resolving CVE- - .
- May: vendor issues another patch, resolving CVE- - , CVE- - , and CVE- - .
- Jun: DIVD CSIRT hands over a list of identified Kaseya VSA hosts to Kaseya.
- Jun: a patch on SaaS, resolving CVE- - and CVE- - .
- Jul: DIVD responds to the ransomware by scanning for Kaseya VSA instances reachable via the internet and sending out notifications to network owners.
- Jul: limited publication, months after the start of the research.

AOL (from Wikipedia, the free encyclopedia)

American web portal and online service provider. For other uses, see AOL (disambiguation).

- Formerly: Control Video Corporation; Quantum Computer Services; America Online; AOL Time Warner
- Type: Subsidiary
- Founders: Marc Seriff, Steve Case, Jim Kimsey, William von Meister
- Headquarters: Broadway, New York City, United States
- Area served: Worldwide
- Services: Web portal and online services
- Parents: AOL Time Warner; Verizon Communications; Oath; Verizon Media (present)
- Website: www.aol.com

AOL (stylized as Aol., formerly a company known as AOL Inc. and originally known as America Online) is an American web portal and online service provider based in New York City. It is a brand marketed by Verizon Media. The service traces its history to an online service known as PlayNET. PlayNET licensed its software to Quantum Link (Q-Link), which later went online. A new IBM PC client launched afterwards, eventually renamed America Online. AOL grew to become the largest online service, displacing established players like CompuServe and The Source, and came to have about three million active users. AOL was one of the early pioneers of the internet and the most recognized brand on the web in the United States.
It originally provided a dial-up service to millions of Americans, as well as a web portal, e-mail, instant messaging and, later, a web browser following its purchase of Netscape. At the height of its popularity, it purchased the media conglomerate Time Warner in the largest merger in U.S. history. AOL rapidly shrank thereafter, partly due to the decline of dial-up and the rise of broadband. AOL was eventually spun off from Time Warner, with Tim Armstrong appointed the new CEO. Under his leadership, the company invested in media brands and advertising technologies. AOL was later acquired by Verizon Communications, which subsequently announced it would sell Yahoo and AOL to Apollo.

History

Early years

AOL began as a short-lived venture called Control Video Corporation (CVC), founded by William von Meister. Its sole product was an online service called GameLine for the Atari video game console, conceived after von Meister's idea of buying music on demand was rejected by Warner Bros. Subscribers bought a modem from the company and paid a one-time setup fee.
GameLine permitted subscribers to temporarily download games and keep track of high scores, for a fee per game. The telephone connection then disconnected, and the downloaded game would remain in GameLine's Master Module, playable until the user turned off the console or downloaded another game. In January, Steve Case was hired as a marketing consultant for Control Video on the recommendation of his brother, investment banker Dan Case. In May, Jim Kimsey became a manufacturing consultant for Control Video, which was near bankruptcy. Kimsey was brought in by his West Point friend Frank Caufield, an investor in the company. Von Meister left the company shortly thereafter.

Quantum Computer Services, an online services company, was then founded by Jim Kimsey from the remnants of Control Video, with Kimsey as chief executive officer and Marc Seriff as chief technology officer. The technical team consisted of Marc Seriff, Tom Ralston, Ray Heinrich, Steve Trus, Ken Huntsman, Janet Hunter, Dave Brown, Craig Dykstra, Doug Coward, and Mike Ficco. Case was later promoted to executive vice-president, and Kimsey began to groom Case to take over the role of CEO, which he did when Kimsey retired. Kimsey changed the company's strategy and launched a dedicated online service for Commodore computers, originally called Quantum Link ("Q-Link" for short). The Quantum Link software was based on software licensed from PlayNET, Inc. (founded by Howard Goldberg and Dave Panzl). The service was different from other online services, as it used the computing power of the Commodore and the Apple II rather than just a "dumb" terminal: it passed tokens back and forth and provided a fixed-price service tailored for home users. In May, Quantum and Apple launched AppleLink Personal Edition for Apple II and Macintosh computers. In August, Quantum launched PC Link, a service for IBM-compatible PCs developed in a joint venture with the Tandy Corporation.
After the company parted ways with Apple in October, Quantum changed the service's name to America Online. Case promoted and sold AOL as the online service for people unfamiliar with computers, in contrast to CompuServe, which was well established in the technical community. From the beginning, AOL included online games in its mix of products; many classic and casual games were included in the original PlayNET software system. In its early years the company introduced many innovative online interactive titles and games, including: the graphical chat environments Habitat and Club Caribe from LucasArts; the first online interactive fiction series, QuantumLink Serial by Tracy Reed; and Quantum Space, the first fully automated play-by-mail game.

Internet age, Time Warner merger

First AOL logo, as "America Online".

In February, AOL for DOS was launched using a GeoWorks interface, followed a year later by AOL for Windows. This coincided with growth in pay-based online services like Prodigy, CompuServe, and GEnie. The same period saw the introduction of an original Dungeons & Dragons title called Neverwinter Nights from Stormfront Studios, one of the first multiplayer online role-playing games to depict the adventure with graphics instead of text. In those early years, the average subscription lasted only a matter of months and accounted for a modest amount of total revenue. Advertisements invited modem owners to "Try America Online FREE", promising free software and trial membership. AOL eventually discontinued Q-Link and PC Link. In September, AOL added Usenet access to its features. This is commonly referred to as the "Eternal September", as Usenet's cycle of new users had previously been dominated by smaller numbers of college and university freshmen gaining access in September and taking a few weeks to acclimate.
This also coincided with a new "carpet bombing" marketing campaign by CMO Jan Brandt to distribute as many free trial AOL disks as possible through nonconventional distribution partners. At one point, half of the CDs produced worldwide had an AOL logo. AOL quickly surpassed GEnie, and by the mid-1990s it passed Prodigy (which for several years allowed AOL advertising) and CompuServe.

Over the next several years, AOL launched services with the National Education Association, the American Federation of Teachers, National Geographic, the Smithsonian Institution, the Library of Congress, Pearson, Scholastic, ASCD, NSBA, NCTE, Discovery Networks, Turner Education Services (CNN Newsroom), NPR, The Princeton Review, Stanley Kaplan, Barron's, Highlights for Kids, the U.S. Department of Education, and many other education providers. AOL offered the first real-time homework help service (the Teacher Pager; prior to this, AOL provided homework help bulletin boards), the first service by children, for children (Kids Only Online), the first online service for parents (the Parents Information Network), the first online courses, the first omnibus service for teachers (the Teachers' Information Network), the first online exhibit (Library of Congress), the first parental controls, and many other online education firsts.

AOL purchased the search engine WebCrawler but sold it to Excite the following year; the deal made Excite the sole search and directory service on AOL. After the deal closed, AOL launched its own branded search engine, based on Excite, called NetFind, later renamed AOL Search.

AOL charged its users an hourly fee until December, when the company changed to a flat monthly rate. During this time, AOL connections were flooded with users trying to connect, and many canceled their accounts due to constant busy signals.
A commercial was made featuring Steve Case telling people AOL was working day and night to fix the problem. Within three years, AOL's user base grew to millions of people. AOL was headquartered at Westwood Center Drive in the Tysons Corner CDP in unincorporated Fairfax County, Virginia, near the town of Vienna. Quickly running out of room for its network at the Fairfax County campus, AOL moved to AOL Way in Dulles, unincorporated Loudoun County, Virginia, to provide room for future growth. In a five-year landmark agreement with the most popular operating system, AOL was bundled with Windows software. The short-lived eWorld was also purchased by AOL. At one point, about half of all U.S. homes with internet access had it through AOL. During this time, AOL's content channels, under Jason Seiken, including news, sports, and entertainment, experienced their greatest growth as AOL became the dominant online service internationally, with millions of subscribers. In November, AOL announced it would acquire Netscape, best known for its web browser, in a major deal that closed the following March. Another large acquisition was that of MapQuest. As new broadband technologies were being rolled out around the New York City metropolitan area and the U.S., AOL and Time Warner announced plans to merge, forming AOL Time Warner, Inc. The terms of the deal called for AOL shareholders to own the majority of the new, combined company. The new company was led by executives from AOL, SBI, and Time Warner. Gerald Levin, who had served as CEO of Time Warner, was CEO of the new company. Steve Case served as chairman, J. Michael Kelly (from AOL) was the chief financial officer, and Robert W.
Pittman (from AOL) and Dick Parsons (from Time Warner) served as co-chief operating officers. Jonathan Miller later became CEO of AOL, and the following year AOL Time Warner dropped the "AOL" from its name. It was the largest merger in history when completed. The combined company's value fell sharply as markets repriced AOL's valuation as a pure internet firm more modestly when combined with the traditional media and cable business. This state didn't last long, and the company's value rose again within months. By the end of that year, however, the tide had turned against "pure" internet companies, with many collapsing under falling stock prices, and even the strongest companies in the field losing much of their market value. The decline continued, but even with the losses, AOL was among the internet giants that continued to outperform brick-and-mortar companies.

Along with the launch of AOL Optimized, AOL also made available the option of personalized greetings, which enabled users to hear their name while accessing basic functions and mail alerts, or while logging in or out. AOL broadcast the Live 8 concert live over the internet, and thousands of users downloaded clips of the concert over the following months. AOL later released AOL Safety & Security Center, a bundle of McAfee antivirus, CA anti-spyware, and proprietary firewall and phishing protection software. News reports identified companies such as Yahoo!, Microsoft, and Google as candidates for turning AOL into a joint venture; those plans were abandoned when it was revealed that Google would purchase a minority share of AOL.
Rebranding and decline

Former AOL logo.

AOL announced it was retiring the full name America Online; the official name of the service became AOL, and the full name of the Time Warner subdivision became AOL LLC. AOL then offered a new program called AOL Active Security Monitor, a diagnostic tool which checked the local PC's security status and recommended additional security software from AOL or Download.com. The program rated the computer on a variety of different areas of security and general computer health. Two months later, AOL released AOL Active Virus Shield. This software was developed by Kaspersky Lab; it was free and did not require an AOL account, only an internet email address. The ISP side of AOL UK was bought by Carphone Warehouse in October to take advantage of its LLU customers, making Carphone Warehouse the biggest LLU provider in the UK.

Chart: decline in AOL U.S. subscribers over time, with a significant drop in later quarters.

In August, AOL announced it would give away email accounts and software previously available only to its paying customers, provided the customer accessed AOL or AOL.com through a non-AOL-owned access method (otherwise known as "third-party transit", "bring your own access", or "BYOA"). The move was designed to reduce costs associated with the "walled garden" business model by reducing usage of AOL-owned access points and shifting members with high-speed internet access from client-based usage to the more lucrative advertising provider, AOL.com. The change from paid to free was also designed to slow the rate of members canceling their accounts and defecting to Microsoft Hotmail, Yahoo!, or other free email providers. The other free services included: AIM (AOL Instant Messenger); and AOL Video, which featured professional content and allowed users to upload videos as well.
They also included: AOL Local, comprising its CityGuide, Yellow Pages, and local search services, to help users find local information like restaurants, local events, and directory listings; AOL News; AOL My eAddress, a custom domain name for email addresses, with accounts that could be accessed in a manner similar to other AOL and AIM email accounts; and Xdrive, a service that allowed users to back up their files over the internet. Xdrive was acquired by AOL and later closed; it offered a free online file storage account to anyone with an AOL screen name, and also provided remote backup services and additional storage for a monthly fee.

Also that month, AOL informed its US customers it would be increasing the price of its dial-up access. The increase was part of an effort to migrate the service's remaining dial-up users to broadband, as the increased price was the same price it had been charging for monthly DSL access. However, AOL has since started offering its service at a lower monthly price for unlimited dial-up access. In November, Randy Falco succeeded Jonathan Miller as CEO. In December, AOL closed its last remaining call center in the United States, "taking the America out of America Online" according to industry pundits. Service centers based in India and the Philippines continue to this day to provide customer support and technical assistance to subscribers.

An AOL Mobile sign at GSMA, Barcelona, Spain.

In September, AOL announced it was moving one of its corporate headquarters from Dulles, Virginia, to New York City and combining its various advertising units into a new subsidiary called Platform A. This action followed several advertising acquisitions, most notably Advertising.com, and highlighted the company's new focus on advertising-driven business models.
AOL management stressed that "significant operations" would remain in Dulles, including the company's access services and modem banks. In October, AOL announced it would move one of its other headquarters from Loudoun County, Virginia, to New York City, while continuing to operate its Virginia offices. As part of the impending move to New York and the restructuring of responsibilities at the Dulles headquarters complex after the Reston move, AOL CEO Randy Falco announced plans to lay off thousands of employees worldwide by the end of the year, beginning "immediately". The end result was a substantial across-the-board layoff at AOL. Most compensation packages associated with the October layoffs included a minimum number of days of severance pay, part of which was given in lieu of the advance-notice requirement of the federal WARN Act. By November, AOL's customer base had been sharply reduced, just narrowly ahead of Comcast and AT&T Yahoo!. According to Falco, by December the large majority of accounts had converted from paid access to free access. In January, AOL announced the closing of one of its three northern Virginia data centers, Reston Technology Center, and sold it to CRG West. Time Warner CEO Jeff Bewkes then announced that Time Warner would split AOL's internet access and advertising businesses in two, with the possibility of later selling the internet access division. In March, AOL purchased the social networking site Bebo. In July, AOL announced it was shedding Xdrive, AOL Pictures, and BlueString to save on costs and focus on its core advertising business. AOL Pictures was terminated in December.
In October, AOL Hometown (a web hosting service for the websites of AOL customers) and the AOL Journals blog hosting service were eliminated.

As a digital media company

The AOL "eraser" logo.

Tim Armstrong, formerly with Google, was named chairman and CEO of AOL. Shortly thereafter, Time Warner announced it would spin off AOL as an independent company once Google's shares ceased at the end of the fiscal year. In November, AOL unveiled a sneak preview of a new brand identity, with the wordmark "Aol." superimposed onto canvases created by commissioned artists. The new identity, designed by Wolff Olins, was enacted across all of AOL's services on the date AOL traded independently for the first time since the Time Warner merger, on the New York Stock Exchange under the symbol AOL. In April, AOL announced plans to shut down or sell Bebo; in June, the property was sold to Criterion Capital Partners for an undisclosed amount. In December, AIM eliminated access to AOL chat rooms, noting a marked decline in patronage in recent months. Under Armstrong's leadership, AOL began taking steps in a new business direction, marked by a series of acquisitions.
In June, AOL announced the acquisition of Patch Media, a network of community-specific news and information sites focusing on individual towns and communities. In September, at the San Francisco TechCrunch Disrupt conference, AOL signed an agreement to acquire TechCrunch to further its overall strategy of providing premier online content. In December, AOL acquired About.me, a personal profile and identity platform, four days after the latter's public launch. In January, AOL announced the acquisition of the European video distribution network goviral. In March, AOL acquired HuffPost. Shortly after the acquisition was announced, Huffington Post co-founder Arianna Huffington replaced AOL content chief David Eun, assuming the role of president and editor-in-chief of the AOL Huffington Post Media Group. Shortly afterwards, AOL announced job cuts tied to the HuffPost acquisition. In September, AOL formed a strategic ad-selling partnership with two of its largest competitors, Yahoo and Microsoft, under which the three companies would begin selling inventory on each other's sites. The strategy was designed to help them compete with Google and ad networks.

In February, AOL partnered with PBS to launch Makers, a digital documentary series focusing on high-achieving women in male-dominated industries such as war, comedy, space, business, Hollywood, and politics. Subjects of Makers episodes have included Oprah Winfrey, Hillary Clinton, Sheryl Sandberg, Martha Stewart, Indra Nooyi, Lena Dunham, and Ellen DeGeneres. In March, AOL announced the acquisition of Hipster, a mobile photo-sharing app, for an undisclosed amount. In April, AOL announced a deal to sell patents to Microsoft.
the deal includes a "perpetual" license for aol to use these patents.[ ] in april, aol took several steps to expand its ability to generate revenue through online video advertising. the company announced it would offer gross rating point (grp) guarantees for online video, mirroring the tv ratings system and guaranteeing audience delivery for online video advertising campaigns bought across its properties.[ ] this announcement came just days before the digital content newfront (dcnf), a two-week event held by aol, google, hulu, microsoft, vevo and yahoo to showcase the participating sites' digital video offerings. the digital content newfronts were conducted in advance of the traditional television upfronts in hopes of diverting more advertising money into the digital space.[ ] on april , the company launched the aol on network, a single website for its video output.[ ] in february , aol reported its fourth quarter revenue of $ .  million, its first growth in quarterly revenue in years.[ ] in august , armstrong announced patch media would scale back or sell hundreds of its local news sites.[ ] not long afterwards, layoffs began, with up to out of , positions initially impacted.[ ] on january , , patch media was spun off, with majority ownership being held by hale global.[ ] by the end of , aol controlled . % of the global advertising market, well behind industry leader google's . %.[ ] on january , , aol acquired gravity, a software startup that tracked users' online behavior and tailored ads and content based on their interests, for $  million.[ ] the deal, which included roughly gravity employees and their personalization technology, was ceo tim armstrong's fourth largest deal since taking over the company in . later that year, aol also acquired vidible, which developed technology to help websites run video content from other publishers, and help video publishers sell their content to these websites.
the deal, which was announced december , , was reportedly worth roughly $  million.[ ] on july , , aol earned an emmy nomination for the aol original series, the future starts here, in the news and documentary category.[ ] this came days after aol earned its first primetime emmy award nomination for park bench with steve buscemi in the outstanding short form variety series category, which later won the award.[ ] created and hosted by tiffany shlain, the series focused on humans' relationship with technology and featured episodes such as the future of our species, why we love robots, and a case for optimism.

– : division of verizon

aol's silicon valley branch office.

on may , , verizon announced plans to buy aol for $ per share in a deal valued at $ .  billion. the transaction was completed on june . armstrong, who continued to lead the firm following regulatory approval, called the deal the logical next step for aol. "if you look forward five years, you're going to be in a space where there are going to be massive, global-scale networks, and there's no better partner for us to go forward with than verizon." he said. "it's really not about selling the company today.
it's about setting up for the next five to years."[ ] analyst david bank said he thought the deal made sense for verizon.[ ] the deal will broaden verizon's advertising sales platforms and increase its video production ability through websites such as huffpost, techcrunch, and engadget.[ ] however, craig moffett said it was unlikely the deal would make a big difference to verizon's bottom line.[ ] aol had about two million dial-up subscribers at the time of the buyout.[ ] the announcement caused aol's stock price to rise %, while verizon's stock price dropped slightly.[ ] shortly before the verizon purchase, on april , , aol launched one by aol, a digital marketing programmatic platform that unifies buying channels and audience management platforms to track and optimize campaigns over multiple screens.[ ] later that year, on september , aol expanded the product with one by aol: creative, which is geared towards creative and media agencies to similarly connect marketing and ad distribution efforts.[ ] on may , , aol reported its first-quarter revenue of $ .  million, $ .  million of which came from advertising and related operations, marking a % increase from q . over that year, the aol platforms division saw a % increase in revenue, but a drop in adjusted oibda due to increased investments in the company's video and programmatic platforms.[ ] on june , , aol announced a deal with microsoft to take over the majority of its digital advertising business. under the pact, as many as , microsoft employees involved with the business will be transferred to aol, and the company will take over the sale of display, video, and mobile ads on various microsoft platforms in nine countries, including brazil, canada, the united states, and the united kingdom. additionally, google search will be replaced on aol properties with bing—which will display advertising sold by microsoft. 
both advertising deals are subject to affiliate marketing revenue sharing.[ ][ ] on july , , aol received two news and documentary emmy nominations, one for makers in the outstanding historical programming category, and the other for true trans with laura jane grace, which documented the story of laura jane grace, a transgender musician best known as the founder, lead singer, songwriter and guitarist of the punk rock band against me!, and her decision to come out publicly and her overall transition experience.[ ] on september , , aol agreed to buy millennial media for us$  million.[ ] on october , , aol completed the acquisition.[ ] on october , , go , a free ad-supported mobile video service aimed at young adult and teen viewers that verizon owns and aol oversees and operates, launched its content publicly after months of beta testing.[ ][ ] the initial launch line-up included content from comedy central, huffpost, nerdist news, univision news, vice, espn and mtv.[ ] on january , , aol expanded its one platform by introducing one by aol: publishers, which combines six previously separate technologies to offer various publisher capabilities such as customizing video players, offering premium ad experiences to boost visibility, and generating large video libraries.[ ] the announcement was made in tandem with aol's acquisition of alephd, a paris-based startup focused on publisher analytics of ad price tracking based on historical data.[ ] aol announced alephd would be a part of the one by aol: publishers platform.[ ] on april , , aol acquired virtual reality studio ryot to bring immersive video and vr content to huffpost's global audience across desktop, mobile, and apps.[ ] in july , verizon communications announced its intent to purchase the core internet business of yahoo!. verizon tentatively planned to merge aol with yahoo into a new company called "oath inc.", which in january rebranded itself as verizon media.[ ] in april , oath inc.
sold moviefone to moviepass parent helios and matheson analytics.[ ][ ][ ] in november the huffington post was sold to buzzfeed in a stock deal.[ ]

–present: division of apollo

on may , , verizon announced it would sell aol and yahoo to apollo for $ billion.[ ]

products and services

content

as of , the following media brands became subsidiaries of aol's parent verizon media:[ ]

huffpost[ ][ ] (sold to buzzfeed in november )
engadget[ ]
autoblog[ ]
techcrunch[ ]
cambio[ ]

aol's content contributors consist of over , bloggers, including politicians, celebrities, academics, and policy experts, who contribute on a wide range of topics in the news.[ ] in addition to mobile-optimized web experiences, aol produces mobile applications for existing aol properties like autoblog, engadget, the huffington post, techcrunch, and products such as alto, pip, and vivv.

advertising

aol has a global portfolio of media brands and advertising services across mobile, desktop, and tv. services include brand integration and sponsorships through its in-house branded content arm, partner studio by aol, as well as data and programmatic offerings through its ad technology stack, one by aol. aol acquired a number of businesses and technologies to help form one by aol.
these acquisitions included adaptv in and convertro, precision demand, and vidible in .[ ] one by aol is further broken down into one by aol for publishers (formerly vidible, aol on network and be on for publishers) and one by aol for advertisers, each of which has several sub-platforms.[ ][ ] on september , , aol's parent company oath consolidated yahoo brightroll, one by aol and yahoo gemini to 'simplify' its adtech service by launching a single advertising proposition dubbed oath ad platforms.[ ]

membership

aol offers a range of integrated products and properties including communication tools, mobile apps and services, and subscription packages.

dial-up internet access – according to aol's quarterly earnings report of may , , .  million people still use aol's dial-up service.[ ]
aol mail – aol mail is aol's proprietary email client. it is fully integrated with aim and links to news headlines on aol content sites.
aol instant messenger (aim) – was aol's proprietary instant-messaging tool, released in . it lost market share to competitors in the instant messenger market such as google chat, facebook messenger, and skype.[ ] it also included a video-chat service, av by aim. on december , , aol discontinued aim.[ ]
aol plans – aol plans offers three online safety and assistance tools: id protection, data security and a general online technical assistance service.[ ]

aol desktop

developer(s): aol
initial release: december  , [ ]
stable release: . [ ] (windows); . (macos) / august ,
preview release: . . / january , [ ]
written in: c++
operating system: microsoft windows xp or later, mac os x . . or later
type: internet suite
license: proprietary
website: help.aol.com/articles/aol-desktop-downloading-and-installing

aol desktop is an internet suite produced by aol from [ ][ ] that integrates a web browser, a media player and an instant messenger client.[ ] version .x was based on aol openride[ ] and was an upgrade from it.[ ] the macos version is based on webkit. aol desktop version .x was different from previous aol browsers and aol desktop versions. its features are focused on web browsing as well as email. for instance, one does not have to sign into aol in order to use it as a regular browser. in addition, non-aol email accounts can be accessed through it. primary buttons include "mail", "im", and several shortcuts to various web pages. the first two require users to sign in, but the shortcuts to web pages can be used without authentication. aol desktop version .x was later marked as unsupported in favor of the aol desktop .x versions. version . was released, replacing the internet explorer components of the internet browser with cef[ ] (chromium embedded framework) to give users an improved web browsing experience closer to that of chrome. the next version of aol desktop, currently in beta, is a total rewrite but maintains a similar user interface to the previous . .x series of releases.[ ] in , a new paid version called aol desktop gold was released, available for $ . per month after a trial. it replaced the previous free version.[ ] after the shutdown of aim in , aol's original chat rooms continued to be accessible through aol desktop gold, and some rooms remained active during peak hours. that chat system was shut down on december , .[ ] in addition to aol desktop, the company also offered aol toolbar, a browser toolbar for several web browsers that provided quick access to aol services. the toolbar was available from until .
criticism

aol cds sent to a student dormitory in germany

in its earlier incarnation as a "walled garden" community and service provider, aol received criticism for its community policies, terms of service, and customer service. prior to , aol was known for its direct mailing of cd-roms and . -inch floppy disks containing its software. the disks were distributed in large numbers; at one point, half of the cds manufactured worldwide had aol logos on them.[ ] the marketing tactic was criticized for its environmental cost, and aol cds were recognized as pc world's most annoying tech product.[ ][ ]

community leaders

aol used a system of volunteers to moderate its chat rooms, forums and user communities. the program dated back to aol's early days, when it charged by the hour for access and one of its highest billing services was chat. aol provided free access to community leaders in exchange for moderating the chat rooms, and this effectively made chat very cheap to operate, and more lucrative than aol's other services of the era. there were , community leaders in .[ ] all community leaders received hours of training and underwent a probationary period. while most community leaders moderated chat rooms, some ran aol communities and controlled their layout and design, with as much as % of aol's content being created or overseen by community managers until .[ ] by , isps were beginning to charge flat rates for unlimited access, which they could do at a profit because they only provided internet access. even though aol would lose money with such a pricing scheme, it was forced by market conditions to offer unlimited access in october .
in order to return to profitability, aol rapidly shifted its focus from content creation to advertising, resulting in less of a need to carefully moderate every forum and chat room to keep users willing to pay by the minute to remain connected.[ ] after unlimited access, aol considered scrapping the program entirely, but continued it with a reduced number of community leaders, with scaled-back roles in creating content.[ ] although community leaders continued to receive free access, after they were motivated more by the prestige of the position and the access to moderator tools and restricted areas within aol.[ ][ ] by , there were over , volunteers in the program.[ ] in may , two former volunteers filed a class-action lawsuit alleging aol violated the fair labor standards act by treating volunteers like employees. volunteers had to apply for the position, commit to working for at least three to four hours a week, fill out timecards and sign a non-disclosure agreement.[ ] on july , aol ended its youth corps, which consisted of underage community leaders.[ ] at this time, the united states department of labor began an investigation into the program, but it came to no conclusions about aol's practices.[ ] aol ended its community leader program on june , . the class action lawsuit dragged on for years, even after aol ended the program and aol declined as a major internet company. in , aol finally agreed to settle the lawsuit for $  million.[ ] the community leader program was found to be an example of co-production in an article in the international journal of cultural studies.[ ]

billing disputes

aol has faced a number of lawsuits over claims that it has been slow to stop billing customers after their accounts have been canceled, either by the company or the user. in addition, aol changed its method of calculating used minutes in response to a class action lawsuit.
previously, aol would add seconds to the time a user was connected to the service and round up to the next whole minute (thus, a person who used the service for minutes and seconds would be charged for minutes). aol claimed this was to account for sign on/sign off time, but because this practice was not made known to its customers, the plaintiffs won (some also pointed out that signing on and off did not always take seconds, especially when connecting via another isp). aol disclosed its connection-time calculation methods to all of its customers and credited them with extra free hours. in addition, the aol software would notify the user of exactly how long they were connected and how many minutes they were being charged. aol was sued by the ohio attorney general in october for improper billing practices. the case was settled on june , . aol agreed to resolve any consumer complaints filed with the ohio ag's office. in december , aol agreed to provide restitution to florida consumers to settle the case filed against them by the florida attorney general.[ ]

account cancellation

many customers complained that aol personnel ignored their demands to cancel service and stop billing. in response to approximately consumer complaints, the new york attorney general's office began an inquiry of aol's customer service policies.[citation needed] the investigation revealed that the company had an elaborate scheme for rewarding employees who purported to retain or "save" subscribers who had called to cancel their internet service. in many instances, such retention was done against subscribers' wishes, or without their consent. under the scheme, customer service personnel received bonuses worth tens of thousands of dollars if they could successfully dissuade or "save" half of the people who called to cancel service.[citation needed] for several years, aol had instituted minimum retention or "save" percentages, which consumer representatives were expected to meet.
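the connection-time billing described above (a fixed number of padding seconds added to each session, with the total then rounded up to the next whole minute) can be sketched as follows. the exact padding figure is elided in the text, so the 15-second value used here is purely an assumption for illustration:

```python
import math

def billed_minutes(connected_seconds: int, padding_seconds: int = 15) -> int:
    """Illustrative sketch of the billing scheme described above.

    A fixed padding (assumed here to be 15 seconds; the actual figure
    is not given in the text) is added to the raw connection time, and
    the total is rounded UP to the next whole minute.
    """
    return math.ceil((connected_seconds + padding_seconds) / 60)

# a session of 4 minutes 46 seconds (286 s) plus 15 s of padding is
# 301 s, which rounds up to 6 billed minutes rather than 5
print(billed_minutes(286))
```

the round-up step is why even a momentary sign-on was billed as a full minute, and why the undisclosed padding drew the class action described above.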
these bonuses, and the minimum "save" rates accompanying them, had the effect of employees not honoring cancellations, or otherwise making cancellation unduly difficult for consumers. on august , , america online agreed to pay $ .  million to the state of new york and reformed its customer service procedures. under the agreement, aol would no longer require its customer service representatives to meet a minimum quota for customer retention in order to receive a bonus.[ ] however, the agreement only covered people in the state of new york.[ ] on june , , vincent ferrari documented his account cancellation phone call in a blog post,[ ] stating he had switched to broadband years earlier. in the recorded phone call, the aol representative refused to cancel the account unless the -year-old ferrari explained why aol hours were still being recorded on it. ferrari insisted that aol software was not even installed on the computer. when ferrari demanded that the account be canceled regardless, the aol representative asked to speak with ferrari's father, for whom the account had been set up. the conversation was aired on cnbc. when cnbc reporters tried to have an account on aol cancelled, they were hung up on immediately and it ultimately took more than minutes to cancel the account.[ ] on july , , aol's entire retention manual was released on the internet.[ ] on august , , time warner announced that the company would be dissolving aol's retention centers due to its profits hinging on $  billion in cost cuts. the company estimated that it would lose more than six million subscribers over the following year.[ ]

direct marketing of disks

some promotional cd-roms distributed in canada. cd in original mailer

prior to , aol was infamous for the unsolicited mass direct mail of ⁄ " floppy disks and cd-roms containing their software.
they were the most frequent user of this marketing tactic, and received criticism for the environmental cost of the campaign.[ ] according to pc world, in the s "you couldn't open a magazine (pc world included) or your mailbox without an aol disk falling out of it".[ ] the mass distribution of these disks was seen as wasteful by the public and led to protest groups. one such was no more aol cds, a web-based effort by two it workers[ ] to collect one million disks with the intent to return the disks to aol.[ ] the website was started in august , and an estimated , cds were collected by august when the project was shut down.[ ]

software

in , aol was served with an $  billion lawsuit alleging that its aol . software caused significant difficulties for users attempting to use third-party internet service providers. the lawsuit sought damages of up to $ for each user that had downloaded the software cited at the time of the lawsuit. aol later agreed to a settlement of $  million, without admission of wrongdoing.[ ] the aol software then was given a feature called aol dialer, or aol connect on mac os x. this feature allowed users to connect to the isp without running the full interface, so that they could use only the applications they wished to use, especially if they did not favor the aol browser. aol . was once identified by stopbadware as being under investigation[ ] for installing additional software without disclosure, and modifying browser preferences, toolbars, and icons. however, as of the release of aol . vr (vista ready) on january , , it was no longer considered badware due to changes aol made in the software.[ ]

usenet newsgroups

when aol gave clients access to usenet in , they hid at least one newsgroup in standard list view: alt.aol-sucks. aol did list the newsgroup in the alternative description view, but changed the description to "flames and complaints about america online".
with aol clients swarming usenet newsgroups, the old, existing user base started to develop a strong distaste for both aol and its clients, referring to the new state of affairs as eternal september.[ ] aol discontinued access to usenet on june , .[ ] no official details were provided as to the cause of decommissioning usenet access, except providing users the suggestion to access usenet services from a third party, google groups. aol then provided community-based message boards in lieu of usenet.

terms of service (tos)

aol has a detailed set of guidelines and expectations for users on their service, known as the terms of service (tos, also known as conditions of service, or cos in the uk). it is separated into three different sections: member agreement, community guidelines and privacy policy.[ ][ ] all three agreements are presented to users at time of registration and digital acceptance is achieved when they access the aol service. during the period when volunteer chat room hosts and board monitors were used, chat room hosts were given a brief online training session and test on terms of service violations. there have been many complaints over rules that govern an aol user's conduct. some users disagree with the tos, arguing that the guidelines are too strict to follow and that the tos may change without users being made aware. a considerable cause for this was likely the alleged censorship of user-generated content during the earlier years of growth for aol.[ ][ ][ ][ ]

certified email

in early , aol stated its intention to implement a certified email system called goodmail, which would allow companies to send email to users with whom they have pre-existing business relationships, with a visual indication that the email is from a trusted source and without the risk that the email messages might be blocked or stripped by spam filters.
this decision drew fire from moveon, which characterized the program as an "email tax", and the electronic frontier foundation (eff), which characterized it as a shakedown of non-profits.[ ] a website called dearaol.com[ ] was launched, with an online petition and a blog that garnered hundreds of signatures from people and organizations expressing their opposition to aol's use of goodmail. esther dyson defended the move in an editorial in the new york times, saying "i hope goodmail succeeds, and that it has lots of competition. i also think it and its competitors will eventually transform into services that more directly serve the interests of mail recipients. instead of the fees going to goodmail and aol, they will also be shared with the individual recipients."[ ] tim lee of the technology liberation front[ ] posted an article that questioned the electronic frontier foundation's adopting a confrontational posture when dealing with private companies. lee's article cited a series of discussions[ ] on declan mccullagh's politechbot mailing list on this subject between the eff's danny o'brien and antispammer suresh ramasubramanian, who has also compared[ ] the eff's tactics in opposing goodmail to tactics used by republican political strategist karl rove. spamassassin developer justin mason posted some criticism of the eff's and moveon's "going overboard" in their opposition to the scheme. the dearaol.com campaign lost momentum and disappeared, with the last post to the now-defunct dearaol.com blog, "aol starts the shakedown", being made on may , . comcast, which also used the service, announced on its website that goodmail had ceased operations and as of february , , it no longer used the service.[ ]

search data

main article: aol search data scandal

on august , , aol released a compressed text file on one of its websites containing  million search keywords for over , users over a -month period between march , and may , intended for research purposes.
aol pulled the file from public access by august , but not before its wide distribution on the internet by others. derivative research, titled a picture of search,[ ] was published by authors pass, chowdhury and torgeson for the first international conference on scalable information systems.[ ] the data were used by websites such as aolstalker[ ] for entertainment purposes, where users of aolstalker are encouraged to judge aol clients based on the humorousness of personal details revealed by search behavior.

user list exposure

in , jason smathers, an aol employee, was convicted of stealing america online's  million screen names and selling them to a known spammer. smathers pled guilty to conspiracy charges in [ ][ ] and to violations of the us can-spam act of .[ ] he was sentenced in august to months in prison; the sentencing judge also recommended smathers be forced to pay $ , in restitution, triple the $ , that he sold the addresses for.[ ]

aol's computer checkup "scareware"

on february , , a class action lawsuit was filed against support.com, inc. and partner aol, inc. the lawsuit alleged that support.com and aol's computer checkup "scareware" (which uses software developed by support.com) misrepresented that their software programs would identify and resolve a host of technical problems with computers, and offered to perform a free "scan," which often found problems with users' computers. the companies then offered to sell software, for which aol allegedly charged $ . a month and support.com $ , to remedy those problems.[ ] both aol, inc. and support.com, inc. settled on may , , for $ .  million. this included $ .
to each valid class member and $ , each to consumer watchdog and the electronic frontier foundation.[ ] judge jacqueline scott corley wrote: "distributing a portion of the [funds] to consumer watchdog will meet the interests of the silent class members because the organization will use the funds to help protect consumers across the nation from being subject to the types of fraudulent and misleading conduct that is alleged here," and "eff's mission includes a strong consumer protection component, especially in regards to online protection."[ ] aol continues to market computer checkup.[ ]

nsa prism program

following media reports about prism, the nsa's massive electronic surveillance program, in june , several technology companies were identified as participants, including aol. according to leaks about the program, aol joined prism in .[ ]

hosting of user profiles changed, then discontinued

at one time, most aol users had an online "profile" hosted by the aol hometown service. when aol hometown was discontinued, users had to create a new profile on bebo. this was an unsuccessful attempt to create a social network that would compete with facebook. when the value of bebo decreased to a tiny fraction of the $  million aol paid for it, users were forced to recreate their profiles yet again, on a new service called aol lifestream. aol took the decision to shut down lifestream on february , , and gave users one month's notice to save off photos and videos that had been uploaded to lifestream.[ ] following the shutdown, aol no longer provides any option for hosting user profiles. during the hometown/bebo/lifestream era, another user's profile could be displayed by clicking the "buddy info" button in the aol desktop software. after the shutdown of lifestream, clicking "buddy info" provides no information about the selected buddy: it simply causes the aim home page (www.aim.com) to be displayed.
DuraSpace (duraspace.org)

Help us preserve and provide access to the world's intellectual, cultural and scientific heritage.
Latest news: DSpace press release; Fedora migration paths and tools project update (July); VIVO announces Dragan Ivanović as technical lead.

Our global community

The community DuraSpace serves is alive with ideas and innovation aimed at collaboratively meeting the needs of the scholarly ecosystem that connects us all. Our global community contributes to the advancement of DSpace, Fedora and VIVO. At the same time, subscribers to DuraSpace services are helping to build best practices for the delivery of high-quality customer service. We are grateful for our community's continued support and engagement in the enterprise we share as we work together to provide enduring access to the world's digital heritage.

Open source projects

The Fedora, DSpace and VIVO community-supported projects are proud to provide users worldwide, across many countries, with freely available open source software. Fedora is a flexible repository platform with native linked data capabilities. DSpace is a turnkey institutional repository application. VIVO creates an integrated record of the scholarly work of your organization.

Our services

ArchivesDirect, DSpaceDirect, and DuraCloud services from DuraSpace provide access to institutional resources, preservation of treasured collections, and simplified data management tools. Our services are built on solid open source software platforms, can be set up quickly, and are competitively priced. Staff experts work directly with customers to provide personalized on-boarding and superb customer support. DuraCloud is a hosted service that lets you control where and how your content is preserved in the cloud. DSpaceDirect is a hosted turnkey repository solution. ArchivesDirect is a complete, hosted archiving solution.
This work is licensed under a Creative Commons Attribution 4.0 International License.

Bulletin board system

[Image: A welcome screen for the Free-Net bulletin board.]

A bulletin board system, or BBS (also called computer bulletin board service, CBBS), is a computer server running software that allows users to connect to the system using a terminal program. Once logged in, the user can perform functions such as uploading and downloading software and data, reading news and bulletins, and exchanging messages with other users through public message boards and sometimes via direct chatting. In the early 1980s, message networks such as FidoNet were developed to provide services such as NetMail, which is similar to Internet-based email. Many BBSes also offer online games in which users can compete with each other, and BBSes with multiple phone lines often provide chat rooms, allowing users to interact with one another. Bulletin board systems were in many ways a precursor to the modern form of the World Wide Web, social networks, and other aspects of the Internet.
Low-cost, high-performance asynchronous modems drove the use of online services and BBSes through the early 1990s. InfoWorld estimated that tens of thousands of BBSes were serving millions of users in the United States alone by the mid-1990s, a collective market much larger than major online services such as CompuServe. The introduction of inexpensive dial-up Internet service and the Mosaic web browser offered ease of use and global access that BBS and online systems did not provide, and led to a rapid crash in the market starting in the mid-1990s. Over the next year, many of the leading BBS software providers went bankrupt and tens of thousands of BBSes disappeared. Today, BBSing survives largely as a nostalgic hobby in most parts of the world, but it is still an extremely popular form of communication for Taiwanese youth (see PTT Bulletin Board System). Most surviving BBSes are accessible over Telnet and typically offer free email accounts, FTP services, IRC and all the protocols commonly used on the Internet. Some offer access through packet-switched networks or packet radio connections.

History

Precursors

A precursor to the public bulletin board system was Community Memory, started in August 1973 in Berkeley, California. Useful microcomputers did not exist at that time, and modems were both expensive and slow.
Community Memory therefore ran on a mainframe computer and was accessed through terminals located in several San Francisco Bay Area neighborhoods. The poor quality of the original modem connecting the terminals to the mainframe prompted Community Memory hardware person Lee Felsenstein to invent the Pennywhistle modem, whose design was highly influential in the mid-1970s.

Community Memory allowed the user to type messages into a computer terminal after inserting a coin, and offered a "pure" bulletin board experience with public messages only (no email or other features). It did offer the ability to tag messages with keywords, which the user could use in searches. The system acted primarily as a buy-and-sell system, with the tags taking the place of the more traditional classifications. But users found ways to express themselves outside these bounds, and the system spontaneously created stories, poetry and other forms of communication. The system was expensive to operate, and when its host machine became unavailable and a new one could not be found, the system closed in January 1975.

Similar functionality was available to most mainframe users, which might be considered a sort of ultra-local BBS when used in this fashion. Commercial systems expressly intended to offer these features to the public became available in the late 1970s and formed the online service market that lasted into the 1990s. One particularly influential example was PLATO, which had thousands of users by the late 1970s, many of whom used the messaging and chat room features of the system in the same way that would become common on BBSes.

The first BBSes

[Image: Ward Christensen holds an expansion card from the original CBBS S-100 host machine.]

Early modems were generally either expensive or very simple devices using acoustic couplers to handle telephone operation. The user would first pick up the phone, dial a number, then press the handset into rubber cups on the top of the modem.
Disconnecting at the end of a call required the user to pick up the handset and return it to the phone. Examples of direct-connecting modems did exist, and these often allowed the host computer to send commands to answer or hang up calls, but these were very expensive devices used by large banks and similar companies.

With the introduction of microcomputers with expansion slots, like the S-100 bus machines and the Apple II, it became possible for the modem to communicate instructions and data on separate lines. These machines typically supported only asynchronous communications, and synchronous modems were much more expensive than asynchronous modems. A number of modems of this sort were available by the late 1970s. This made the BBS possible for the first time, as it allowed software on the computer to pick up an incoming call, communicate with the user, and then hang up the call when the user logged off.

The first public dial-up BBS was developed by Ward Christensen and Randy Suess. According to an early interview, when Chicago was snowed under during the Great Blizzard of 1978, the two began preliminary work on the Computerized Bulletin Board System, or CBBS. The system came into existence largely through a fortuitous combination of Christensen having a spare S-100 bus computer and an early Hayes internal modem, and Suess's insistence that the machine be placed at his house in Chicago, where it would be a local phone call for more users. Christensen patterned the system after the cork board his local computer club used to post information like "need a ride". CBBS officially went online on February 16, 1978. CBBS, which kept a count of callers, reportedly connected more than 250,000 callers before it was finally retired.

Smartmodem

[Image: The 300 baud Smartmodem led to an initial wave of early BBS systems.]

A key innovation required for the popularization of the BBS was the Smartmodem manufactured by Hayes Microcomputer Products.
Internal modems like the ones used by CBBS and similar early systems were usable, but generally expensive due to the manufacturer having to make a different modem for every computer platform they wanted to target. They were also limited to those computers with internal expansion, and could not be used with other useful platforms like video terminals. External modems were available for these platforms but required the phone to be dialed using a conventional handset. Internal modems could be software-controlled to perform both outbound and inbound calls, but external modems had only the data pins to communicate with the host system.

Hayes' solution to the problem was to use a small microcontroller to implement a system that examined the data flowing into the modem from the host computer, watching for certain command strings. This allowed commands to be sent to and from the modem using the same data pins as all the rest of the data, meaning it would work on any system that could support even the most basic modems. The Smartmodem could pick up the phone, dial numbers, and hang up again, all without any operator intervention.

The Smartmodem was not necessary for BBS use but made overall operation dramatically simpler. It also improved usability for the caller, as most terminal software allowed different phone numbers to be stored and dialed on command, allowing the user to easily connect to a series of systems.

The introduction of the Smartmodem led to the first real wave of BBS systems. Limited in both speed and storage capacity, these systems were normally dedicated solely to messaging, both private email and public forums. File transfers were extremely slow at these speeds, and file libraries were typically limited to text files containing lists of other BBS systems.
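The in-band command convention the Smartmodem introduced survives today as the Hayes "AT" command set. As a rough illustration of the idea described above, here is a minimal Python sketch of a parser that watches the host's data stream for command strings and reacts to them. This is not the Smartmodem's actual firmware; the command subset, class name, and state names are simplifying assumptions for the example.

```python
# Minimal sketch of a Hayes-style in-band command parser (illustrative only;
# not the real Smartmodem firmware). The host sends lines such as "ATDT5551234"
# or "ATH" over the same data path used for ordinary payload data.

class HayesLikeModem:
    def __init__(self):
        self.state = "idle"      # "idle" or "connected"
        self.dialed = None       # last number dialed

    def handle_line(self, line):
        """Interpret one line from the host; return the modem's response."""
        cmd = line.strip().upper()
        if not cmd.startswith("AT"):
            return "ERROR"              # not a command string
        body = cmd[2:]
        if body.startswith("DT") or body.startswith("DP"):
            self.dialed = body[2:]      # tone (DT) or pulse (DP) dialing
            self.state = "connected"
            return "CONNECT"
        if body in ("H", "H0"):
            self.state = "idle"         # hang up
            return "OK"
        if body == "A":
            self.state = "connected"    # answer an incoming call
            return "CONNECT"
        if body == "":                  # bare "AT": attention check
            return "OK"
        return "ERROR"

modem = HayesLikeModem()
print(modem.handle_line("ATDT5551234"))  # CONNECT
print(modem.dialed)                      # 5551234
print(modem.handle_line("ATH"))          # OK
```

Because the commands travel over the same pins as ordinary data, a scheme like this works with any host that can drive a basic serial port, which is exactly what made the Smartmodem portable across platforms.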
These systems attracted a particular type of user who used the BBS as a unique type of communications medium, and when these local systems were crowded from the market in the 1990s, their loss was lamented for many years.

Higher speeds, commercialization

Speed improved with the introduction of 1200 bit/s asynchronous modems in the early 1980s, giving way to 2400 bit/s fairly rapidly. The improved performance led to a substantial increase in BBS popularity. Most of the information was displayed using ordinary ASCII text or ANSI art, but a number of systems attempted character-based graphical user interfaces, which began to be practical at 2400 bit/s. There was a lengthy delay before 9600 bit/s models began to appear on the market, and 9600 bit/s was not even established as a strong standard before V.32bis at 14.4 kbit/s took over in the early 1990s.

This period also saw a rapid rise in capacity and a dramatic drop in the price of hard drives. By the late 1980s, many BBS systems had significant file libraries, and this gave rise to leeching: users calling BBSes solely for their files. These users would tie up the modem for some time, leaving less time for other users, who got busy signals. The resulting upheaval eliminated many of the pioneering message-centric systems.

This also gave rise to a new class of BBS systems dedicated solely to file uploads and downloads. These systems charged for access, typically a flat monthly fee, compared to the per-hour fees charged by Event Horizons BBS and most online services. Many third-party services were developed to support these systems, offering simple credit card merchant account gateways for the payment of monthly fees, and entire file libraries on compact disc that made initial setup very easy. Early 1990s editions of Boardwatch were filled with ads for single-click install solutions dedicated to these new sysops. While this gave the market a bad reputation, it also led to its greatest success.
During the early 1990s, there were a number of mid-sized software companies dedicated to BBS software, and the number of BBSes in service reached its peak. Towards the early 1990s, the BBS became so popular that it spawned three monthly magazines – Boardwatch, BBS Magazine, and, in Asia and Australia, Chips 'n Bits Magazine – which devoted extensive coverage to the software and technology innovations, the people behind them, and listings of US and worldwide BBSes.[ ] In addition, in the US, a major monthly magazine, Computer Shopper, carried a list of BBSes along with a brief abstract of each of their offerings.

GUIs
Through the late 1980s and early 1990s, there was considerable experimentation with ways to improve the BBS experience from its command-line interface roots. Almost every popular system improved matters somewhat by adding ANSI-based color menus to make reading easier, and most also allowed cursor commands to offer command-line recall and similar features. Another common feature was the use of autocomplete to make menu navigation simpler, a feature that would not reappear on the web until decades later. A number of systems also made forays into GUI-based interfaces, either using character graphics sent from the host, or using custom GUI-based terminal systems. The latter initially appeared, unsurprisingly, on the Macintosh platform, where TeleFinder and FirstClass became very popular. FirstClass offered a host of features that would be difficult or impossible under a terminal-based solution, including bi-directional information flow and non-blocking operation that allowed the user to exchange files in both directions while continuing to use the message system and chat, all in separate windows. On the Amiga, SkyPix featured a complete markup language. It used a standardized set of icons to indicate mouse-driven commands available online and to recognize different file types present on BBS storage media.
It was capable of transmitting data such as images and audio clips between users linked to the same BBS, or off-line if the BBS was part of the FidoNet network. On the PC, efforts were more oriented toward extensions of the original terminal concept, with the GUI being described in the information on the host. One example was the Remote Imaging Protocol, essentially a picture description system, which remained relatively obscure. Probably the ultimate development of this style of operation was the dynamic page implementation of the University of Southern California BBS (USCBBS) by Susan Biddlecomb, which predated the implementation of the HTML dynamic web page. A complete dynamic web page implementation was accomplished using TBBS with a TDBS add-on presenting a complete menu system individually customized for each user.

Rise of the Internet and decline of BBS
The demand for complex ANSI and ASCII screens and larger file transfers taxed available channel capacity, which in turn increased demand for faster modems. 14.4 kbit/s modems were standard for a number of years while various companies attempted to introduce non-standard systems with higher performance – normally about 16.8 kbit/s. Another delay followed due to a long V.34 standards process before 28.8 kbit/s was released, only to be quickly replaced by 33.6 kbit/s, and then 56 kbit/s. These increasing speeds had the side effect of dramatically reducing the noticeable effects of channel efficiency. When modems were slow, considerable effort was put into developing the most efficient protocols and display systems possible. Running a general-purpose protocol like TCP/IP over a 1200 bit/s modem was a painful experience. With 56 kbit/s modems, however, the overhead was so greatly reduced as to be unnoticeable. Dial-up Internet service became widely available in the mid-1990s, and soon after became a must-have option for any general-use operating system.
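The channel-efficiency point can be made concrete with a little arithmetic. The sketch below uses illustrative figures (not from the article): it compares how long a 4,000-byte message takes at 1200 bit/s versus 56 kbit/s, with and without an assumed 10% protocol overhead. At the slow rate the overhead costs seconds a caller can feel; at the fast rate it costs a fraction of a second.

```python
# Rough transfer-time arithmetic for a dial-up link.
# Assumes 10 bits on the wire per byte (8 data bits + start/stop framing),
# which was typical for async modems; the 10% overhead figure is illustrative.

BITS_PER_BYTE_ON_WIRE = 10

def transfer_seconds(payload_bytes: int, bit_rate: float, overhead: float = 0.0) -> float:
    """Seconds to move payload_bytes at bit_rate, inflated by fractional overhead."""
    wire_bits = payload_bytes * BITS_PER_BYTE_ON_WIRE * (1.0 + overhead)
    return wire_bits / bit_rate

msg = 4000  # a screenful-plus of text, in bytes
for rate in (1200, 56000):
    plain = transfer_seconds(msg, rate)
    padded = transfer_seconds(msg, rate, overhead=0.10)
    print(f"{rate:>6} bit/s: {plain:6.2f}s raw, {padded:6.2f}s with 10% overhead "
          f"(+{padded - plain:.2f}s)")
```

At 1200 bit/s the extra 10% adds more than three seconds per message; at 56 kbit/s it adds well under a tenth of a second, which is why protocol efficiency stopped mattering as speeds rose.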
These developments together resulted in the sudden obsolescence of bulletin board technology and the collapse of its supporting market. Technically, Internet service offered an enormous advantage over BBS systems: a single connection to the user's Internet service provider allowed them to contact services around the world. In comparison, BBS systems relied on a direct point-to-point connection, so even dialing multiple local systems required multiple phone calls. Moreover, Internet protocols allowed that same single connection to be used to contact multiple services at the same time – for example, downloading files from an FTP library while checking the weather on a local news website. In comparison, a connection to a BBS allowed access only to the information on that system.

Estimating numbers
According to the FidoNet nodelist, BBSes reached their peak usage around 1996, which was the same period that the World Wide Web and AOL became mainstream. BBSes rapidly declined in popularity thereafter, and were replaced by systems using the Internet for connectivity. Some of the larger commercial BBSes, such as MaxMegabyte and ExecPC BBS, evolved into Internet service providers. The website textfiles.com serves as an archive documenting the history of the BBS. Its historical BBS list contains over 105,000 BBSes that existed over a span of 20 years in North America alone.[ ] The owner of textfiles.com, Jason Scott, also produced BBS: The Documentary, a DVD film that chronicles the history of the BBS and features interviews with well-known people (mostly from the United States) from the heyday of the BBS era. In the 2000s, most traditional BBS systems migrated to the Internet using Telnet or SSH protocols. Only a small number are still thought to be active, and fewer still of the traditional "dial-up" (modem) variety.
Software and hardware
An Amiga running a two-line BBS.
Unlike modern websites and online services, which are typically hosted by third-party companies in commercial data centers, BBS computers (especially for smaller boards) were typically operated from the system operator's home. As such, access could be unreliable, and in many cases only one user could be on the system at a time. Only larger BBSes with multiple phone lines using specialized hardware, multitasking software, or a LAN connecting multiple computers could host multiple simultaneous users. The first BBSes used homebrew software,[b] quite often written or customized by the system operators themselves, running on early S-100 bus microcomputer systems such as the Altair 8800, IMSAI 8080 and Cromemco under the CP/M operating system. Soon after, BBS software was being written for all of the major home computer systems of the late 1970s era – the Apple II, Atari 8-bit family, Commodore and TRS-80 being some of the most popular. In 1981, IBM introduced the first DOS-based IBM PC, and due to the overwhelming popularity of PCs and their clones, DOS soon became the operating system on which the majority of BBS programs were run. RBBS-PC, ported over from the CP/M world, and Fido BBS, developed by Tom Jennings (who later founded FidoNet), were the first notable DOS BBS programs. Many successful commercial BBS programs were developed for DOS, such as PCBoard BBS, RemoteAccess BBS, and Wildcat! BBS. Some popular freeware BBS programs for DOS included Telegard BBS and Renegade BBS, both of which had early origins in leaked WWIV BBS source code. Several dozen other BBS programs were developed over the DOS era; many were released as shareware, while some were released as freeware, including Iniquity. BBS systems on other platforms remained popular, especially home computers, largely because they catered to the audience of users running those machines.
The ubiquitous Commodore 64 (introduced in 1982) was a common platform in the 1980s. Popular commercial BBS programs for it were Blue Board, Ivory BBS, Color 64 and CNet 64. In the early 1990s, a small number of BBSes were also running on the Commodore Amiga. Popular BBS software for the Amiga included ABBS, AmiExpress, C-Net, StormforceBBS, Infinity and Tempest. There was also a small faction of devoted Atari BBSes that used the early Atari 8-bit machines, then the XL line, and eventually the ST. The earlier machines generally lacked hard drive capabilities, which limited them primarily to messaging. MS-DOS continued to be the most popular operating system for BBS use up until the mid-1990s, and in the early years most multi-node BBSes either ran under a DOS-based multitasker such as DESQview or consisted of multiple computers connected via a LAN. In the late 1980s, a handful of BBS developers implemented multitasking communications routines inside their software, allowing multiple phone lines and users to connect to the same BBS computer. These included Galacticomm's MajorBBS (later WorldGroup), eSoft's The Bread Board System (TBBS), and Falken. Other popular BBSes were Maximus and Opus, with some associated applications such as BinkleyTerm being based on characters from Berkeley Breathed's comic strip Bloom County. Though most BBS software had been written in BASIC or Pascal (with some low-level routines written in assembly language), the C language was starting to gain popularity. By the mid-1990s, many of the DOS-based BBSes had begun switching to modern multitasking operating systems, such as OS/2, Windows 95, and Linux. One of the first graphics-based BBS applications was Excalibur BBS, whose low-bandwidth applications required a dedicated client for efficiency. This led to one of the earliest implementations of electronic commerce, with replication of partner stores around the globe. TCP/IP networking allowed most of the remaining BBSes to evolve and include Internet hosting capabilities.
Recent BBS software, such as Synchronet, Mystic BBS, EleBBS, DOC or Wildcat! BBS, provides access using the Telnet protocol rather than dialup, or by using legacy DOS-based BBS software with a FOSSIL-to-Telnet redirector such as NetFoss.

Presentation
Welcome screen of Neon# BBS (Tornado).
BBSes were generally text-based, rather than GUI-based, and early BBSes conversed using the simple ASCII character set. However, some home computer manufacturers extended the ASCII character set to take advantage of the advanced color and graphics capabilities of their systems. BBS software authors included these extended character sets in their software, and terminal program authors included the ability to display them when a compatible system was called. Atari's native character set was known as ATASCII, while most Commodore BBSes supported PETSCII. PETSCII was also supported by the nationwide online service Quantum Link.[c] The use of these custom character sets was generally incompatible between manufacturers. Unless a caller was using terminal emulation software written for, and running on, the same type of system as the BBS, the session would simply fall back to simple ASCII output. For example, a Commodore user calling an Atari BBS would use ASCII rather than the native character set of either. As time progressed, most terminal programs began using the ANSI standard, but could use their native character set if it was available. CoCoNet, a BBS system made by Coconut Computing, Inc., initially supported only a GUI (a text interface was added later) and worked in EGA/VGA graphics mode, which made it stand out from text-based BBS systems. CoCoNet's bitmap and vector graphics and support for multiple type fonts were inspired by the PLATO system, and its graphics capabilities were based on what was available in the Borland Graphics Interface library.
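The fall-back-to-ASCII behavior amounts to a translation step: when the caller's terminal does not match the board's native character set, bytes outside plain ASCII are mapped to safe substitutes. The sketch below is a toy illustration with a made-up mapping table, not a real PETSCII or ATASCII conversion chart.

```python
# Toy character-set fallback: a board that speaks a vendor character set
# internally downgrades its output to 7-bit ASCII for non-matching callers.
# The mapping table here is illustrative, not a real PETSCII chart.

FALLBACK_MAP = {
    0xC1: "A",   # vendor-specific "A"-like glyph
    0xAD: "+",   # line-drawing corner -> plus sign
    0xA0: " ",   # shifted space -> plain space
}

def downgrade_to_ascii(data: bytes) -> str:
    out = []
    for b in data:
        if b < 0x80:                          # plain ASCII passes through
            out.append(chr(b))
        else:                                 # vendor glyph: substitute or '?'
            out.append(FALLBACK_MAP.get(b, "?"))
    return "".join(out)

print(downgrade_to_ascii(b"HELLO \xc1\xad\xa0WORLD"))  # HELLO A+ WORLD
```

A real implementation would carry a full translation table per vendor set; the point is that the lowest common denominator – 7-bit ASCII – is always reachable, which is what kept mismatched systems mutually usable.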
A competing approach called Remote Imaging Protocol (RIP) emerged and was promoted by TeleGrafix in the early to mid-1990s, but it never became widespread. A teletext technology called NAPLPS was also considered, and although it became the underlying graphics technology behind the Prodigy service, it never gained popularity in the BBS market. There were several GUI-based BBSes on the Apple Macintosh platform, including TeleFinder and FirstClass, but these were mostly confined to the Mac market. In the UK, the BBC Micro-based OBBS software, available from Pace for use with their modems, optionally allowed for color and graphics using the teletext-based graphics mode available on that platform. Other systems used the Viewdata protocols made popular in the UK by British Telecom's Prestel service and the online magazine Micronet, which was busy giving away modems with its subscriptions. Over time, terminal manufacturers started to support ANSI X3.64 in addition to, or instead of, proprietary terminal control codes for features such as color and cursor positioning. The most popular form of online graphics was ANSI art, which combined the IBM extended ASCII character set's blocks and symbols with ANSI escape sequences to allow changing colors on demand, provide cursor control and screen formatting, and even play basic musical tones. During the late 1980s and early 1990s, most BBSes used ANSI to make elaborate welcome screens and colorized menus, and thus ANSI support was a sought-after feature in terminal client programs. The development of ANSI art became so popular that it spawned an entire BBS "artscene" subculture devoted to it.
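ANSI screens of this kind were built from standard escape sequences – the ESC character followed by bracketed parameters – for color and cursor positioning. The snippet below builds a tiny colorized "welcome screen" string using two of the most common sequences (SGR for color, CUP for cursor positioning); the escape codes shown are the standard ANSI X3.64 ones, though the screen layout itself is invented.

```python
# Build a minimal ANSI "welcome screen" the way BBS menus did:
# ESC[<row>;<col>H moves the cursor (CUP), ESC[<n>m sets color (SGR).

ESC = "\x1b"

def at(row: int, col: int) -> str:
    """Cursor position (CUP): ESC[row;colH."""
    return f"{ESC}[{row};{col}H"

def color(*params: int) -> str:
    """Select Graphic Rendition (SGR): ESC[p1;p2;...m."""
    return ESC + "[" + ";".join(str(p) for p in params) + "m"

RESET = color(0)  # SGR 0: reset all attributes

screen = (
    at(1, 10) + color(1, 33) + "* WELCOME TO THE BOARD *" + RESET +
    at(3, 1)  + color(36)    + "[M]essages  [F]iles  [G]oodbye" + RESET
)
print(repr(screen))  # repr() shows the escape bytes literally
```

On an ANSI-capable terminal, printing `screen` directly would render a bold yellow banner and a cyan menu line, exactly the kind of output terminal clients of the era competed to support.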
Example of a BBS ANSI login screen.
The Amiga Skyline BBS software was the first to feature a script markup language communication protocol called SkyPix, which was capable of giving the user a complete graphical interface, featuring rich graphics, changeable fonts, mouse-controlled actions, animations and sound.[ ] Today, most BBS software that is still actively supported, such as WorldGroup, Wildcat! BBS and Citadel/UX, is web-enabled, and the traditional text interface has been replaced (or operates concurrently) with a web-based user interface. For those more nostalgic for the true BBS experience, one can use NetSerial (Windows) or DOSBox (Windows/*nix) to redirect DOS COM port software to Telnet, allowing them to connect to Telnet BBSes using 1980s and 1990s era modem terminal emulation software, like Telix, Terminate, Qmodem and Procomm Plus. Modern 32-bit terminal emulators such as mTelnet and SyncTERM include native Telnet support.

Content and access
Since most early BBSes were run by computer hobbyists, they were typically technical in topic, with user communities revolving around hardware and software discussions. As the BBS phenomenon grew, so did the popularity of special interest boards. Bulletin board systems could be found for almost every hobby and interest. Popular interests included politics, religion, music, dating, and alternative lifestyles. Many system operators also adopted a theme in which they customized their entire BBS (welcome screens, prompts, menus, and so on) to reflect that theme. Common themes were based on fantasy, or were intended to give the user the illusion of being somewhere else, such as in a sanatorium, a wizard's castle, or on a pirate ship. In the early days, the file download library consisted of files that the system operators obtained themselves from other BBSes and friends. Many BBSes inspected every file uploaded to their public file download library to ensure that the material did not violate copyright law.
As time went on, shareware CD-ROMs were sold with up to thousands of files on each CD-ROM. Small BBSes copied each file individually to their hard drive. Some systems used a CD-ROM drive to make the files available. Advanced BBSes used multiple CD-ROM disc changer units that switched discs on demand for the caller(s). Large systems used all available DOS drive letters, with multi-disk changers housing tens of thousands of copyright-free shareware or freeware files available to all callers. These BBSes were generally more family-friendly, avoiding the seedier side of the BBS world. Access to these systems varied from single to multiple modem lines, with some requiring little or no confirmed registration. Some BBSes, called elite, warez or pirate boards, were exclusively used for distributing cracked software, phreaking materials, and other questionable or unlawful content. These BBSes often had multiple modems and phone lines, allowing several users to upload and download files at once. Most elite BBSes used some form of new-user verification, where new users would have to apply for membership and attempt to prove that they were not a law enforcement officer or a lamer. The largest elite boards accepted users by invitation only. Elite boards also spawned their own subculture and gave rise to the slang known today as leetspeak. Another common type of board was the support BBS run by a manufacturer of computer products or software. These boards were dedicated to supporting users of the company's products with question and answer forums, news and updates, and downloads. Most of them were not a free call. Today, these services have moved to the web. Some general-purpose bulletin board systems had special levels of access that were given to those who paid extra money, uploaded useful files, or knew the system operator personally. These specialty and pay BBSes usually had something unique to offer their users, such as large file libraries, warez, pornography, chat rooms or Internet access.
Pay BBSes such as The WELL and Echo NYC (now Internet forums rather than dial-up), ExecPC, PsudNetwork and MindVox were admired for their tight-knit communities and quality discussion forums. However, many free BBSes also maintained close-knit communities, and some even had annual or semi-annual events where users would travel great distances to meet face-to-face with their on-line friends. These events were especially popular with BBSes that offered chat rooms. Some of the BBSes that provided access to illegal content faced opposition. On July 12, 1985, in conjunction with a credit card fraud investigation, the Middlesex County, New Jersey Sheriff's Department raided and seized The Private Sector BBS, which was the official BBS of the hacker quarterly 2600 magazine at the time.[ ] The notorious Rusty n Edie's BBS, in Boardman, Ohio, was raided by the FBI for trading unlicensed software, and later sued by Playboy for copyright infringement. In Flint, Michigan, a man was charged with distributing child pornography through his BBS.[ ]

Networks
Most early BBSes operated as individual systems. Information contained on that BBS never left the system, and users would only interact with the information and user community on that BBS alone. However, as BBSes became more widespread, there evolved a desire to connect systems together to share messages and files with distant systems and users. The largest such network was FidoNet. As it was prohibitively expensive for the hobbyist system operator to have a dedicated connection to another system, FidoNet was developed as a store-and-forward network. Private email (netmail), public message boards (echomail) and eventually even file attachments on a FidoNet-capable BBS would be bundled into one or more archive files over a set time interval.
These archive files were then compressed with ARC or ZIP and forwarded to (or polled by) another nearby node or hub via a dialup XMODEM session. Messages would be relayed around various FidoNet hubs until they were eventually delivered to their destination. The hierarchy of FidoNet BBS nodes, hubs, and zones was maintained in a routing table called a nodelist. Some larger BBSes or regional FidoNet hubs would make several transfers per day, some even to multiple nodes or hubs, and as such, transfers usually occurred at night or in the early morning when toll rates were lowest. In Fido's heyday, sending a netmail message to a user on a distant FidoNet node, or participating in an echomail discussion, could take days, especially if any FidoNet nodes or hubs in the message's route made only one transfer call per day. FidoNet was platform-independent and would work with any BBS that was written to use it. BBSes that did not have integrated FidoNet capability could usually add it using an external FidoNet front-end mailer such as SEAdog, FrontDoor, BinkleyTerm, InterMail or D'Bridge, together with a mail processor such as FastEcho or Squish. The front-end mailer would conduct the periodic FidoNet transfers, while the mail processor would usually run just before and just after the mailer ran. This program would scan for and pack up new outgoing messages, and then unpack, sort and "toss" the incoming messages into a BBS user's local email box or into the BBS's local message bases reserved for echomail. As such, these mail processors were commonly called "scanner/tosser/packers". Many other BBS networks followed the example of FidoNet, using the same standards and the same software. These were called FidoNet Technology Networks (FTNs). They were usually smaller and targeted at selected audiences. Some networks used QWK doors, and others such as RelayNet (RIME) and WWIVnet used non-Fido software and standards.
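The store-and-forward scheme above can be sketched as a graph of nodes that each hold messages and, on every scheduled transfer, pass each one a single hop closer to its destination. This toy model (invented node names and routes, not the real nodelist format) shows why delivery took days when each hop made only one transfer call per day.

```python
# Toy store-and-forward network: each node holds messages and, once per
# simulated day, hands each message to the next hop from a routing table.
# Node names and routes are invented for illustration.

from collections import defaultdict

ROUTES = {  # (at_node, destination) -> next hop toward destination
    ("leaf_a", "leaf_b"): "hub_1",
    ("hub_1",  "leaf_b"): "hub_2",
    ("hub_2",  "leaf_b"): "leaf_b",
}

def simulate(origin: str, dest: str) -> int:
    """Return how many daily transfer cycles a message needs to arrive."""
    mailbags = defaultdict(list)
    mailbags[origin].append(dest)
    days = 0
    while dest not in mailbags[dest]:
        days += 1
        moved = defaultdict(list)
        for node, msgs in mailbags.items():
            for d in msgs:
                if node == d:
                    moved[node].append(d)               # already delivered
                else:
                    moved[ROUTES[(node, d)]].append(d)  # forward one hop
        mailbags = moved
    return days

print(simulate("leaf_a", "leaf_b"))  # 3: one day per hop, three hops
```

With three hops and one transfer per day, the message takes three days; real FidoNet routes through multiple zones and hubs could easily be that long, which matches the multi-day delivery times described above.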
Before commercial Internet access became common, these networks of BBSes provided regional and international e-mail and message bases. Some even provided gateways, such as UFGATE, by which members could send and receive e-mail to and from the Internet via UUCP, and many FidoNet discussion groups were shared via gateway to Usenet. Elaborate schemes allowed users to download binary files, search gopherspace, and interact with distant programs, all using plain-text e-mail. As the volume of FidoNet mail increased and newsgroups from the early days of the Internet became available, satellite data downstream services became viable for larger systems. The satellite service provided access to FidoNet and Usenet newsgroups in large volumes at a reasonable fee. By connecting a small dish and receiver, a constant downstream of thousands of FidoNet and Usenet newsgroups could be received. The local BBS only needed to upload new outgoing messages via the modem network back to the satellite service. This method drastically reduced phone data transfers while dramatically increasing the number of message forums. FidoNet is still in use today, though in a much smaller form, and many echomail groups are still shared with Usenet via FidoNet-to-Usenet gateways. Widespread abuse of Usenet with spam and pornography has led many of these FidoNet gateways to cease operation completely.

Shareware and freeware
Main article: Shareware
Much of the shareware movement was started via user distribution of software through BBSes. A notable example was Phil Katz's PKARC (and later PKZIP, using the same ".zip" algorithm that WinZip and other popular archivers now use); other concepts of software distribution, such as freeware, postcardware (like JPEGView) and donationware (like Red Ryder for the Macintosh), also first appeared on BBS sites.
Doom from id Software and nearly all Apogee Software games were distributed as shareware (Apogee is, in fact, credited with adding an order form to a shareware demo).[citation needed] The Internet has largely erased the distinction of shareware – most users now download software directly from the developer's website rather than receiving it from another BBS user "sharing" it. Today, shareware commonly means electronically distributed software from a small developer. Many commercial BBS software companies that continue to support their old BBS software products switched to the shareware model or made it entirely free. Some companies were able to make the move to the Internet and provide commercial products with BBS capabilities.

Features
A classic BBS had:
- a computer
- one or more modems
- one or more phone lines, with more lines allowing for more concurrent users
- a BBS software package
- a sysop (system operator)
- a user community

The BBS software usually provided:
- menu systems
- one or more message bases
- uploading and downloading of message packets in QWK format using XMODEM, YMODEM or ZMODEM
- file areas
- live viewing of all caller activity by the system operator
- voting (opinion booths)
- statistics on message posters and top uploaders/downloaders
- online games (usually single-player, or with only a single active player at a given time)
- a doorway to third-party online games
- usage auditing capabilities
- multi-user chat (only possible on multi-line BBSes)
- Internet email (more common on later Internet-connected BBSes)
- networked message boards
- a "yell for sysop" menu item on the caller's side that sounded an audible alarm to the system operator; if answered, the system operator could initiate a text-to-text chat with the caller

Most modern BBSes allow Telnet access over the Internet using a Telnet server and a virtual FOSSIL driver.
- primitive social networking features, such as leaving messages on a user's profile

See also
Internet portal – ANSI art – BBS: The Documentary – computer magazine – Free-Net – imageboard – Internet forum – Internet Relay Chat – KOM (BBS) – list of BBS software – list of bulletin board systems – Minitel – online magazine – PODSnet – shell account – terminal emulator – textboard – user-generated content – Usenet – warez

Notes
a. Technically they could have used an automatic calling unit, but that was not economically viable.[citation needed]
b. CBBS Chicago (which Ward Christensen programmed) ran to thousands of lines of assembler.
c. Quantum Link and parts of AppleLink went on to become America Online.

References
- Derfler, Frank J., Jr. "Dial Up Directory". Kilobaud Microcomputing magazine.
- "Thinking Chinese – Chinese BBS: The Social Activity That Never Grows Old". thinkingchinese.com.
- Crosby, Kip. "Convivial Cybernetic Devices: From Vacuum Tube Flip-Flops to the Singing Altair – An Interview with Lee Felsenstein (Part 1)" (PDF). The Analytical Engine. Computer History Association of California.
- Crosby, Kip. "Computers for Their Own Sake: From the Dompier Music to the Computer Faire – An Interview with Lee Felsenstein (Part 2)" (PDF). The Analytical Engine. Computer History Association of California.
- Christensen, Ward; Suess, Randy (November 1978). "Hobbyist Computerized Bulletin Board System" (PDF). Byte. Peterborough, NH: Byte Publications. "The computerized hobbyist bulletin board system ... was conceived, designed, built, programmed, tested, and installed in a 30-day period (January 16 to February 16, 1978) by the two of us."
- Collection of memories of writing and running the first BBS by Ward Christensen. bbsdocumentary.com.
- "File Sponges, the BBS Nightmare". Chips 'n Bits. Archived at the Wayback Machine.
- Chips 'n' Bits: The Northern Territory Computer Users' Newsletter. catalogue.nla.gov.au.
- "The TEXTFILES.COM BBS List". bbslist.textfiles.com.
- Scott, Lee. "BBSDocumentary, an Overview of BBS Programs". Jason Scott for Wired magazine.
- "This Day in Geek History". thegreatgeekmanual.com.
- Doran, Tim. "Man Says Kiddie Porno Made Computer Site Popular". The Flint Journal.

Sources
- Jones, Steve. Encyclopedia of New Media: An Essential Reference to Communication and Technology.
- Gross, Larry P.; Woods, James D. The Columbia Reader on Lesbians and Gay Men in Media, Society, and Politics.
- Rathbone, Tina. Modems for Dummies.
- Haas, Lou. Going On-Line with Your Micro. TAB Books.
- "Compute". Compute! Publications.
- Cane, Mike. The Computer Phone Book. New American Library.
- Christians in a .com World: Getting Connected Without Being Consumed.
- Pippen, Patrick. Beam Me Up Scottie.
External links
- Bulletin board system at Wikipedia's sister projects: definitions from Wiktionary, media from Wikimedia Commons, data from Wikidata
- The BBS Corner
- BBS: The Documentary (video collection)
- BBSmates community and resource site (archive)
- The Telnet BBS Guide
- textfiles.com – collection of historical BBS documents, files and history
- The BBS Organization
- "The Lost Civilization of Dial-Up Bulletin Board Systems" (The Atlantic)
- Bulletin board systems at Curlie
Willie Sutton

For the American football player, see Will Sutton.

William Francis Sutton Jr. (June 30, 1901 – November 2, 1980) was an American bank robber.[ ] During his forty-year robbery career he stole an estimated $2 million, and he eventually spent more than half of his adult life in prison and escaped three times. For his talent at executing robberies in disguises, he gained two nicknames, "Willie the Actor" and "Slick Willie". He was added to the FBI's Ten Most Wanted Fugitives list in 1950 and captured in 1952. Sutton is also known as the namesake of the so-called Sutton's law, although he denied originating it.[ ]

Early life
Sutton was born into an Irish-American family on June 30, 1901, in Brooklyn, New York, to William Francis Sutton Sr., a blacksmith, and Mary Ellen Bowles.[ ] His family lived on the corner of Gold and Nassau Streets in the neighborhood of Irishtown, Brooklyn, now called Vinegar Hill. According to his biography, Where the Money Was, when he was three the family relocated to High Street. His mother was, according to the biography, born in Ireland; however, according to the U.S. Census, she was born in Maryland and her parents were born in Ireland. By the time of the census she had given birth to five children, of whom three were still alive. According to the census, his maternal grandfather, James Bowles, and his two maternal uncles were also living with the family.
sutton was the fourth of five children, and did not attend school after the th grade.[ ][ ]

career in crime

sutton became a criminal at an early age, though throughout his professional criminal career he did not kill anyone. he was described by mafioso donald frankos as "a little bright-eyed guy, just ' ", always talking, chain-smoking ... cigarettes with bull durham tobacco. frankos stated also that sutton "dispensed mounds of legal advice" to any convict willing to listen, and inmates considered sutton a "wise old head" in the prison population. when incarcerated at "the tombs" (the manhattan house of detention) he did not have to worry about assault, because mafia friends protected him. in conversation with frankos he would sadly reminisce about the s and s, when he was most active in robbing banks, and would always tell fellow convicts that in his opinion, during the days of al capone and charles luciano, better known as lucky luciano, the criminals were the bloodiest. gangsters from the period, and many incarcerated organized crime figures, enjoyed sutton's companionship; he was witty and non-violent. frankos declared that sutton made legendary bank thieves jesse james and john dillinger seem like amateurs.[ ]

sutton was an accomplished bank robber. he usually carried a pistol or a thompson submachine gun. "you can't rob a bank on charm and personality," he once observed. in an interview in the reader's digest published shortly before his death, sutton was asked if the guns he used in his robberies were loaded. he responded that he never carried a loaded gun because somebody might get hurt. he stole from the rich and kept it, though public opinion later made him into a type of gentleman thief, like robin hood. he allegedly never robbed a bank when a woman screamed or a baby cried.[ ] sutton was captured and recommitted in june , charged with assault and robbery.
however, he failed to complete his -year sentence: on december , , he escaped using a smuggled gun and holding a prison guard hostage. with the guard as leverage, sutton acquired a -ft ( . m) ladder to scale the -ft ( m) wall of the prison grounds.[ ]

on february , , sutton attempted to rob the corn exchange bank and trust company in philadelphia, pennsylvania. he came in disguised as a postman, but an alert passerby foiled the crime and sutton escaped. on january , , he and two companions broke into the same bank through a skylight. the fbi record observes that sutton also conducted a broadway jewelry store robbery in broad daylight, impersonating a postal telegraph messenger. sutton's other disguises included a police officer, a messenger and a maintenance man. he usually arrived at banks or stores shortly before they opened for business.

sutton was apprehended on february , , and was sentenced to serve to years in the eastern state penitentiary in philadelphia, pennsylvania, for the machine-gun robbery of the corn exchange bank. on april , , sutton was one of the convicts who escaped the institution through a tunnel. the convicts broke through to the surface during daylight hours and were spotted immediately by a passing police patrol; forced to flee the scene quickly, all were soon apprehended.[ ] sutton was recaptured the same day by philadelphia police officer mark kehoe.

sentenced to life imprisonment as a fourth-time offender, sutton was transferred to the philadelphia county prison in the holmesburg section of philadelphia, pennsylvania. on february , , sutton and other prisoners, dressed as prison guards, carried two ladders across the prison yard to the wall after dark. when the prison's searchlights hit him, sutton yelled, "it's all right!"
no one stopped him.[ ]

on march , , sutton became the eleventh person listed on the fbi's ten most wanted fugitives list, which had been created only a week earlier, on march . in february , sutton was captured by police after being recognized on a subway and followed by arnold schuster, a -year-old brooklyn clothing salesman and amateur detective. schuster later appeared on television and described how he had assisted in sutton's apprehension. albert anastasia, mafia boss of the gambino crime family, disliked schuster as a "rat" and a "squealer". according to mafia renegade and first major government informant joe valachi, anastasia ordered the murder of schuster, who was shot dead outside his home on march , .

judge peter t. farrell presided over the trial in which sutton was convicted of the robbery of $ , (equal to $ , presently) from a manufacturers trust company bank in sunnyside, queens. he received a sentence of to years in attica state prison.[ ] in december , farrell ruled that sutton's good behavior, along with his deteriorating health, justified commuting his sentence to time served. at the hearing sutton responded, "thank you, your honor. god bless you," and wept as he was led out of the court building. a separate -years-to-life sentence, handed down in brooklyn, was likewise commuted.[ ]

after his release, sutton delivered lectures on prison reform and consulted with banks on theft-deterrent techniques. he made a television commercial for new britain bank and trust company in connecticut for their credit card with picture identification on it. his lines were, "they call it the 'face card.' now when i say i'm willie sutton, people believe me."[ ]

personal life and death

sutton married louise leudemann in . she divorced him while he was in jail. their daughter jeanie was born the next year. his second wife was olga kowalska, whom he married in .
his longest period of (legal) employment lasted months. a series of decisions by the united states supreme court during the s resulted in his release from attica state prison on christmas eve, . he was in ill health at the time, suffering from emphysema and in need of an operation on the arteries of his legs. sutton died in at the age of , having spent his last years with his sister in spring hill, florida.[ ] he frequented the spring hill restaurant, where he kept to himself. after sutton's death, his family arranged a quiet burial in the family plot in brooklyn.

"sutton's law"

a famous apocryphal story is that sutton was asked by reporter mitch ohnstad why he robbed banks. according to ohnstad, he replied, "because that's where the money is". the quote evolved into sutton's law, which is often quoted to medical students as an injunction to consider the most likely diagnosis first, rather than wasting time and money investigating every conceivable possibility. in his autobiography, sutton denied originating the pithy rejoinder:

the irony of using a bank robber's maxim as an instrument for teaching medicine is compounded, i will now confess, by the fact that i never said it. the credit belongs to some enterprising reporter who apparently felt a need to fill out his copy. i can't even remember where i first read it. it just seemed to appear one day, and then it was everywhere. if anybody had asked me, i'd have probably said it. that's what almost anybody would say ... it couldn't be more obvious.

however, he also said:

why did i rob banks? because i enjoyed it. i loved it. i was more alive when i was inside a bank, robbing it, than at any other time in my life. i enjoyed everything about it so much that one or two weeks later i'd be out looking for the next job.
but to me the money was the chips, that's all.[ ]

the redlands daily facts published the earliest documented example of sutton's law on march , in redlands, california.[ ] a corollary, the "willie sutton rule", used in management accounting, stipulates that activity-based costing (in which activities are prioritized by necessity, and budgeted accordingly) should be applied where the greatest costs occur, because that is where the greatest savings can be found.[ ]

references

– "willie sutton". federal bureau of investigation.
– sutton, w.; linn, e. where the money was: the memoirs of a bank robber. viking press.
– henstell, bruce. "sutton, willie, bank robber". american national biography (online ed.). new york: oxford university press. (subscription required)
– n.y.s. census.
– u.s. census.
– hoffman, william; headley, lake. contract killer: the explosive story of the mafia's most notorious hitman, donald "tony the greek" frankos. new york city: thunder's mouth press.
– walsh, anthony; jorgensen, cody. criminology: the essentials. sage publications.
– "sing sing's notorious escapes".
– linn, edward; sutton, william. where the money was: the memoirs of a bank robber (library of larceny). broadway books.
– pace, eric. "peter t. farrell; judge who presided at the sutton trial". the new york times.
– "a jail term lifted, sutton cries in joy". the new york times.
– "business: willie sutton, bankers' friend". time.
– block, lawrence. gangsters, swindlers, killers, and thieves: the lives and crimes of fifty american villains. oup usa.
– mikkelson, david. "willie sutton – 'that's where the money is'". snopes.
– kaplan, r.s.; cooper, r. cost and effect. boston, ma: harvard business school press.

further reading

– hoffman, william; headley, lake. contract killer: the explosive story of the mafia's most notorious hitman, donald "tony the greek" frankos. new york city: thunder's mouth press.
– moehringer, j.r. sutton. thorndike press. (biography)
– duffy, peter. "willie sutton, urbane scoundrel". new york times.

external links

– fbi website entry on william sutton
– "willie sutton is dead at", new york times obituary
gender equality – coact

open calls on gender equality

we are launching three open calls to foster cso-led citizen social science projects on gender equality. any non-profit organisation registered in the eligible countries can apply. if you are a cso working on gender equality, apply between july st and sept th, . before applying, we recommend that you read the applicant guide and review the application form. the call is also available in german, polish and czech.
about the open calls

coact is launching a call for proposals, inviting civil society initiatives to apply for our cascading grants of up to € , to conduct citizen social science research on the topic of gender equality. a maximum of four ( ) applicants will be selected across three ( ) different open calls. applications from a broad range of backgrounds are welcome, including feminist, lgbtq+, non-binary and critical masculinity perspectives.

– gender equality & sustainable cities and communities: csos in the berlin & brandenburg area
– gender equality & decent work and economic growth: csos in eastern europe
– gender equality & opportunities and risks of digitalization: international csos in the eu

we understand citizen social science as participatory research co-designed or directly driven by citizen groups that share a particular social concern. in coact projects, citizens act as co-researchers throughout the entire research process and are recognized as competent in-the-field experts and equal actors in all phases. citizen science in general, and our open calls in particular, are relevant to civic organisations which incorporate citizen engagement, community building or any other kind of collective action in their projects.

why should i apply

coact will provide funding for a research project ( months max), alongside dedicated activities, resources and tools to set up and run it. coact will provide a research mentoring program for your team: in collaborative workshops you will be supported to co-design and explore available tools, working together with the coact team to achieve your goals. coact will connect you to a community of people and initiatives tackling similar challenges and contributing to common aims. you will have the opportunity to discuss your project with the other grantees and, moreover, are invited to join coact's broader citizen social science network.
you should apply if you:
– are an ongoing citizen social science project looking for support, financial and otherwise, to grow and become sustainable;
– are a community interested in co-designing research to generate new knowledge about gender equality topics, broadly defined;
– are a not-for-profit organization focusing on community building, increasing the visibility of specific communities, or increasing civic participation, and are interested in exploring the use of citizen social science in your work.

how to apply

all the information presented here can be found in the guide for applicants. to apply for any of the three open calls, you should:
– be a not-for-profit organization, legally registered and operating in the european union
– select the open call relevant to your work and interests
– verify the specific eligibility criteria of that open call
– use the online form to submit your application

below you can find descriptions of each open call, including the specific eligibility criteria.

open call "sustainable cities and communities" (sdg ) addresses initiatives in the berlin and brandenburg region that aim to make cities inclusive, safe, resilient and sustainable for all their inhabitants. proposals should examine gender inequalities in affordable housing and/or urban planning, or promote social, economic and environmental sustainability through community building around the topic of gender equality in its broadest sense. eligibility criteria:
– the applicant should be a non-profit
– the candidate organisation should be registered in the berlin and brandenburg region.

open call "decent work and economic growth" (sdg ) addresses organisations in eastern europe. while women in the eu earn on average over % less per hour than men, this figure is even higher in eastern european countries. trans, intersex and non-binary people face even harder forms of discrimination regarding their work opportunities (ec ).
eligibility criteria:
– the applicant should be a non-profit
– the candidate organisation should be registered in one of the following countries: bulgaria, croatia, czech republic, estonia, hungary, latvia, lithuania, poland, romania, slovakia, slovenia.

open call "opportunities and risks of digitalization" is open to international civic organisations operating in the eu. it has been pointed out that digital spaces are gendered spaces which hinder, for example, the participation of young women, and that gendered norms are exacerbated online (eige ). proposals should address gender inequalities in online spaces arising, in part, from the gender dynamics of online platforms and from exposure to online harassment. eligibility criteria:
– the applicant should be a non-profit
– the candidate organisation should have operations in at least two european countries
– the candidate organisation should be registered in a member country of the european union.

timeline and deadlines

opening date: july st , : am gmt
closing date: september th , : am gmt

contact and faq

to contact us with questions and requests for clarification, send an email to opencalls@coactproject.eu

general

what is coact?

coact stands for co-designing citizen social science for collective action. it is a research project that explores the field of citizen social science, funded by the european union's horizon research and innovation programme (grant agreement no. ). coact proposes a new understanding of citizen social science (css) as participatory research co-designed and directly driven by citizen groups sharing a social concern. coact aims to provide and further develop methodologies supporting an understanding of research that can equally be led by academic researchers or citizen groups. in doing so, the project seeks to create an environment that provides a more equal "seat at the table" in processes that are oftentimes dominated by academic researchers.
coact is running three so-called research and innovation actions (r&i actions) in which citizens act as co-researchers, actively participating in all phases of the research, from the design to the interpretation of the results and their transformation into concrete actions. simultaneously, the coact open calls provide funding for citizen groups to lead their own participatory research, inviting academic researchers in.

who are the partners of the coact consortium?

coact is a transdisciplinary collaboration of research institutions and civil society organisations. the consortium brings together experts from different disciplines and fields of practice, such as participatory action research, computational social science, citizen science, research policy and development, digital transformation, social movement studies and participatory development communication. for further information, check this link: https://coactproject.eu/partners/.

what is citizen social science?

we understand citizen social science as participatory research co-designed or directly driven by citizen groups that share a particular social concern. in coact's r&i actions, citizens act as co-researchers throughout the entire research process and are recognized as competent in-the-field experts and equal actors in all phases. in the co-designed research, the citizens explore their lived experiences of the specific social concerns that motivate the research actions. in these r&i actions, we focus on the topics of mental health care, youth employment, environmental justice and gender equality. such an approach enables citizens to address pressing social issues from the bottom up, embedded in their social contexts. co-designed research provides the foundation for socially robust, evidence-based knowledge that strives for sustainable impact and social change.

why a coact open call?
coact's open calls seek to move beyond the project's own co-research activities and invite further actors to benefit from the project and its support mechanisms. we want to support civic organizations in making use of css methods and best practices in their own projects, directly from within civil society. civil society organizations deal directly with specific social topics of concern and are mostly organized around them. the coact open calls therefore seek to connect to expert work at the grassroots level and explore the opportunities and challenges of citizen-led research.

why a call on gender equality?

gender equality is a major ongoing societal topic that constantly affects our daily life. the united nations made "gender equality" the fifth sustainable development goal (sdg) and defines it as follows: ( ) end all forms of discrimination against all women and girls everywhere; ( ) eliminate all forms of violence against all women and girls in the public and private spheres, including trafficking and sexual and other types of exploitation. in coact, we take sdg as the starting point for this open call, but we want to consider gender equality in a wider and more inclusive manner, encompassing all perspectives and collectives, such as lgbtq+ communities. all perspectives related to any perceived gender identity, including non-binary ones, are thus welcome.

the covid- pandemic has clearly brought the deeply rooted traditional role patterns in our system to light again, particularly regarding care work. simultaneously, we are witnessing new manifestations and visibilities of the different feminist and lgbtq+ movements, and, at least in some locations, more attention to them from policy and society, with claims for equity appearing in various forms: huge demonstrations ( , people in barcelona on the th of march of ), the #metoo movement, and intersectional movements like black lives matter.
there is a vast variety of attempts to tackle the social construction and structural embeddedness of gender inequality, and many types of actors can play a relevant role. movements range from demands for a women's quota in decision-making positions, to human rights movements against discrimination and violence, up to more radical transformative approaches that criticize the basic exclusionary foundations of capitalism. from our perspective, citizen social science can represent a powerful grassroots approach to this global issue. in our understanding of citizen social science, citizens in vulnerable situations need to be at the centre of the research cycle, defining the focus on a specific social issue. this way, unprecedented scientific data related to gender inequalities could be collected, possibly leading to new evidence-informed reactions and proposals for new collective actions or policymaking. we therefore want to invite civil society organizations to apply for a short-term grant to investigate such issues with a citizen social science approach. funded projects will receive financial backing as well as mentoring support from coact partners, including academic researchers, global networks, ngos and others.

application process

how long is the application process open?

the application process is open for three ( ) months, from st july to th september ( am gmt).

whom are the open calls aimed at?

coact's open calls are inviting: (a) ongoing citizen social science projects looking for support, financial and otherwise, to grow and become sustainable; (b) communities interested in co-designing research to generate new knowledge about gender equality topics; (c) third-sector organizations that focus on community building, increasing the visibility of specific communities, or increasing civic participation, and that are interested in exploring the use of citizen social science in their work.
the funding is available to legal entities and consortia established in a country or territory eligible to receive horizon grants. only organizations legally registered and operating in an eu member state or associated country are eligible for funding from coact. for consortia of different organisations, all participants must be eligible. in this case, the participants also need to choose a research project lead, which will submit the application and engage with coact on behalf of the consortium. every entity is allowed to participate in one application, either on its own or as part of a consortium as described above.

coact has the following conflict of interest policy: immediate family, domestic and non-domestic partners, and those with financial ties to members of the coact consortium are prohibited from applying. if you have a prior relationship with anyone contributing to coact that you feel may constitute a conflict of interest, please email opencalls@coactproject.eu for clarification.

what is provided by coact?

coact will provide: (a) funding for a research project ( months max), alongside dedicated activities, resources and tools to set up and run the research project; (b) a research mentoring program for your team, with collaborative workshops in which you will be supported to co-design and explore available tools, working together with the coact team to achieve your goals; (c) connections to a community of people and initiatives tackling similar challenges and contributing to common aims. you will have the opportunity to discuss your project with the other grantees and, moreover, are invited to join coact's broader citizen social science network.

what are the topics of the open call?
in the coact open calls, gender equality is combined with three secondary thematic topics, each corresponding to a specific regional focus:
– "sustainable cities and communities": berlin and brandenburg area, germany
– "decent work and economic growth": eastern europe (bulgaria, croatia, czech republic, estonia, hungary, latvia, lithuania, poland, romania, slovakia, slovenia)
– "opportunities and risks of digitalization": across all eu countries.
consequently, applicants will have to show the relevance of their project both to gender equality and to the specific focus of the open call of their choice. it is possible for one organization to apply to several open calls with different projects.

how much funding is available?

the funding will be set at a maximum of € , for call and , for which only one applicant will be selected. call will select two proposals, which will share the € , grant, with a maximum of € , for a single organisation. the funding can be spent on salaries, equipment, consumables, travel, subcontracting to other entities, and indirect expenditure (calculated as % of the total direct costs), in accordance with horizon guidelines.

what is funded?

the budget you submit will have to include different cost categories, which are explained below. there is a general distinction between direct costs, subcontracting, and indirect costs (also known as overheads). indirect costs are calculated at % of the direct costs; no indirect costs can be charged on subcontracting. all costs, except for purchased equipment, can be booked to the project's budget covered by the grant. indirect costs, which are charged on top of the total direct costs, should be included. all costs should be stated inclusive of any irrecoverable vat.

direct costs

personnel: applicants can spend coact funds on the staff directly involved in the execution of the project.
equipment: equipment with a useful life in excess of the project duration can only be reimbursed to the extent the asset would be depreciated over the ten-month project period. therefore, the standard rate allowed under the contracted project will be % of the total cost of the asset for a ten-month period. indirect costs can be applied to the % of costs charged to the project. the cost of equipment rental for the project period can be charged at full cost, as long as the rental cost is not greater than the depreciation cost had the equipment been purchased.

consumables, other goods and services: applicants can spend on consumables and other goods and services (including travel) if they are directly relevant to the achievement of the project. there is no hard-and-fast rule about the distinction between equipment and other costs; small items such as moderation cards may be budgeted as 'other goods and services'.

subcontracting: applicants may subcontract some of their activities to other parties as long as those parties are also from a horizon-eligible country. no indirect costs (overhead) can be charged on subcontracting costs. note that we expect the applicant to carry out most of the tasks of the project; subcontracting cannot be used to carry out key tasks.

indirect costs: indirect costs fall within the € , or € , limit and cover items such as rent, administration, printing, photocopying, amenities, etc. these costs are eligible if they are declared on the basis of the flat rate of % of the eligible costs, excluding the costs of subcontracting and of in-kind contributions provided by third parties which are not used on the applicant's premises.

how can i apply?

submission is done online via a form available on the open call page of the coact website. applicants will be asked to describe their project proposal and to answer a series of questions about their eligibility to apply for funding and their ability to conduct the research project.
only complete applications submitted before the deadline will be considered for review. all information provided must be in english. before applying, we recommend that you read the applicant guide and review the application form.

how many projects will be funded?

a maximum of four ( ) projects will be funded.

what is the expected outcome of the research projects?

each research project is expected to provide a final report on the findings of the research at the end of the project. furthermore, results can also take the form of videos, manuals, handbooks, exhibitions, etc.

eligibility

who can apply?

the funding is available to legal entities and consortia established in a country or territory eligible to receive horizon grants. only organizations legally registered and operating in an eu member state or associated country are eligible for funding from coact. every entity is allowed to participate, either on its own or as part of a consortium.

can individuals apply?

no, individuals cannot apply.

can consortia apply?

yes. for consortia of different organisations, the lead organisation must be eligible. consortium members need to choose a project lead, which will submit the application and engage with coact on behalf of the consortium.

which costs are eligible?

the € , grant may be spent only on eligible costs. these are costs that meet the following criteria:
– incurred by the applicant in connection with or during the project;
– identifiable and verifiable in the applicant's accounts;
– compliant with national law;
– reasonable, justified, and in accordance with sound financial management (economy and efficiency);
– indicated in the budget you submitted with the short proposal.
coact will provide training and guidance to all funded projects on financial matters. there is a general distinction between direct costs, subcontracting, and indirect costs (also known as overheads).
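as a rough illustration of how these cost categories combine, the arithmetic can be sketched as below. this is a hypothetical example, not part of the call text: the 25% figure is an assumption (the standard horizon 2020 flat rate for indirect costs; the exact percentage here is left to the applicant guide), and the function name and amounts are invented for the illustration.

```python
# sketch of the grant budget arithmetic: indirect costs are a flat
# percentage of direct costs, and subcontracting attracts no overhead.
INDIRECT_RATE = 0.25  # assumed flat rate (standard horizon 2020 figure)

def total_budget(direct_costs: float, subcontracting: float) -> float:
    """total requested = direct costs + subcontracting + flat-rate indirect.

    indirect costs are charged on direct costs only, never on
    subcontracting costs.
    """
    indirect = direct_costs * INDIRECT_RATE
    return direct_costs + subcontracting + indirect

# example: 16,000 eur of direct costs plus 2,000 eur of subcontracted work
# gives 16,000 + 2,000 + 4,000 indirect = 22,000 eur requested in total.
print(total_budget(16_000, 2_000))  # 22000.0
```

note that moving a cost item from direct costs into subcontracting reduces the total that can be requested, since it no longer earns the flat-rate overhead.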
indirect costs are calculated at % of the direct costs; no indirect costs can be charged on subcontracting. all costs, except for purchased equipment, can be booked to the project's budget covered by the grant. indirect costs, which are charged on top of the total direct costs, should be included. all costs should be stated inclusive of any irrecoverable vat. submission can i submit more than one application? yes, you can submit one application for each call. does coact offer a pre-proposal check? unfortunately, we cannot offer a pre-proposal check. can i submit an application if i am already receiving funds from another public programme? yes, you can, but activities you plan to carry out with coact cannot receive double funding. synergies with other sources of funding, including other horizon projects, are encouraged if the grants are used for complementary, not overlapping, purposes. can i submit documents that are not in english? no, all submitted documents must be written in english. can i change my application once it has been submitted? once submitted, you cannot change your application, because we start the review immediately. can i apply for all three calls? yes, you can apply for all three calls with different proposals. responsible research and innovation who keeps the intellectual property rights? by default, you will be the sole owner of the results and outcomes of your project and all associated intellectual property. however, we expect all proposals to follow an open approach, sharing results and experiences widely with the community, as in any eu project. we will only accept proposals with a well-articulated plan that includes an open data approach. in addition, coact or the european commission may ask you to present your work as part of our public relations and networking events, to showcase and discuss the benefits and challenges of the coact approach. what happens with the data?
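a minimal sketch of the indirect-cost rule described above: a flat rate is charged on direct costs, and subcontracting is excluded from the overhead base. the 25% rate used here is an assumed placeholder, since the actual flat-rate percentage is elided in this text.

```python
# budget sketch; the 25% flat rate is an assumed placeholder,
# not a figure stated in this faq.
INDIRECT_RATE = 0.25

def total_budget(direct_costs: float, subcontracting: float = 0.0) -> float:
    """direct + subcontracting + flat-rate indirect costs.

    indirect costs are charged only on direct costs; no overhead
    may be charged on subcontracted work.
    """
    indirect = direct_costs * INDIRECT_RATE
    return direct_costs + subcontracting + indirect

# example: 20,000 direct and 4,000 subcontracted
# -> 20,000 + 4,000 + 5,000 = 29,000
print(total_budget(20_000, 4_000))
```

note how the 4,000 of subcontracting adds nothing to the indirect-cost line – only the 20,000 of direct costs attracts the flat rate.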
applicants will have to be clear in their proposal about the data that they expect to collect, generate and manage through the project. the processing of that data must follow the general data protection regulation (gdpr). as noted earlier, we will only accept proposals that are committed to making their data, methods and outputs publicly available for reuse, following an open science approach. for that, coact will provide technical, legal and operational support to successful applicants. in addition, coact will require citizen social science projects funded through the programme to collect, manage and share data with and for the coact team for co-evaluation purposes. the specifics of the data to be collected will be defined with each project team. do i need to share everything openly? yes, within the limits of data protection laws: as noted above, we will only accept proposals committed to making their data, methods and outputs publicly available for reuse, following an open science approach, and coact will provide technical, legal and operational support to do so. what about ethics? coact expects all successful applicants to the open calls to follow the responsible research and innovation guidelines set by the european commission for the horizon programme. projects will be expected to: – ensure the informed consent of any human participants, who should be provided clear and concise instructions on what is expected of them, what personal or sensitive data will be gathered about them, and how they can request the deletion of this data.
– ensure adequate data protection practices are in place to securely store any data gathered by or about volunteers, including, but not limited to: pseudonymisation, anonymisation and aggregation of data; encryption and use of secure servers. although selected grantees will be provided support to make sure that their project follows the responsible research and innovation guidelines, we will favour applicants who are able to identify and outline remediation strategies in response to the potential ethical challenges that their project may face. selection process how do we select applications? for all applications we follow the same selection procedure: ( ) eligibility check of the organisation and the proposal; ( ) review and shortlisting of the proposals by at least two reviewers according to the selection criteria on a -point scale; ( ) interviews with shortlisted applicants between th and th october ; ( ) decision on applicants' acceptance by the reviewers and the coact team by th october at the latest; ( ) negotiation, including due diligence checks and work plan and budget agreement; ( ) facilitation for accepted applicants during the ten-month project by the coact team. who is reviewing the applications? applications will be reviewed by a panel made up of coact team members and external experts, selected for the relevance of their work to each open call. what are the review criteria? idea – relevance to the call: does the proposal match the focus of the specific call it was submitted to? does it include activities compatible with citizen social science? project design: are the planned activities realistic given the proposed budget and time constraints? do the scope and complexity of the project match the profiles of the project's team? impact – link to a broader agenda: is the project linked to a broader agenda/programme carried out by the applicant? does the project have a chance to continue beyond the length of the programme?
is the idea reusable by other organizations working on similar topics? does the project present a participatory evaluation and impact assessment strategy? documentation & dissemination: which documentation strategies are planned? is there a commitment to publish the data and results? where would the results of the project be disseminated? ethics & safety – ethical considerations: are there any ethical considerations relevant to the project, and if so, how are they taken into account? is there a clear commitment to data protection and anonymization where relevant? how does the applicant plan to ensure that their activities are as inclusive as possible? health & safety: if the project requires physical meetings, what processes are put in place to protect the health of participants in the context of the covid pandemic? when will i know if i am shortlisted? you will be contacted by the coact team between st and th october . applicants who were not shortlisted will be informed at this stage as well. what will be the topics of the interview? the interview will be an opportunity for the applicant to expand on their written application and answer questions that it may have raised. will i receive feedback? yes, we will provide feedback to applicants to improve their projects. unfortunately, due to the high number of applications anticipated, we will have very limited capacity to reply to queries on unsuccessful applications. when will the interviews be? the interviews will take place between th and th october . shortlisted candidates will be offered several interview slot options in order to make it possible for everyone to attend. if none of the slots are possible for the applicant, we will be forced to reject the application. can i reschedule the interview? once you have agreed on a date, it is not possible to reschedule the interview.
please ensure that at least two people are available for the interview, so that at least one of them can attend. when will i know if my application was approved/rejected? all applicants will be informed about whether they have been accepted or rejected by th october at the latest. costs, payment and legal how much funding is available for each project? the funding will be set at a maximum of € , for call and , for which only one applicant will be selected. call will select two proposals, which will share the , € grant, with a maximum of , € for a single organisation. for what can i spend the funding? the funding can be spent on salaries, equipment, consumables, travel, subcontracting to other entities, and indirect expenditure (calculated as % of the total direct costs), in accordance with horizon guidelines. how are payments scheduled? you will receive two payments – one at the beginning of the project and a second one when the coact team has reviewed the interim project report after the first five months. is subcontracting allowed? yes, subcontracting is allowed. what do i have to do to receive the final payment? you will have to show that you are proceeding with the project in an interim project report to be delivered to the coact team after the first five months of the research. the coact project has received funding from the european union's horizon research and innovation programme under grant agreement number . this work is licensed under a creative commons attribution-noncommercial . international license.
floyd collins (from wikipedia, the free encyclopedia) [infobox: floyd collins. born july , , auburn, kentucky, united states. died c. february , (aged ), cave city, kentucky, united states. resting place: mammoth cave baptist church cemetery, mammoth cave, kentucky. occupation: cave owner, cave explorer. known for: cave exploration in central kentucky; being trapped in sand cave and dying before a rescue party could get to him.] william floyd collins (july , – c. february , ), better known as floyd collins, was an american cave explorer, principally in a region of central kentucky that houses hundreds of miles of interconnected caverns within mammoth cave national park, the longest cave system in the world. in the early th century, in an era known as the kentucky cave wars,[ ] commercial cave owners and explorers in kentucky entered into a bitter competition to exploit the bounty of caves for commercial profit from tourists, who paid to see the caves. in , collins had discovered and commercialized crystal cave on flint ridge (now part of the mammoth cave system but at the time an isolated cave). but the cave was remote and visitors were few. collins had an ambition to find another cave he could open to the public closer to the main roads, and entered into an agreement with a neighbor to open up sand cave, a small cave on the neighbor's property. on january , , while working to enlarge the small passage in sand cave, collins became trapped in a narrow crawlway feet ( m) below ground.
the rescue operation to save collins became a national newspaper sensation and one of the first major news stories to be reported using the new technology of broadcast radio. after four days, during which time rescuers were able to bring water and food to collins, a rock collapse in the cave closed the entrance passageway, stranding him in the cave, except for voice contact, for more than two weeks. collins died of thirst and hunger, compounded by hypothermia from exposure, after being isolated for days, just three days before a rescue shaft reached his position. collins' body was recovered two months later. although collins was an unknown figure in his lifetime, the fame he gained from his death led to him being memorialized on his tombstone as the "greatest cave explorer ever known".[ ] early life william floyd collins was born in auburn, logan county, kentucky, the son and third child of leonidas collins and martha jane burnett. collins had five brothers, james, floyd (a brother with the same name), andy lee, marshall everett and homer larkin, as well as two sisters, anna and nellie. crystal cave [image: sand cave, in the "cave country" of central kentucky] in the period of kentucky history known as the "cave wars", the floyd collins family owned their own cave called crystal cave, a tourist show cave in the karst region of mammoth cave.
crystal cave attracted a low number of tourists due to its remote location. collins hoped to find another entrance to mammoth cave, or possibly an unknown cave along the road to mammoth cave, and draw more visitors and greater profits. he made an agreement with three farmers who owned land closer to the main highway: if he found a cave, they would form a business partnership and share in the responsibilities of operating the tourist attraction. working alone, within three weeks he had explored and expanded a hole that would later be called "sand cave" by the news media. cave accident on january , , after several hours of work, floyd collins managed to squeeze through several narrow passageways; he claimed he had discovered a large grotto chamber, though this was never verified. because his lamp was dying, he had to leave quickly before losing all light in the chamber, but became trapped in a small passage on his way out. collins accidentally knocked over his lamp, putting out the light, and was caught by a rock from the cave ceiling, which pinned his left leg. the falling rock weighed  pounds; rescuers were unable to reach the rock to remove it. floyd collins was trapped  feet (  m) from the entrance. after he was found the next day by a friend, crackers were sent to him and an electric light was run down the passage to provide him lighting and some warmth. collins survived for more than a week while rescue efforts were organized. on february , the cave passage collapsed in two places. rescue leaders, led by henry st.
george tucker carmichael, determined the cave to be impassable and too dangerous, and began to dig a shaft to reach the chamber behind collins.[ ] the -foot (  m) shaft and subsequent lateral tunnel intersected the cave just above collins, but when he was finally reached on february , he was already dead from exposure. because he could not be reached from behind, the rescuers could not free his leg. they left his body in place and filled the shaft with debris. a doctor estimated he had died three or four days before he was reached, with february the most likely date. media attention newspaper reporter william burke "skeets" miller from the courier-journal in louisville reported on the rescue efforts from the scene. miller, of small stature, was able to remove a lot of earth from around collins. he also interviewed collins in the cave, receiving a pulitzer prize for his coverage[ ] and playing a part in collins' attempted rescue. miller's reports were distributed by telegraph and were printed by newspapers across the country and abroad, and the rescue attempts were followed by regular news bulletins on the new medium of broadcast radio (the first broadcast radio station, kdka, having been established in ). shortly after the media arrived, the publicity drew crowds of tourists to the site, at one point numbering in the tens of thousands. vendors set up stalls to sell food and souvenirs, creating a circus-like atmosphere. the sand cave rescue attempt grew to become the third-biggest media event between the world wars.
(the biggest media events of that time both involved charles lindbergh – the trans-atlantic flight and his son's kidnapping – and lindbergh actually had a minor role in the sand cave rescue, too, having been hired to fly photographic negatives from the scene for a newspaper.)[ ] since the nearest telegraph station was in cave city, some miles from the cave, two amateur radio operators with the callsigns brk and chg provided the link to pass messages to the authorities and the press.[ ] burial and exhibition of body [image: floyd collins' grave, with epitaph] with collins's body remaining in the cave, funeral services were held on the surface. homer collins was not pleased with sand cave as his brother's grave, and two months later, he and some friends reopened the shaft. they dug a new tunnel to the opposite side of the cave passage and recovered floyd collins's remains on april , .[ ] the following day, the body was buried in the burial ground of the collins family's farm,[ ] near crystal cave, now known as "floyd collins crystal cave". in , floyd collins' father, lee collins, sold the homestead and cave. the new owner placed collins' body in a glass-topped coffin and exhibited it in crystal cave for many years.[ ][ ] on the night of march – , , the body was stolen. the body was later recovered, having been found in a nearby field, but the injured left leg was missing.[ ][ ] after this desecration, the remains were kept in a secluded portion of crystal cave in a chained casket. in , crystal cave was purchased by mammoth cave national park and closed to the public.[ ] the collins family had objected to collins' body being displayed in the cave and, at their request, the national park service re-interred him at mammoth cave baptist church cemetery, mammoth cave, kentucky, in .[ ][ ] it took a team of men three days to remove the casket and tombstone from the cave. later exploration
the attention over the rescue attempt of collins created interest in the creation of mammoth cave national park, of which sand cave is now a part. fear and superstition kept cavers away from sand cave for decades. the national park service has sealed the entrance with a steel grate for public safety. expeditions into mammoth cave showed that portions of mammoth actually run under sand cave, but no connection has ever been discovered. in the s, cave explorer and author roger brucker and a small group entered sand cave to conduct research for a book about collins. the team surveyed sand cave and discovered an opening in the collapsed tunnel through which a smaller caver could crawl, showing that it would have been possible to feed and heat collins after february , . they proceeded as far as the passage where collins was trapped; it was choked with gravel and unsafe to excavate. in april , george crothers led an archaeological investigation that documented many artifacts in the cave. these were removed for future preservation. in popular culture the life and death of collins inspired the musical floyd collins by adam guettel and tina landau,[ ] as well as one film documentary, several books, a museum and many short songs. ace in the hole (alternative title: the big carnival) is a film by billy wilder based on the media circus surrounding the attempted rescue of a man stuck in a cave. the film depicts a fictional incident, but collins is mentioned by name in the dialogue.
he is mentioned in two novels by kentucky writers robert penn warren and james still: the cave and river of earth. in , actor billy bob thornton optioned the film rights to trapped! the story of floyd collins, and a screenplay was adapted by thornton's writing partner, tom epperson. however, thornton's option expired and the film rights were acquired by producer peter r. j. deyell in .[ ] fiddlin' john carson and vernon dalhart recorded "the death of floyd collins"[ ] in . kentucky-based rock band black stone cherry has a song titled "the ghost of floyd collins" on their album folklore and superstition. john prine and mac wiseman released a song titled "death of floyd collins", written by andrew jenkins, on their album standard songs for average people. floyd collins is mentioned in mark z. danielewski's postmodern novel house of leaves (p. of the pantheon books nd edition). the story of floyd collins is also included in the novel supernatural: the usual sacrifices by yvonne navarro.[ ] the poet clark coolidge's book own face (united artists) features a photo of floyd collins on the cover, and a number of poems in the book make reference to collins, caving, and related matters. sun & moon press published an edition of own face in . the story of floyd collins' death is mentioned on the television show the blacklist. see also moose river disaster, a mine cave-in covered extensively on radio in . references
^ cave wars - mammoth cave national park (u.s. national park service)
^ murray, robert & brucker, roger ( ). trapped. g.p. putnam's sons. isbn  .
^ "cave floor expands and entombs collins". journal and courier. february , . p.  . retrieved december , – via newspapers.com.
^ bukro, casey (march , ). "folk hero's burial ends generations of anguish". chicago tribune. p.  . retrieved december , – via newspapers.com.
^ desoto, clinton: meters & down - the story of amateur radio. american radio relay league. p.
isbn  - - - -.
^ "blackfriars goes underground for new musical". democrat and chronicle. may , . p.  . retrieved december , – via newspapers.com.
^ "floyd collins book acquired by producer peter r.j. deyell". broadway world. retrieved - - .
^ "the death of floyd collins (edison blue amberol: )".
^ navarro, yvonne ( ). supernatural: the usual sacrifices. titan books. p.  . isbn  - - - - .
brucker, r. and murray, r. trapped! the story of floyd collins, university press of kentucky, . isbn  - - - .
collins, homer. the life and death of floyd collins, as told to jack lehrberger, cave books, . isbn  - - - .
external links: wikimedia commons has media related to floyd collins. tragedy at sand cave (u.s. national park service). cave wars - mammoth cave national park (u.s.
national park service). the tragedy of history's smallest underground war. retrieved from "https://en.wikipedia.org/w/index.php?title=floyd_collins&oldid= ". this page was last edited on july , at :  (utc).
the hollow men (from wikipedia, the free encyclopedia) [infobox: the hollow men, by t. s. eliot; written in england; language: english; publisher: faber & faber; quote: "this is the way the world ends / this is the way the world ends / this is the way the world ends / not with a bang but a whimper."[ ]] "the hollow men" ( ) is a poem by the modernist writer t. s. eliot. like much of his work, its themes are overlapping and fragmentary, concerned with post–world war i europe under the treaty of versailles (which eliot despised: compare "gerontion"), hopelessness, religious conversion, redemption and, some critics argue, his failing marriage with vivienne haigh-wood eliot.[ ] it was published two years before eliot converted to anglicanism.[ ] divided into five parts, the poem is lines long. eliot's new york times obituary in identified the final four as "probably the most quoted lines of any th-century poet writing in english".[ ] theme and context eliot wrote that he produced the title "the hollow men" by combining the titles of the romance "the hollow land" by william morris with the poem "the broken men" by rudyard kipling,[ ] but it is possible that this is one of eliot's many constructed allusions. the title could also be theorized to originate more transparently from shakespeare's julius caesar or from the character kurtz in joseph conrad's heart of darkness, who is referred to as a "hollow sham" and "hollow at the core". the latter is more likely, since kurtz is mentioned specifically in one of the two epigraphs.
the two epigraphs to the poem, "mistah kurtz – he dead" and "a penny for the old guy", are allusions to conrad's character and to guy fawkes. fawkes attempted arson of the english houses of parliament in , and his straw-man effigy is burned each year in the united kingdom on guy fawkes night, the th of november.[ ] certain quotes from the poem, such as "[...] heads filled with straw [...]" and "[...] in our dry cellar [...]",[ ] seem to be direct references to the gunpowder plot.

the hollow men follows the otherworldly journey of the spiritually dead. these "hollow men" have the realization, humility, and acknowledgement of their guilt and their status as broken, lost souls. their shame is seen in lines like "[...] eyes i dare not meet in dreams [...]", in calling themselves "[...] sightless, useless [...]", and in the admission that "[...] [death is] the only hope of empty men [...]".[ ] the "hollow men" fail to transform their motions into actions, conception to creation, desire to fulfillment. this awareness of the split between thought and action, coupled with their awareness of "death's various kingdoms" and an acute diagnosis of their hollowness, makes it hard for them to go forward and break through their spiritual sterility.[ ]

eliot invokes imagery from dante's inferno, specifically the third and fourth cantos, which describe limbo, the first circle of hell – showing man in his inability to cross into hell itself or even to beg redemption, unable to speak with god. he states that the hollow men "[...] grope together and avoid speech, gathered on this beach of the tumid river [...]",[ ] and dante states that at the gates of hell, people who did neither good nor evil in their lives have to gather quietly by a river where charon cannot ferry them across.[ ] this is the punishment for those in limbo according to dante, people who "[...]
lived without infamy or praise [...]".[ ] they did not put any good or evil into the world, making them out to be 'hollow' people who can only watch others move on into the afterlife. eliot reprises this moment in his poem as the hollow men watch "[...] those who have crossed with direct eyes, to death's other kingdom [...]".[ ] eliot describes how they wish to be seen "[...] not as lost/violent souls, but only/as the hollow men/the stuffed men [...]".[ ] as the poem enters section five, there is a complete breakdown of language. the lord's prayer and what appears to be a lyric change of "pop goes the weasel" are invoked, and this devolution of style ends with the final stanza, perhaps the most quoted of eliot's poetry:

this is the way the world ends
this is the way the world ends
this is the way the world ends
not with a bang but a whimper.[ ]

when asked in if he would write these lines again, eliot said he would not. according to henry hewes: "one reason is that while the association of the h-bomb is irrelevant to it, it would today come to everyone's mind. another is that he is not sure the world will end with either. people whose houses were bombed have told him they don't remember hearing anything."[ ]

publication information

the poem was first published in the form now known on november , in eliot's poems: – .[ ] eliot was known to collect poems and fragments of poems to produce new works. this is clear to see in his poems the hollow men and "ash-wednesday", where he incorporated previously published poems as sections of a larger work. in the case of the hollow men, four of the five sections of the poem were previously published: "poème", published in the winter edition of commerce (with a french translation), became part i of the hollow men.[ ] doris's dream songs in the november issue of chapbook had the three poems: "eyes that last i saw in tears", "the wind sprang up at four o'clock", and "this is the dead land."
the third poem became part iii of the hollow men.[ ] three eliot poems appeared in the january issue of his criterion magazine: "eyes i dare not meet in dreams", "eyes that i last saw in tears", and "the eyes are not here". the first poem became part ii of the hollow men and the third became part iv.[ ] additionally, the march issue of the dial published the hollow men, i-iii, which was finally transformed into the hollow men parts i, ii, and iv in poems: – .[ ]

influence in culture

the hollow men has had a profound effect on the anglo-american cultural lexicon—and by a relatively recent extension, world culture—since it was published in . one source states that the last four lines of the poem are "probably the most quoted lines of any th-century poet writing in english."[ ][ ] the sheer variety of references moves some of the questions concerning the poem's significance outside the traditional domain of literary criticism and into the much broader category of cultural studies. examples of such influences include:

film

eliot's poem was a strong influence on francis ford coppola and the movie apocalypse now ( ), in which antagonist colonel kurtz (played by marlon brando) is depicted reading parts of the poem aloud to his followers. furthermore, the complete dossier dvd release of the film includes a -minute special feature of kurtz reciting the poem in its entirety.
the poem's epigraph, "mistah kurtz – he dead", is a quotation from conrad's heart of darkness ( ), upon which the film is loosely based.

the final stanza is printed one line at a time at the beginning of the television production of stephen king's the stand ( ). the poem is also referenced in part by the character who feels responsible for the deadly "captain trips" virus being unleashed. (note: the poem referenced there is actually "the second coming" by w. b. yeats: "and what rough beast ...")

the trailer for the film southland tales ( ), directed by richard kelly, plays on the poem, stating: "this is the way the world ends, not with a whimper but with a bang." the film also quotes this inverted version of the line a number of times, mostly in voice-overs.[ ]

beverly weston discusses the line "life is very long" at the beginning of august: osage county.

in george cukor's classic remake of a star is born ( ), jack carson's film-studio pr man matt libby refers to the death of james mason's character norman maine dismissively, quoting: "this is the way the world ends: not with a bang - with a whimper."

in indiana jones and the kingdom of the crystal skull, the character harold oxley (john hurt) paraphrases a line from eliot's poem "eyes that i last saw in tears" when he says, "through eyes that last i saw in tears, here in death's dream kingdom, the golden vision reappears." this poem was part of a series of poems published by eliot in and , which referenced either "death's dream kingdom," "death's other kingdom," or "death's twilight kingdom," several of which would later be published as parts i-iv of "the hollow men." "eyes that i last saw in tears," though ultimately not included by eliot in his published version of "the hollow men," may nevertheless be read alongside the published poem, as a sort of apocryphal text, to provide context regarding the development of the composition of "the hollow men."
furthermore, the poem fits neatly within the series of poems which were ultimately collected and published as "the hollow men," due to its imagery of "eyes" being encountered by the author in relation to "death's dream kingdom" and "death's other kingdom," as in parts i, ii, and iv of "the hollow men."

sator (kenneth branagh) says that 'the world will end not with a bang, but a whimper' in reference to his dead man's switch device in tenet.

literature

stephen king's the dark tower series contains multiple references to "the hollow men" (as well as the waste land, most prominently the dark tower iii: the waste lands ( )).[ ] king also makes reference to "the hollow men" in pet sematary ( ) with: "or maybe someone who had escaped from eliot's poem about the hollow men. i should have been a pair of ragged claws", the latter sentence of which is taken from eliot's poem "the love song of j. alfred prufrock" ( ). there is also a reference in under the dome ( ), made by the journalist julia shumway.

theodore dalrymple's book not with a bang but a whimper ( ) takes its title from the last part of the poem.[ ]

nevil shute's novel on the beach ( ) takes its name from the second stanza of part iv of the poem, and extracts from the poem, including the passage in which the novel's title appears, have been printed in the front papers of some editions of the book, including the first us edition.[ ]

kami garcia's novel beautiful creatures contains this quote. karen marie moning's fever series novel shadowfever contains quotes from this poem. tracy letts's play august: osage county ends with johanna quoting, "this is how the world ends, not with a bang but with a whimper" repeatedly. the prologue of n. k. jemisin's novel the fifth season references the last four lines of this poem. v. e. schwab's monsters of verity duology ( , ) quotes the line "not with a bang but with a whimper" throughout the novels.
attia hosain's sunlight on a broken column borrows its title from eliot's the hollow men. karl marlantes's novel matterhorn ( ), of the vietnam war, weaves a quote from eliot's the hollow men into the story: "between the emotion and the response falls the shadow." haruki murakami's novel kafka on the shore ( ) mentions the poem as oshima explains that he is most afraid of people lacking imagination, "the kind t. s. eliot called hollow men".

multimedia

chris marker created a -minute multimedia piece for the museum of modern art in new york city titled owls at noon prelude: the hollow men ( ), which was influenced by eliot's poem.[ ]

video games

in the video game shadow man, the final boss, legion, quotes "this is the way the world ends, michael, not with a bang but with a whimper" just before giving the final blow to the protagonist in the bad ending. the full poem is included in one of the flashbacks in super columbine massacre rpg! in the video game metal gear solid : sons of liberty, the ai colonel in the final speech quotes "and this is the way the world ends. not with a bang, but a whimper".[ ]

music

denis apivor wrote a work called the hollow men for baritone, male chorus and orchestra around . it had only one performance, in , under the conductor constant lambert, and was produced by the bbc through the influence of edward clark.[ ] the american new wave band devo partially lifted from the poem for the chorus of the song "the shadow" on their seventh album, total devo ( ). australian rock band tism spoofed the poem's first epigraph for their song "mistah eliot - he wanker", which mocked eliot and the concept of modernism. the song "hollow man" appears as the first track on the album doppelgänger ( ) by the group daniel amos; the song is a paraphrase of eliot's poem spoken over the music of "ghost of the heart" played backwards; "ghost of the heart" is the last song on the group's previous album ¡alarma!
( ). members of the minneapolis hip-hop collective doomtree have referenced this poem in a number of collective and individual songs, including the single "no homeowners" on the collective's album false hopes, ". airweight" on 's all hands, and the single "flash paper" from member sims's album more than ever. the last line of the poem is referenced in amanda palmer's song "strength through music", based on the columbine high school massacre. eliot's poem inspired the hollow men ( ), a piece for trumpet and string orchestra by composer vincent persichetti and one of his most popular works.[ ] jon foreman of switchfoot says that the band's song "meant to live" is inspired by eliot's poem.[ ] the gaslamp killer song "in the dark..." has the lyrics "this is the way the world ends" in it. the greek black metal band rotting christ uses excerpts of the poem as lyrics for the song "thine is the kingdom" on their album sleep of the angels; the final stanza of the poem is used as the song's chorus. finnish musical producer axel thesleff created a musical interpretation of the poem in the form of a five-track lp.[ ] british band emf used the line "this is the way the world ends, this is the way the world ends, this is the way the world ends", using wordplay to add "not with a band", in their song "longtime" on the album schubert dip.[ ] the release i am alien by north carolina-based band scapegoat references the poem in the title track: "so thank you mr. elliot for showing us the way the world will end, not with a bang, but with a whimper on the wind, and i've come to tell all of my friends, that it starts with a bang, but ends with the wind." frank turner references the hollow men in his song "sons of liberty". his song "anymore" also contains a reference to the last line of the poem. in , american experimental hip hop musician lil ugly mane repeatedly sampled the line "this is the way the world ends" in the song "columns", from his sophomore album oblivion access.
wire's song "comet" (from the album send) features a chorus that references the poem: "and the chorus goes ba-ba-ba-ba-bang....then a whimper".

television

not with a bang was a short-lived british television sitcom produced by lwt for itv in .

british television series: doctor who ("the lazarus experiment") and foyle's war (series , episode , which also name-drops "ash wednesday" and horizons magazine, to which eliot contributed).

american television shows: rock, the big bang theory, northern exposure, the title of the finale of dexter's sixth season (in which the protagonist also quotes the titular verse), dollhouse (in "the hollow men" episode), frasier, mad men, and the x-files (in the "pusher" episode); also "the haunting hour" ("scarecrow"), "the stand" (the plague), and the odd couple.

british television series the fall references the poem in its title, and in a series episode serial killer paul spector writes the lines 'between the idea and the reality, between the motion and the act falls the shadow' in his journal of his murders. american series agents of s.h.i.e.l.d., in the finale of season , has holden radcliffe partially quote the final stanza as he disappears.[ ] american crime drama the sinner has the character nick recite the lines "prickly pear, prickly pear, prickly pear" from part v.

see also

gunpowder plot in popular culture

references

^ a b eliot, t. s. ( ) [ ]. poems – . london: faber & faber. ^ a b c d e f g see, for instance, the work of one of eliot's editors and major critics, ronald schuchard. ^ swarbrick, andrew ( ). selected poems of t. s. eliot. basingstoke and london: macmillan. ^ a b "t. s. eliot, the poet, is dead in london at ". the new york times. january . retrieved december . ^ eliot, t. s. inventions of the march hare: poems – (harcourt, ). christopher ricks, the editor, cited a letter dated january to the times literary supplement. ^ "gunpowder plot | definition, summary, & facts". encyclopedia britannica.
retrieved march . ^ a b "dante's inferno". www.gutenberg.org. retrieved march . ^ "t. s. eliot at seventy, and an interview with eliot". henry hewes. saturday review, september , in grant, p. . ^ a b c d e gallup, donald clifford ( ). t. s. eliot: a bibliography. london: faber. ^ murphy, russell elliott ( ). critical companion to t. s. eliot: a literary reference to his life and work. new york, ny: facts on file. ^ dargis, manohla ( november ). "southland tales". the new york times. ^ anon. "t. s. eliot: timeless influence on a modern generation". brushed with mystery. retrieved december . ^ not with a bang but a whimper: the politics and culture of decline (us edition) ( ). ^ shute, nevil ( ). on the beach. new york: william morrow and company. ^ "chris marker's short film: owls at noon, prelude: the hollow men". moma.org. . ^ "metal gear solid ending analysis". junkerhq.net. retrieved february . ^ david wright, denis apivor. ^ spector, irwin ( may ). "on stage at k.u." lawrence journal world. retrieved december . ^ "switchfoot - behind the songs of the beautiful letdown". retrieved march . ^ "the hollow men lp by axel thesleff". soundcloud.com. retrieved june . ^ "lyrics to emf longtime". metrolyrics.com. retrieved june . ^ freeman, molly ( may ). "agents of s.h.i.e.l.d. season finale review & discussion". screen rant. retrieved may .

external links

an omnibus collection of t. s. eliot's poetry at standard ebooks. text of the poem with notes. scans of the publication of the poem, in a reprint.
gene stratton-porter

gene stratton-porter. born august , , lagro, indiana; died december , (aged  ), los angeles, california. occupation: author, naturalist, nature photographer, film producer. nationality: american. genre: natural history.

gene stratton-porter (august , – december , ), born geneva grace stratton, was a wabash county, indiana, native who became a self-trained american author, nature photographer, and naturalist.
in stratton-porter used her position and influence as a popular, well-known author to urge legislative support for the conservation of limberlost swamp and other wetlands in the state of indiana. she was also a silent-film-era producer who founded her own production company, gene stratton porter productions, in . stratton-porter wrote several best-selling novels in addition to columns for national magazines such as mccall's and good housekeeping, among others. her novels have been translated into more than twenty languages, including braille, and at their peak in the s attracted an estimated million readers. eight of her novels, including a girl of the limberlost, were adapted into moving pictures. stratton-porter was also the subject of a one-woman play, a song of the wilderness. two of her former homes in indiana are state historic sites: the limberlost state historical site in geneva and the gene stratton-porter state historic site on sylvan lake, near rome city, indiana.
early life and education

geneva grace stratton, the twelfth and last child of mary (shallenberger) and mark stratton, was born at the family's hopewell farm on august , , near lagro in wabash county, indiana.[ ][ ] mark stratton, a methodist minister and farmer of english descent, and mary stratton, a homemaker of german-swiss ancestry,[ ] were married in ohio on december , , relocated to wabash county, indiana, in , and settled at hopewell farm in . geneva's eleven siblings included catherine, mary ann, anastasia, florence, ada, jerome, irvin, leander, and lemon, in addition to two sisters, samira and louisa jane, who died at a young age. geneva's married sister, mary ann, died in an accident in february ; her teenaged brother, leander, whom geneva called laddie, drowned in the wabash river on july , .[ ]

in , twelve-year-old geneva moved to wabash, indiana, with her parents and three unmarried siblings. they initially lived in the home of geneva's married sister, anastasia, and her husband, alvah taylor, a lawyer.[ ] geneva's mother died on february , , less than four months after the move to wabash. thereafter, geneva boarded with various relatives in wabash until her marriage to charles porter in . geneva, who was also called geneve during her youth, shortened her name to gene during her courtship with porter.[ ]

(photo caption: one of stratton-porter's early nature photographs of owls, one of her favorite birds to study and photograph.)

gene received little formal schooling early in life; however, she developed a strong interest in nature, especially birds.
as a young girl, gene was taught by her father and her brother leander to appreciate nature as she roamed freely around the family farm, observing animals in their natural habitats and caring for various pets.[ ] it was said of stratton-porter's childhood that she had been "reared by people who constantly pointed out every natural beauty, using it wherever possible to drive home a precept, the child [stratton-porter] lived out-of-doors with the wild almost entirely."[ ] after the family moved to wabash in , gene attended school on a regular basis and became an avid reader. she also began music lessons in banjo, violin, and piano with her sister florence, and received private art lessons from a local instructor. gene finished all but the final term of her senior year at wabash high school. because she was failing her classes, she made the decision on her own to quit, later claiming that she had left school to care for anastasia, who was terminally ill with cancer and receiving treatment in illinois.[ ]

marriage and family

in , thirty-four-year-old charles dorwin porter saw gene stratton during her trip to sylvan lake, indiana, where she was attending the island park assembly, a chautauqua gathering. porter, a druggist, was thirteen years older than stratton, who was not yet twenty-one.[ ] after ten months of regularly exchanging letters, the couple met at another gathering at sylvan lake during the summer of . they became engaged in october and were married on august , . gene stratton-porter kept her family surname and added her husband's after her marriage.[ ]

charles porter, who had numerous business interests, became a wealthy and successful businessman. of scots-irish descent, he was the son and oldest child of elizabeth and john p. porter, a doctor. charles owned an interest in a drugstore in fort wayne, indiana, which he sold soon after his marriage, and also owned drugstores in decatur and geneva.
he also owned and operated farms, a hotel, and a restaurant. porter and other investors organized the bank of geneva in . he also became a trenton oil company investor; at one time he had more than sixty oil wells drilled on his land.[ ][ ]

gene and charles porter's only child, a daughter named jeannette, was born on august , , when the porters were living in decatur, indiana. the family moved to geneva, in adams county, indiana, in . charles pursued various business interests and traveled extensively, while gene stayed at home.[ ] gene took pride in her family and in maintaining a home, but she opposed the restrictive, traditional marriages of her era and grew bored and restless. she maintained her independence through the pursuit of her lifelong interests in nature and birdlife, and began by writing about these subjects to earn her own income. in time, she became an independently wealthy novelist, nonfiction writer, and film producer.[ ]

stratton-porter had four grandchildren: two granddaughters and two grandsons. the porters' daughter, jeannette, married g. blaine monroe in and had two daughters: jeannette helen monroe was born on november , ; gene stratton monroe was born on march , . the monroes divorced in , and jeannette and her two daughters then moved to los angeles, california, to live with stratton-porter, who had moved there in . on june , , jeannette married james leo meehan, a film producer who was stratton-porter's business associate. after the death of her brother, lemon stratton, in late , stratton-porter became the guardian of his daughter, leah mary stratton. leah lived with stratton-porter for several years after her father's death.[ ]

major residences

in stratton-porter persuaded her husband, charles, to move their family from decatur to geneva in adams county, indiana, where he would be closer to his businesses.
he initially purchased a small home within walking distance of his drugstore;[ ] however, when oil was discovered on his land, it provided the financial resources needed to build a larger home.[ ] the limberlost cabin at geneva served as stratton-porter's home from to .[ ][ ] in , with the profits she made from her best-selling novels and successful writing career, stratton-porter purchased property along sylvan lake, near rome city in noble county, indiana, and built the cabin at wildflower woods estate, which eventually encompassed acres ( hectares). both of these properties are preserved as state historic sites.[ ]

stratton-porter moved to southern california in and made it her year-round residence. she purchased homes in hollywood and built a vacation home that she named singing water on her property on catalina island. floraves, her lavish mountaintop estate in bel air, was nearly completed at the time of her death in , but she never lived in it.[ ]

limberlost cabin (geneva, indiana)

main article: gene stratton porter cabin (geneva, indiana)

(photo caption: limberlost state historic site, western side.)

construction on a two-story, -room, cedar-log queen anne style rustic home in geneva began in and was completed in . the porters named their new home the limberlost cabin in reference to its location near the , -acre ( , -hectare) limberlost swamp, where stratton-porter liked to explore and found the inspiration for her writing. stratton-porter lived in the cabin until .[ ][ ]

while residing in geneva, stratton-porter spent much time exploring, observing nature, sketching, and making photographs at the limberlost swamp. she also began writing nature stories and books. the nearby swamp was the setting for two of her most popular novels, freckles ( ) and a girl of the limberlost ( ). in addition, the swamp was the locale for many of her works of natural history.
stratton-porter became known as "the bird lady" and "the lady of the limberlost" to friends and readers.[ ][ ] between and , the area's wetlands around stratton-porter's home were drained to reclaim the land for agricultural development, and the limberlost swamp, along with the flora and fauna that stratton-porter had documented in her books, was destroyed. in she purchased property for a new home at sylvan lake in noble county, indiana. the porters sold the limberlost cabin in .[ ] in the limberlost conservation association of geneva donated it to the state of indiana. designated as the limberlost state historic site, it is operated by the indiana state museum and historic sites as a house museum. it was listed in the national register of historic places in .[ ][ ][ ]

cabin at wildflower woods

main article: gene stratton-porter cabin (rome city, indiana)

gene's cabin at wildflower woods is the present-day gene stratton-porter state historic site on sylvan lake in rome city, noble county, indiana. after the limberlost swamp was drained and its natural resources developed for commercial purposes, stratton-porter sought alternate locations for inspiration. she initially purchased a small home on the north side of sylvan lake, near rome city, in noble county, indiana, as a summer home while she looked for property on which to build a new residence. in she purchased lakeside property using her own funds and designed and had a new home built there in . stratton-porter named her new home the cabin at wildflower woods, which she also called limberlost cabin because of its similarity to the porters' home in geneva.[ ] while her sylvan lake home was under construction, stratton-porter found time to write laddie ( ), her sixth novel.
she moved into the large, two-story, cedar-log cabin in february ; her husband, charles, who remained at their home in geneva, commuted to the lakeside property on weekends.[ ] stratton-porter assisted in developing the grounds of wildflower woods into her private wildlife sanctuary. its natural setting provided her with the privacy she desired, at least initially; however, her fame attracted too many unwanted visitors and trespassers. the property's increasing lack of privacy was one of the reasons for her move to california in . stratton-porter offered to sell her property to the state of indiana in to establish a state nature preserve, but representatives of the state government did not respond. she retained ownership of wildflower woods for the remainder of her life.[ ] scenes from a movie based on stratton-porter's book the harvester were filmed there in .[ ]

in the gene stratton-porter association purchased wildflower woods from stratton-porter's daughter, jeannette porter meehan; in the association donated the -acre ( . -hectare) property to the state of indiana, including the cabin, its formal gardens, orchard, and pond. designated as the gene stratton-porter state historic site, the present-day -acre ( -hectare) property, including acres ( . hectares) that were part of her original purchase, is operated by the indiana state museum and historic sites and open to the public. the property was listed on the national register of historic places in .[ ][ ] in addition to the cabin, guests can explore a one-acre formal garden, wooded paths, and a -acre ( -hectare) wetland and prairie that is undergoing restoration.[ ] the gene stratton-porter state historic site is supported by the gene stratton-porter memorial society, inc.

california homes

lack of privacy at her home on sylvan lake in indiana was among the reasons for stratton-porter's move to california.
She arrived in Southern California in the fall of , intending to spend the winter months there, but enjoyed it so much that she decided to make it her year-round home. Stratton-Porter enjoyed an active social life in the Los Angeles area, made new friends, began to publish her poetry, and continued to write novels and magazine articles. In she also established her own film production company.[ ][ ] Stratton-Porter initially purchased a small home between Second and Third Streets in Hollywood, not far from where her Stratton relatives lived. (Stratton-Porter's sister, Catherine, and two of Stratton-Porter's nieces were already living in Southern California when she moved there. Her brother, Jerome, and his wife later retired nearby.) In , when Stratton-Porter's recently divorced daughter, Jeannette, and Stratton-Porter's two granddaughters relocated to California to live with her, she purchased a larger home at the corner of Serrano and Fourth Street, while Charles remained at Geneva, still active in the town's bank. After the Porters sold the Limberlost Cabin in , he stayed at a Geneva boardinghouse when he was not traveling.[ ] In early Stratton-Porter purchased two lots on Catalina Island to build a -room vacation retreat. The grounds of the -acre ( . -hectare) property included a fountain constructed of local stone and seashells. Stratton-Porter moved into the wildlife haven in June and named it Singing Water because of the sounds emitting from the elaborate fountain. She completed her last novel, The Keeper of the Bees ( ), at Catalina Island in .[ ] By March Stratton-Porter had selected a site for an estate home in Southern California in an undeveloped area west of present-day Beverly Hills that became Bel Air. Stratton-Porter was the first to build a residence there. The -room, English Tudor-style mansion included approximately , square feet ( , m²) of living space and was set on a small mountaintop.
The property also included a -car garage with servants' quarters above it, a greenhouse, outdoor ponds, and a tennis court. Stratton-Porter named her estate Floraves, for flora (meaning flowers) and aves (meaning birds). She died on December , , a few weeks before the home was completed. Her daughter, Jeannette, was the sole heir of her mother's estate.[ ]

Career

While her marriage to Charles Porter provided financial security and personal independence, Gene sought additional roles beyond those of wife and mother. She took up writing in as an outlet for self-expression and as a means to earn her own income. Stratton-Porter felt that as long as her work did not interfere with the needs of her family, she was free to pursue her own interests. She began her literary career by observing and writing about the birdlife of the upper Wabash River valley and the nature she had seen during visits to the Limberlost Swamp, less than a mile from her home in Geneva, Indiana. The Limberlost Swamp, the Limberlost Cabin at Geneva, and, after , the Cabin at Wildflower Woods at Sylvan Lake in northeastern Indiana became the laboratories for her nature studies and the inspiration for her short stories, novels, essays, photography, and movies.[ ] Stratton-Porter wrote twenty-six books that included twelve novels, eight nature studies, two books of poetry, and four collections of stories and children's books. Of the fifty-five books that sold one million or more copies between and , five were novels written by Stratton-Porter. Among Stratton-Porter's best-selling novels were Freckles ( ), A Girl of the Limberlost ( ), The Harvester ( ), Laddie ( ), and Michael O'Halloran ( ).[ ][ ] Stratton-Porter incorporated everyday occurrences and acquaintances into her works of fiction. Many of her works delve into difficult subject matter, such as themes of abuse, prostitution, and abandonment.
In the case of Her Father's Daughter ( ), her writing reflected the anti-Asian sentiment that was prevalent in the United States during that era. Her other writing also introduced the concept of land and wildlife conservation to her readers.[ ] Although Stratton-Porter preferred to focus on nature books, it was her romantic novels that gained her fame and wealth. She often drew an unmistakable link between nature and romance in her plotlines; nature often represents a comfort for her characters, as she felt it had been for her as a child. These romantic novels generated the income that allowed her to pursue her nature studies. Her novels have been translated into twenty-three languages, as well as Braille. At its peak in the early s, her readership was estimated at million, with earnings from her literary works estimated at $ million.[ ][ ]

Author

Early years

Stratton-Porter began her career in , when she sent nature photographs that she had made to Recreation magazine. Her first published article, "A New Experience in Millinery," appeared in the publication's February issue. The article described her concerns about harming birds in order to use their feathers as hat trims. At the magazine's request, Stratton-Porter also wrote a photography column called "Camera Notes." In July she switched to doing similar work for Outing, a natural history magazine.[ ][ ] Stratton-Porter was soon submitting short stories and nature-related material to magazines on a regular basis with increasing success. Her first short story, "Laddie, the Princess, and the Pie," was published in Metropolitan magazine in September .[ ] To attract a wider audience, Stratton-Porter decided to include fictional elements in her nature writing and began writing novels.
Stratton-Porter's writing also included poetry and children's stories, in addition to essays and editorials that were published in magazines with nationwide circulation such as McCall's and Good Housekeeping.[ ]

Novels

Circumstantial evidence suggests that Stratton-Porter's first book was The Strike at Shane's, published anonymously in ; however, Stratton-Porter never acknowledged that she had written it, and its author was never revealed.[ ] Bobbs-Merrill published her first full-length attributed novel, The Song of the Cardinal ( ), about a redbird living along the Wabash River. The book explained how birds lived in the wild and also included her photographs. Although the novel was a modest commercial success and was warmly received by literary critics, Stratton-Porter's publisher believed that nature stories would not become as popular as romance novels. For her second novel, Stratton-Porter decided to combine nature and romance. Freckles ( ), which was published by Doubleday, Page and Company, became a bestseller. The book's popularity among readers helped to launch her career as a successful novelist, despite its lackluster reviews from critics.[ ] A Girl of the Limberlost ( ), which was highly successful and her best-known work, brought her worldwide recognition.
Its central character, Elnora Comstock, a lonely, poverty-stricken girl living on a farm in Adams County, goes to the Limberlost Swamp to escape from her troubles and earns money to pay for her education by collecting and selling moth specimens.[ ] The main character's strong, individualistic nature is similar to Stratton-Porter's.[ ] Literary critics called the novel a "well written" and "wholesome story."[ ] Initial sales of her third novel, At the Foot of the Rainbow ( ), about two friends who enjoy fishing and trapping, were "disappointing,"[ ] but Stratton-Porter reached the peak of her popularity with the publication of her next novel, The Harvester ( ), which centers on David Langston, who harvests and in turn sells medicinal herbs, and his love interest, Ruth Jameson, who embodies his ideal partner. It reached number one on the best-seller list in .[ ] Freckles ( ), A Girl of the Limberlost ( ), and The Harvester ( ) are set in the wooded wetlands and swamps of northeast Indiana. Stratton-Porter loved the area and its wildlife and had documented them extensively.[ ] Inexpensive reprints of Freckles and A Girl of the Limberlost brought Stratton-Porter to the public's attention in the United States as well as abroad. Translations of her books into other languages also increased her international audience. In , when Stratton-Porter reached a long-term agreement with Doubleday, Page and Company to publish her books, she agreed to provide one manuscript each year, alternating between novels and nonfiction nature books.[ ] Stratton-Porter's next novel, Laddie: A True Blue Story ( ), another of her best-selling novels, included elements that corresponded to her early life. It was written while she supervised construction of her home at Sylvan Lake in Noble County, Indiana, and she described it as her most autobiographical novel. The narrative is told in the first person by the twelfth child of the "Stanton" family.
The title character is modeled after Stratton-Porter's deceased older brother, Leander, whom Stratton-Porter nicknamed Laddie. As in Stratton-Porter's own family, Laddie is connected with the land and identifies with his father's vocation of farming.[ ][ ] Michael O'Halloran ( ), her seventh novel, was inspired by a newsboy she had encountered in Philadelphia while visiting her daughter, Jeannette, and her family. A Daughter of the Land ( ), her next novel, did not sell as well as her earlier works.[ ] Over time, sales of Stratton-Porter's novels slowly declined, and by her status as a best-selling author had begun to fade. Undeterred, she continued to write until her death in .[ ] Her Father's Daughter ( ), one of Stratton-Porter's last novels, was set in Southern California, outside Los Angeles, where she had moved around . The novel is especially biased against immigrants of Asian descent. Judith Reick Long, one of Stratton-Porter's biographers, stated that World War I-era racial prejudice and nativism were prevalent in the United States and that anti-Asian sentiment was not unusual in Southern California at that time. Barbara Olenyik Morrow, another of her biographers, explained that the book was intentionally playing to the era's ethnic prejudices. The Literary Review, ignoring its anti-Asian content, noted its "wholesome charm."[ ][ ] The White Flag ( ), criticized as an old-fashioned melodrama, failed to make the bestseller list; however, the story was serialized in Good Housekeeping magazine beginning in , in advance of the book's release. By the time of its publication, Stratton-Porter's interests had shifted toward filmmaking.[ ] The Keeper of the Bees ( ) and The Magic Garden ( ) were the last of Stratton-Porter's novels completed before her death. Both of them were written at her home on Catalina Island and published posthumously.
The Keeper of the Bees is a story about a World War I veteran who regains his health through the restorative "power and beauty of nature."[ ] The story was serialized in McCall's magazine from February through September and was published in book form later that year. The Magic Garden, about a girl of divorced parents, was written for her two granddaughters, whose parents divorced when they were young. Filmmaker James Leo Meehan, Stratton-Porter's business partner and son-in-law, wrote a screenplay of the novel shortly after Stratton-Porter had completed the manuscript.[ ] Millions of copies of Stratton-Porter's novels were sold, and most of them became best sellers, but the literary establishment criticized them as "unrealistic," "too virtuous," and "idealistic."[ ] Despite the criticisms, she was popular among readers of her novels.[ ] Stratton-Porter once claimed, "Time, the hearts of my readers, and the files of my publisher will find me my ultimate place."[ ]

Nature books

Stratton-Porter, a keen observer of nature, wrote eight nonfiction nature books that were moderate sellers compared to her novels. What I Have Done with Birds ( ) first appeared as a six-month illustrated series for the Ladies' Home Journal from April to August . The Bobbs-Merrill Company published the material in a book format that also includes Stratton-Porter's photographs.[ ] Birds of the Bible ( ), an illustrated reference book published by Jennings and Graham of Cincinnati, included eighty-one of Stratton-Porter's photographs. Both of these nature books were slow sellers. Music of the Wild ( ), also published by Jennings and Graham, warned of the adverse effects that the destruction of trees and swamps would have on rainfall.
Her warnings appeared nearly two decades before the Dust Bowl of the s and well in advance of present-day environmental concerns about climate change.[ ] Moths of the Limberlost ( ), the nature book of which Stratton-Porter was "most proud," was dedicated to Neltje Blanchan, a fellow nature writer and the wife of her publisher, Frank Nelson Doubleday.[ ] Prior to her move to California in , Stratton-Porter completed the manuscript for Homing with the Birds ( ). Praised for its content, it described birdlife using easy-to-understand language for the general public. Wings ( ) was published a year before her death; Tales You Won't Believe ( ) was published posthumously.[ ] While literary critics called her novels overly sentimental, academics dismissed her nature writing because they felt that her research methods were unscientific. Stratton-Porter, who was not a trained scientist, centered her field research on her own interests in observing the domestic behavior of wild birds, such as their nest-building, diets, and social behavior. Her writing tried to explain nature in ways that her readers could understand and avoided scientific jargon and tedious, dry statistics.[ ]

Magazine articles

Stratton-Porter regularly contributed articles and photographs to magazines that included Metropolitan, Recreation, Outing, Country Life in America, and the Ladies' Home Journal. After her move to California in , Stratton-Porter wrote articles for the Izaak Walton League's publication, Outdoor America, and a thirteen-part series of nature articles for Good Housekeeping.
She also agreed to write a series of editorials for McCall's magazine in a monthly column called "Gene Stratton-Porter's Page," beginning in January .[ ] Tales You Won't Believe ( ), a collection of articles that Stratton-Porter had written for Good Housekeeping, and Let Us Highly Resolve ( ), a collection of essays that had appeared in McCall's magazine, were published after her death.[ ]

Children's stories and poetry

Morning Face ( ), a collection of children's stories that also included her photographs, was dedicated to her granddaughter, Jeannette Monroe, whom Stratton-Porter had nicknamed "Morning Face."[ ] "Symbols," her first poem to appear in a national magazine, was published in Good Housekeeping in January . The Fire Bird ( ), a Native American tragedy, was the first of her long narrative poems to be published in book form. Its sales were weak, and it was not well received by literary critics. In Good Housekeeping published Stratton-Porter's poem "Euphorbia" in three installments[ ] and paid her $ , , "the most she had ever received for her poetry."[ ] Jesus of the Emerald ( ), another of her long narrative poems, describes Tiberius Caesar's search for details of Jesus's works and appearance. Stratton-Porter explains her religious beliefs in the afterword of the book.[ ]

Nature photographer

In addition to writing, Stratton-Porter was an accomplished artist and wildlife photographer, specializing in the birds and moths that lived in the Limberlost Swamp, one of the last of the wetlands of the lower Great Lakes basin. She also made sketches of her observations as part of her fieldwork. Stratton-Porter was especially noted for her close-up photographs of wildlife in their natural habitat.
In one of her early photographic studies, she documented the development of a black vulture over a period of three months.[ ] Stratton-Porter reported in What I Have Done with Birds ( ) that the effort "yielded me the only complete series of vulture studies ever made."[ ] Stratton-Porter began photographing birds in the Limberlost Swamp and along the Wabash River near her home in Geneva, Indiana, after her husband, Charles, and daughter, Jeannette, presented her with a camera as a Christmas gift in . She submitted some of her early photographs to Recreation magazine in the late s and wrote a regular camera column for the publication in . Outing magazine hired her to do similar work in . Unhappy with the images the magazine's editors suggested to accompany her writing, she began to submit her own photographs as illustrations for her articles. She also preferred to use her own photographs to illustrate her nature books.[ ] Thirteen of her wildlife photographs were published in in the American Annual of Photography, which also included her views on her fieldwork. Many of the photographs in Music of the Wild ( ) were taken at her Sylvan Lake home in northeastern Indiana.[ ] Stratton-Porter preferred to photograph wildlife in its natural environment.[ ] Although she hired men to help transport her cumbersome camera equipment into the field for photo shoots, she preferred to work alone. Occasionally, her husband accompanied her into the field. As Stratton-Porter gained more experience, she acquired better camera equipment, including a custom-made camera that used eight-by-ten-inch glass photographic plates. Stratton-Porter believed that the larger plates provided her with more detailed photographs of her subjects.
She also developed her photographic plates in a darkroom she set up in the bathroom at Limberlost Cabin, her family's home in Geneva, Indiana, and later in her darkroom at the Cabin at Wildflower Woods along Sylvan Lake.[ ]

Naturalist and conservationist

Through her writing and photography, Stratton-Porter demonstrated "her strong desire to instill her love of nature in others in order to improve their lives and preserve the natural world."[ ] She also opposed the destruction of wetlands developed for commercial use. After the turn of the twentieth century, when the Limberlost Swamp's trees were cut for timber and its shrubs and vines were killed, the resulting commercial development, which included oil drilling, destroyed its wildlife. The swamp was drained into the Wabash River.[ ] In Stratton-Porter became more active in the conservation movement when the Indiana General Assembly passed legislation to allow drainage of state-owned swamps in Noble and LaGrange Counties. She joined with others to urge the state legislature to repeal the law that would lead to the destruction of wetlands in northeastern Indiana. Although the law was repealed in , the area's swamps were eventually drained.[ ][ ] In Stratton-Porter became a founding member of the Izaak Walton League, a national conservation group, and joined its efforts to save the wild elk at Jackson Hole, Wyoming, from extinction. Stratton-Porter called on the readers of Outdoor America, the league's publication, to take prompt action.[ ] She was also a strong advocate of land and wetland conservation. As she wrote in an essay, "All Together, Heave," for Outdoor America in , "if we do not want our land to dry up and blow away, we must replace at least part of our trees," and urged conservation of American waterways.[ ]

Movie producer

Stratton-Porter, a "pioneer" in the Hollywood film industry, was dissatisfied with the movie studios' adaptations of her novels.
Because she wanted more control over the production work, Stratton-Porter expanded her business ventures to include her own production studio to make moving pictures based on her novels. Eight of her novels have been made into movies.[ ] Paramount Pictures produced Freckles, the first film based on her novels, in , but Stratton-Porter was unhappy with the movie because it did not closely follow her novel, and she decided to make her own films.[ ] Stratton-Porter's first filmmaking effort was made with Thomas H. Ince on Michael O'Halloran ( ). Stratton-Porter supervised the filming and assisted the principal director, James Leo Meehan. Her daughter, Jeannette, wrote the screenplay.[ ] In Stratton-Porter formed her own movie studio and production company. Gene Stratton-Porter Productions created moving pictures that were closely based on her novels.[ ] Before her death in December , Stratton-Porter's production company had produced two films, Michael O'Halloran ( ) and A Girl of the Limberlost ( ), and she had completed her novel The Keeper of the Bees for a third film.[ ] Stratton-Porter's studio filmed The Harvester ( ) at her Wildflower Woods estate in northeastern Indiana. Film Booking Offices of America released the movies produced by Stratton-Porter's studio. None of these FBO-released films are known to survive.[ ][ ] Stratton-Porter's stories remained popular among filmmakers after her death. RKO Pictures, a successor to Film Booking Offices, made Freckles and Laddie in . Monogram Pictures made A Girl of the Limberlost ( ), Keeper of the Bees ( ), and Romance of the Limberlost ( ).[ ] Republic Pictures released The Harvester ( ) and Michael O'Halloran ( ). The original negatives and mm prints of these early films are unlikely to have survived; however, some mm versions created for television have been acquired by private collectors.[ ] A Girl of the Limberlost was adapted four times for film.
It was first adapted as a silent film produced by Stratton-Porter's production company in , with Gloria Grey in the title role. The version was directed by W. Christy Cabanne, and its cast included Marian Marsh in the starring role and silent-era film stars Henry B. Walthall, Betty Blythe, and Louise Dresser, an Indiana native. The version included Ruth Nelson. The made-for-television movie starred Joanna Cassidy as Stratton-Porter.[ ] Romance of the Limberlost ( ), directed by William Nigh, featured Indiana actress Marjorie Main in the role of the mean stepmother.[ ] The Keeper of the Bees was adapted four times as a movie. It was first released as a silent film in , starring Robert Frazer; in as a Monogram film starring Neil Hamilton; in for Columbia Pictures;[ ] and as Keeper of the Bees in a adaptation that was loosely based on the original novel. Stratton-Porter's granddaughter, Gene Stratton Monroe, appeared in the version in the role of Little Scout.[ ][ ]

Later years

In late , after years of strenuous work outdoors, battling with the Indiana state government to protect the state's wetlands, and worrying over the events of World War I, fifty-four-year-old Stratton-Porter checked into Clifton Springs Sanitarium and Clinic, a health retreat for the famous in New York. She recuperated at the resort for a month before returning to her home at Wildflower Woods and taking up new challenges as a poet, filmmaker, and editorialist. In , after recovering from a serious bout of influenza and completing Homing with the Birds ( ), she decided to move to Los Angeles, California.[ ] Southern California's more temperate climate and increased social activities appealed to her. From her California home, Stratton-Porter continued to write novels and poetry, in addition to a series of articles for McCall's magazine.
In she founded Gene Stratton-Porter Productions, Inc., one of the first female-owned studios, and worked with film director James Leo Meehan to create films based on her novels. With her increased business dealings and her enjoyment of the company of many writers, artists, sculptors, and musicians, Stratton-Porter decided to establish her permanent residence in Southern California. Although she retained her home at Sylvan Lake in Indiana, the Porters sold the Limberlost Cabin in Geneva, Indiana, in . At the time of her death in , Stratton-Porter owned Wildflower Woods in Indiana, a year-round home in Los Angeles, and a vacation home on Catalina Island, and was constructing a mansion in Bel Air, California.[ ]

Death and legacy

Stratton-Porter died on December , , at the age of sixty-one, in Los Angeles, California, of injuries received in a traffic accident. Her car, driven by her chauffeur, collided with a streetcar while she was en route to visit her brother, Jerome. Stratton-Porter was thrown from the vehicle and died at a nearby hospital less than two hours later of a fractured pelvis and crushed chest. Her private funeral was held on December at her South Serrano Street home in Hollywood, California. Stratton-Porter's remains were held in a temporary burial vault until and then interred at Hollywood Memorial Park Cemetery.[ ] Stratton-Porter's husband, Charles Porter, died in and was buried in his hometown of Decatur, Indiana; their daughter, Jeannette Porter Meehan, died in California in . In Stratton-Porter's two grandsons, James and John Meehan, arranged to move Stratton-Porter's remains, along with those of their mother, Jeannette Porter Meehan, to Indiana.
The women's remains are interred on the grounds of the Gene Stratton-Porter State Historic Site at Sylvan Lake.[ ][ ] Stratton-Porter's two former residences in Indiana, the Limberlost Cabin in Geneva and the Cabin at Wildflower Woods near Rome City, have been acquired by the State of Indiana and designated as state historic sites to honor her work and relate the story of her life. The Indiana State Museum and Historic Sites operates the two properties as house museums; both of them are open to the public.[ ][ ] Because Stratton-Porter wrote in advance of her publishing deadlines, McCall's magazine had enough of her material to continue publishing her monthly column, the "Gene Stratton-Porter Page," until December , three years after her death. Good Housekeeping and American Magazine also posthumously published other articles that Stratton-Porter had written. In addition, four of her books were published posthumously: two novels, The Keeper of the Bees ( ) and The Magic Garden ( ), and two collections of her articles and essays, Tales You Won't Believe ( ) and Let Us Highly Resolve ( ).[ ] More recently, Indiana University Press reissued eight of Stratton-Porter's novels in the s and s, including A Girl of the Limberlost, which remains "among her best-loved novels";[ ] Kent State University Press published a compilation of Stratton-Porter's poetry, Field o' My Dreams: The Poetry of Gene Stratton-Porter ( ).[ ] Stratton-Porter's nature photographs, correspondence, books, and magazine articles, among other materials, are housed at several repositories, including the Indiana State Library, the Indiana State Museum, and the Indiana Historical Society in Indianapolis; the Lilly Library at Indiana University in Bloomington; the Bracken Library at Ball State University in Muncie; and the Geneva branch of the Adams Public Library in Geneva, Indiana, among other locations.[ ][ ] Bertrand F.
Richards, a Stratton-Porter biographer, called her "one of the best-selling writers of the first quarter of the twentieth century."[ ] She is best known for her novels and nature books; however, her poetry, children's books, and numerous essays, editorials, and monthly columns for magazines such as McCall's and Good Housekeeping are not well known today. After her move to Southern California in , Stratton-Porter also became one of Hollywood's first female producers, and in she was among the first women to form her own production company.[ ][ ] Stratton-Porter, who is remembered for her ambition and individualism, was also a passionate nature lover who encouraged people to explore nature and the outdoors. She especially loved birds and did extensive studies of moths. Among her lasting legacies is her early and outspoken advocacy for nature conservation. Stratton-Porter supported efforts to preserve wetlands, such as the Limberlost Swamp, and to save the wild elk at Jackson Hole, Wyoming, from extinction. She also recognized the impact that cutting down trees would have on climate and encouraged Americans to preserve the environment. As the Izaak Walton League paid tribute to her work in its publication, Outdoor America, following her death, "if we can write her epitaph in terms of clean rivers, clean outdoor playgrounds, and clean young hearts, we shall have done what she would have asked."[ ]

Honors and awards

The Adirondack Forest Preserve Service dedicated a memorial grove of , white pine trees at Tongue Mountain on Lake George, New York, to Stratton-Porter in , shortly after her death.[ ] The American Reforestation Association organized memorial tree plantings after her death on the grounds of Los Angeles-area schools.[ ] The College Woman's Salon of Los Angeles established an annual poetry award in her honor.[ ] R. R.
Rowley named a trilobite, Phillipsia stratton-porteri, in her honor.[ ] The Purdue University Calumet campus's Porter Hall, along with the former elementary school that opened on the site in , was named in her honor.[ ] In Stratton-Porter's portrait was added to the Hoosier Heritage Portrait Collection at the Indiana Statehouse in Indianapolis.[ ] In Stratton-Porter was inducted into the Indiana Natural Resources Foundation's Hall of Fame (inaugural class) as an early conservationist.[ ] In Stratton-Porter was inducted into Wabash High School's Hall of Distinction for her contributions to literature, ecology, and photography.[ ] Stratton-Porter's two former residences in Indiana, the Limberlost Cabin in Geneva and the Cabin at Wildflower Woods near Rome City, were designated state historic sites and listed on the National Register of Historic Places. The Indiana State Museum and Historic Sites operates the two properties as house museums.[ ][ ]

Selected published works

Stratton-Porter's novels, most of them best sellers, became popular in the first quarter of the twentieth century and were widely read.[ ] Her twenty-six published books include twelve novels, eight nature studies, two books of poetry, and four collections of stories and children's books.[ ][ ]

Novels

The Song of the Cardinal, Freckles, At the Foot of the Rainbow, A Girl of the Limberlost, The Harvester, Laddie, Michael O'Halloran, A Daughter of the Land, Her Father's Daughter, The White Flag, The Keeper of the Bees, The Magic Garden[ ]

Nature studies

What I Have Done with Birds (revised as Friends in Feathers in );[ ] Birds of the Bible; Music of the Wild; Moths of the Limberlost; Friends in Feathers (a revised and expanded edition of What I Have Done with Birds);[ ] Homing with
the Birds; Wings; Tales You Won't Believe[ ]

Poetry

The Fire Bird; Jesus of the Emerald; Field o' My Dreams: The Poetry of Gene Stratton-Porter;[ ][ ] "Euphorbia" (published in Good Housekeeping in three monthly installments from January through March ; it was never published in book form)[ ]

Children's books and collected essays

After the Flood, Birds of the Limberlost, Morning Face, Let Us Highly Resolve

Film adaptations of novels

Eight of Stratton-Porter's novels have been made into moving pictures:[ ] Freckles, directed by Marshall Neilan ( , based on the novel Freckles); Michael O'Halloran, directed by James Leo Meehan ( , based on the novel Michael O'Halloran); A Girl of the Limberlost, directed by James Leo Meehan ( , based on the novel A Girl of the Limberlost); The Keeper of the Bees, directed by James Leo Meehan ( , based on the novel The Keeper of the Bees); Laddie, directed by James Leo Meehan ( , based on the novel Laddie); The Magic Garden, directed by James Leo Meehan ( , based on the novel The Magic Garden); The Harvester, directed by James Leo Meehan ( , based on the novel The Harvester); Freckles, directed by James Leo Meehan ( , based on the novel Freckles); A Girl of the Limberlost, directed by Christy Cabanne ( , based on the novel A Girl of the Limberlost); Laddie, directed by George Stevens ( , based on the novel Laddie); The Keeper of the Bees, directed by Christy Cabanne ( , based on the novel The Keeper of the Bees); Freckles, directed by Edward Killy and William Hamilton ( , based on the novel Freckles); The Harvester, directed by Joseph Santley ( , based on the novel The Harvester); Michael O'Halloran, directed by Karl Brown ( , based on the novel Michael O'Halloran); Romance of the Limberlost, directed by William Nigh ( , based on the novel A Girl of the Limberlost); Laddie, directed by Jack Hively ( , based on the novel Laddie); Her First Romance, directed by Edward Dmytryk ( , based on the novel Her Father's Daughter); Freckles
comes home, directed by jean yarbrough ( , based on a sequel to the novel freckles) the girl of the limberlost, directed by mel ferrer ( , based on the novel a girl of the limberlost) the keeper of the bees, directed by john sturges ( , based on the novel the keeper of the bees) michael o'halloran, directed by john rawlins ( , based on the novel michael o'halloran) freckles, directed by andrew v. mclaglen ( , based on the novel freckles) a girl of the limberlost, directed by burt brinckerhoff ( , tv film, based on the novel a girl of the limberlost) city boy, directed by john kent harrison ( , tv film, based on the novel freckles) biographical play[edit] a song in the wilderness, a one-woman show written by larry gard and first performed in , offers a dramatic exploration of stratton-porter's life and experiences.[ ] the - minute play was written for gard's wife, actress marcia quick gard, and financed by an indiana humanities council grant.[ ] the play toured indiana each spring from through and was performed in numerous indiana towns.[ ] in march the carpenter science theatre company produced a production of the play at the eureka theatre in richmond, virginia, directed by gard and featuring quick in the title role.[ ] a spring performance of the play had been scheduled in the rhoda b. thalhimer theater at the science museum of virginia in richmond, but quick died december , .[ ][ ] kerrigan sullivan,[ ] a richmond-based actress, was cast to play the role of stratton-porter. playwright gard, director jones, and actress sullivan dedicated the subsequent performances to quick's memory. the play was also performed at the cat theater at st. catherine's school for girls in richmond[ ] and theaterlab, also in richmond. 
in addition, the play was performed at the university of notre dame's debartolo performing arts center in conjunction with the notre dame shakespeare festival and the limberlost theatre company in .[ ] the friends of the limberlost presented the play in fort wayne, indiana, also in .[ ] references[edit] ^ "gene stratton-porter and her limberlost swamp". archived from the original on january , . retrieved january , . ^ a b c d "biographical sketches" in gene stratton porter collection, – (bulk s– s) collection guide. indianapolis: indiana historical society. . retrieved july , . ^ judith reick long ( ). gene stratton-porter: novelist and naturalist. indianapolis: indiana historical society. p.  . isbn  . ^ long, pp. – , ; pamela j. bennett, ed. (september ). "gene stratton-porter" (pdf). the indiana historian. indianapolis: indiana historial bureau: – . retrieved july , .cs maint: extra text: authors list (link) ^ the likely reasons for sixty-year-old mark stratton’s decision to move to wabash, indiana, was leander's untimely death (mark stratton had hoped his young son would take over the family's farming operations) and his wife's declining health (she had contracted typhoid fever when geneva was young). see long, pp. , , , and barbara olenyik morrow ( ). nature's storyteller: the life of gene stratton-porter. indianapolis: indiana historical society. p.  . isbn  - - - - . ^ long, p. ; morrow, p. . ^ bennett, p. ; morrow, pp. – . ^ stratton-porter, gene. “gene stratton-porter: a little story of the life and work and ideals of ‘the bird woman’ .” edited by s.f.e and mary mark ockerbloom, a celebration of women writers, philadelphia: jas. b. rodgers printing co., digital.library.upenn.edu/women/stratton/gene/gene.html. ^ long, pp. – , – ; morrow, p. . ^ because etiquette required a formal introduction, charles did not approach gene directly. instead, he got her name and address through a cousin who knew gene's brother-in-law. 
two months later, charles wrote gene a letter, inviting her to begin a correspondence with him. she agreed. see long, pp. – , – ; morrow, pp. – . ^ long, p. ; morrow, pp. – . ^ long, pp. – , ; morrow, pp. – . ^ a b albert d. hart jr. "our folk: porter family". albert hart. retrieved january , . ^ linda c. gugin and james e. st. clair, eds. ( ). indiana's : the people who shaped the hoosier state. indianapolis: indiana historical society press. p.  . isbn  - - - - .cs maint: extra text: authors list (link) ^ bennett, pp. , ; long, pp. – , . ^ long, pp. , – ; morrow, p. . ^ long, p. ; morrow, pp. – ; gugin and st. clair, eds., p. . ^ "gene stratton-porter 'limberlost', geneva (adams county)" (pdf). travels in time: hoosiers and the arts. indianapolis: dhpa. retrieved july , . ^ a b c "gene stratton porter cabin" archived - - at the wayback machine, indiana state museum, accessed jan ^ gugin and st. clair, eds., p. ; morrow, pp. , , – . ^ bennett, pp. – ; gugin and st. clair, eds., p. ; long, pp. – , . ^ bennett, p. ; gugin and st. clair, eds., pp. – ; long, pp. – . ^ a b c d e f g h i j "authors: gene stratton-porter". our land, our literature. ball state university. archived from the original on july , . retrieved july , .cs maint: bot: original url status unknown (link) ^ mary e. gaither ( ). introduction. laddie, a true blue story. by gene stratton-porter. bloomington, indiana: indiana university press. pp. ix. isbn  . ^ a b "notable hoosiers: gene stratton-porter". indiana historical society. retrieved july , . ^ bennett, p. ; gugin and st. clair, eds., p. . ^ "indiana state historic architectural and archaeological research database (shaard)" (searchable database). indiana department of natural resources, division of historic preservation and archaeology. retrieved july , . this includes thomas gross. "national register of historic places inventory nomination form: gene stratton porter cabin" (pdf). retrieved july , . 
and accompanying photographs ^ in five cooperating foundations and organizations purchased a -acre ( -hectare) section of marshland in adams and jay counties in a portion of the former limberlost swamp that they renamed the loblolly wetlands and began work to restore the land and its habitat. see "loblolly marsh preserve" (pdf). retrieved may , . other nearby sites related to stratton-porter include the -acre ( -hectare) limberlost bird sanctuary; a -acre ( -hectare) music of the wild prairie and woods; rainbow bend park on the wabash river; the -acre ( -hectare) munro nature preserve, and the -acre ( -hectare) limberlost swamp wetland preserve. see morrow, pp. – . ^ long, p. – ; morrow, pp. – . ^ bennett, pp. , , ; long, pp. – , ; morrow, pp. , . ^ bennett, pp. , , – ; gugin and st. clair, pp. – ; long, p. . ^ eric grayson (winter ). "limberlost found: indiana's literary legacy in hollywood". traces of indiana and midwestern history. indianapolis: indiana historical society. ( ): . retrieved july , . ^ morrow, pp. , – . ^ "indiana state historic architectural and archaeological research database (shaard)" (searchable database). indiana department of natural resources, division of historic preservation and archaeology. retrieved july , . this includes thomas gross. "national register of historic places inventory nomination form: gene stratton-porter cabin" (pdf). retrieved july , . and accompanying photographs. ^ "gene stratton-porter state historic site rome city, indiana". gene stratton-porter state historic site rome city, indiana. retrieved november , . ^ gugin and st. clair, eds, p. ; long, pp. , . ^ long, pp. – , . ^ bennett, pp. – ; long, pp. – . ^ bennett, pp. – ; long, pp. – , , . ^ bennett, p. ; gugin and st. clair, eds., p. ; long, pp. , . ^ a b c morrow, p. . ^ a b c gugin and st. clair, eds., p. . ^ long, pp. – ; morrow, pp. , – . ^ bennett, p. ; morrow, p. . ^ morrow, p. . ^ long, p. – . ^ long, pp. – , ; morrow, pp. – . 
^ a b dawn mitchell (march , ). "gene stratton-porter, naturalist and author". indianapolis star. indianapolis. retrieved july , . ^ long, p. . ^ morrow, p. – . ^ morrow, p. . ^ long, pp. , . ^ long, pp. – ; morrow, pp. , . ^ long, p. ; morrow, p. . ^ morrow, pp. – , . ^ a b c kevin kilbane (october , ). "recently acquired collection of gene stratton-porter letters, photos offer more details of her life". news-sentinel.com. fort wayne, indiana: fort wayne newspapers. archived from the original on april , . retrieved july , . ^ long, pp. – ; morrow, p. . ^ the lead character, linda strong, displays an ugly philosophy regarding japanese immigrants, portraying them as pawns of the japanese government, sent here to "steal" an american education, even though highly educated in japan and far too old for the high school she attends. the japanese are portrayed as copying american inventions, and the japanese villain oka sayye, goes so far as to try to kill a classmate (donald whiting) to prevent being bested in the competition for first place. as encouragement to donald to study harder, linda describes a terrifying future where the other races, being only capable of imitating the innovations of the white man, will learn all the white man knows by studying harder, and by breeding at a higher rate, will remove the white man from his superior position in the world.[citation needed] ^ long, p. ; morrow, pp. – . ^ morrow, p. . ^ bennett, pp. – ; long, pp. – ; morrow, p. . ^ long, pp. – . ^ long, p. . ^ long, pp. – ; morrow, pp. , – . ^ long, p. ; morrow, pp. , . ^ morrow, p. . ^ bennett, p. ; long, p. – ; morrow, pp. , – . ^ bennett, p. ; long, p. ; morrow, p. . ^ long, pp. , – ; morrow, p. . ^ bennett, p. ; morrow, pp. – . ^ long p. ; morrow, p. . ^ long, pp. – ; morrow, p. . 
^ although her father was a methodist minister and she grew up in a christian household, stratton-porter did not regularly attend church as an adult and was not a member of an organized religious congregation. see long, pp. , – ; morrow, p. . ^ long, pp. – ; morrow, pp. , . ^ bennett, p. . ^ bennett, p. ; long, pp. , , ; morrow, pp. , . ^ long, pp. , . ^ bennett, p. ; long, p. ; morrow, pp. , , – . ^ bennett, p. . ^ gugin and st. clair, eds., p. ; long, p. . ^ bennett, pp. , . ^ morrow, pp. – . ^ long, pp. and , note . ^ a b c gugin and st. clair, eds., p. ; morrow, p. . ^ bennett, pp. – ; long, pp. , ; morrow, pp. – . ^ bennett, p. . ^ a b grayson, p. . ^ morrow, p. . ^ grayson, p. . ^ a girl of the limberlost, retrieved - - ^ grayson, pp. – . ^ a b movies based on books by gene stratton-porter ( - ), (geneva, indiana: friends of the limberlost) ^ morrow, p. . ^ stratton-porter had given her granddaughter, gene monroe, the nickname of little scout. see "the indiana historian" (pdf). ^ bennett, pp. , – ; long, pp. , , – ; morrow, p. . ^ bennett, pp. – ; long, pp. – , – . ^ long, p. ; morrow, pp. – . ^ bennett, p. ; long, p. ; morrow, p. . ^ a b "new documents, letters and photographs round out gene stratton-porter collection". inperspective. indianapolis: indiana historical society. ( ): . january . ^ morrow, pp. – . ^ morrow, p. . ^ morrow, p. . ^ morrow, pp. – . ^ bennett, p. . ^ morrow, pp. – . ^ a b c long, pp. – . ^ long, p. . ^ robin biesen. "history helps dedicate puc building". retrieved august , . see also: "facilities services, - | archive repository". pucarch.purduecal.edu. archived from the original on december , . retrieved november , . ^ morrow, p. . ^ "indiana natural resources foundation". www.in.gov. retrieved - - . ^ "hall of distinction – district – wabash city schools". www.apaches.k .in.us. retrieved - - . ^ "national register information system". national register of historic places. national park service. july , . ^ r. e. 
banta and bruce rogers ( ). indiana authors and their books, - . crawfordsville, indiana: wabash college. p.  . oclc  . ^ a b morrow, p. . ^ stratton-porter, gene, and mary dejong obuchowski ( ). field o' my dreams: the collected poems of gene stratton-porter. kent state university. pp. kent, ohio. isbn  .cs maint: multiple names: authors list (link) ^ morrow, p. . ^ "touring production brings naturalist and author gene stratton-porter to life : related articles | ooyuz". www.ooyuz.com. retrieved june , . ^ kilbane, kevin. "touring production brings naturalist and author gene stratton-porter to life - news-sentinel.com". news-sentinel.com. archived from the original on may , . retrieved june , . ^ cities included lafayette, muncie, anderson, calumet, chesterton, elkhart, evansville, fort wayne, frankfort, franklin, goshen, greenfield, greensburg, huntington, kendalville, la porte, lawrenceburg, lebanon, merrillville, michigan city, north manchester, plymouth, porter, wabash, portland, rome city, and terre haute.[citation needed] ^ griset, rich. "after helping the science museum find its voice, the longtime artistic director of the carpenter science theatre company retires". style weekly. retrieved - - . ^ times-dispatch, holly prestidge richmond. ""a song in the wilderness" goes on without its star". richmond times-dispatch. retrieved june . ^ the play's director, jacqueline jones, assumed that gard, the actress’s widower, would cancel the performance; however, he asked jones to hold auditions for the stratton-porter role, the play’s only character. see tribune, andrew s. hughes south bend. "play about gene stratton-porter staged as tribute to its original star". south bend tribune. retrieved june . ^ "photos: "a song in the wilderness"". richmond times-dispatch. retrieved june . ^ "a song in the wilderness | cat theatre". www.cattheatre.com. ^ tribune, andrew s. hughes south bend. "play about gene stratton-porter staged as tribute to its original star". 
south bend tribune. retrieved june , . ^ "friends of the limberlost present "a song of the wilderness"". decatur daily democrat. retrieved june , . external links[edit] wikimedia commons has media related to gene stratton-porter. "our folk: porter family genealogy", albert d. hart jr. gene stratton-porter photo, ourfolk web works by gene stratton-porter at project gutenberg works by or about gene stratton-porter at internet archive works by gene stratton-porter at librivox (public domain audiobooks) works by gene stratton-porter at open library "gene stratton-porter memorial society", gene stratton-porter state historic site, rome city, indiana gene stratton-porter state historic site, facebook page gene stratton-porter, a girl of the limberlost, online text "after limberlost: gene stratton-porter's life in california" on youtube (a short documentary), produced by almost fairytales films "gene stratton-porter" on youtube (indiana bicentennial minute, ), indiana historical society "gene stratton-porter: voice of the limberlost" on youtube (a documentary; produced by wipb-tv), ball state university, libraries "historic footage of gene stratton-porter" on youtube, indiana state museum and historic sites "there is a memoir or a biography" on project gutenberg. 
this page was last edited on june , at : (utc).

falvey memorial library :: the collection of blogs published by falvey memorial library, villanova university

falvey library blogs

library staff decks the halls at ra fair august , library news: falvey memorial library, ra fair, residence hall, villanova university ... read more

content roundup – first two weeks – august august , blue electrode: sparking between silicon and paper: content roundup just a few new dime novels and story papers for your reading and research needs as the summer heat continues! also completed is the full and annotated transcription of the joseph mcgarrity collection minute book of the friends of irish: ... read more

meet new e-zborrow platform: reshare august , library news the new e-zborrow platform, reshare, allows falvey memorial library to improve service to our patrons by using the latest library technology.
the open source code gives libraries more customization options and allows them to more quickly adapt to ... read more

villanovan patrick tiernan offers lesson in resilience at the olympics august , library news: cross country, falvey memorial library, patrick tiernan, summer olympics, villanova university by shawn proctor every olympics is filled with storied rises to victory, when an athlete snatches the gold despite seemingly insurmountable odds. yet for every medal there’s another tale. last-second losses. injuries. and, for villanovan ... read more

foto friday: chair-ish the weekend august , library news: foto friday, summer sit back and relax, wildcats! enjoy these last few weeks of summer break! #fotofriday kallie stahl ’ ma is communication and marketing specialist at falvey memorial library. ... read more

welcome to falvey: emily horn joins resource management and description august , library news: emily horn, falvey staff, resource management & description emily horn recently joined resource management and description as resource management and description coordinator. helping to build and cultivate falvey library’s collection, horn assists with acquisitions, licensing, description, discovery, ... read more

audit analytics accounting & oversight august , library news: audit analytics, business, falvey memorial library, resources the library, in partnership with the villanova school of business, has added the accounting & oversight module to our basic audit analytics subscription. audit analytics structured data facilitate scholarly research on governance, shareholder ... read more

euromonitor (passport) trial august , library news, resources: euromonitor passport, falvey memorial library, resources, trial resources we currently have a trial to three new modules of euromonitor (passport): cities, industrial and mobility. at its core, euromonitor provides data and analysis by and for consumer goods and services markets, internationally.
our basic subscription ... read more

service alert—ezproxy upgrade: / august , library news: databases, ezproxy, falvey library, law library, service alert some falvey library and law library databases may be temporarily unavailable on wednesday, aug. , between : – : a.m., due to routine server maintenance. if you encounter a problem during this time, please keep trying; we will endeavor to ... read more

libraries – erin white: library technology, ux, the web, bikes, #rva

talk: using light from the dumpster fire to illuminate a more just digital world this february i gave a lightning talk for the richmond design group. my question: what if we use the light from the dumpster fire of to see an equitable, just digital world? how can we change our thinking to build the future web we need? presentation is embedded here; text of talk is below. […]

podcast interview: names, binaries and trans-affirming systems on legacy code rocks! in february i was honored to be invited to join scott ford on his podcast legacy code rocks!. i’m embedding the audio below. view the full episode transcript — thanks to trans-owned deep south transcription services! i’ve pulled out some of the topics we discussed and heavily edited/rearranged them for clarity. names in systems legal […]

trans-inclusive design at a list apart i am thrilled and terrified to say that i have an article on trans-inclusive design out on a list apart today. i have read a list apart for years and have always seen it as the site for folks who make websites, so it is an honor to be published there.
coming out as nonbinary at work this week, after years of working at vcu libraries, i have been letting my colleagues know that i’m nonbinary. response from my boss, my team, and my colleagues has been so positive, and has made this process so incredibly easy. i didn’t really have a template for a coming-out message, so ended up writing […]

what it means to stay seven years ago last month i interviewed for my job at vcu. i started work a few months later, assuming i’d stick around for a couple of years then move on to my next academic library job. instead i found myself signing closing papers on a house on my sixth work anniversary, having decided to […]

back-to-school mobile snapshot this week i took a look at mobile phone usage on the vcu libraries website for the first couple weeks of class and compared that to similar time periods from the past couple years. here’s some data from the first week of class through today. note that mobile is . % of web traffic. to round […]

recruiting web workers for your library in the past few years i’ve created a couple of part-time, then full-time, staff positions on the web team at vcu libraries. we now have a web designer and a web developer who’ve both been with us for a while, but for a few years it was a revolving door of hires. so let’s just say i’ve hired lots […]

easier access for databases and research guides at vcu libraries today vcu libraries launched a couple of new web tools that should make it easier for people to find or discover our library’s databases and research guides. this project’s goal was to help connect “hunters” to known databases and help “gatherers” explore new topic areas in databases and research guides. our web redesign task force […]

why this librarian supports the ada initiative this week the ada initiative is announcing a fundraising drive just for the library community. i’m pitching in, and i hope you will, too.
the ada initiative’s mission is to increase the status and participation of women in open technology and culture. the organization holds adacamps, ally workshops for men, and impostor syndrome trainings; and […]

a new look for search at vcu libraries this week we launched a new design for vcu libraries search (our instance of ex libris’ primo discovery system). the guiding design principles behind this project: mental models: bring elements of the search interface in line with other modern, non-library search systems that our users are used to. in our case, we looked to e-commerce websites […]

export of cryptography from the united states (from wikipedia, the free encyclopedia)

transfer from the united states to another country of devices and technology related to cryptography. export-restricted rsa encryption source code printed on a t-shirt made the t-shirt an export-restricted munition, as a freedom of speech protest against u.s. encryption export restrictions (back side).[ ] changes in the export law mean that it is no longer illegal to export this t-shirt from the u.s., or for u.s. citizens to show it to foreigners. export of cryptographic technology and devices from the united states was severely restricted by u.s. law until . the law was gradually eased until around , but some restrictions still remain today. since world war ii, many governments, including the u.s. and its nato allies, have regulated the export of cryptography for national security reasons, and, as late as , cryptography was on the u.s.
munitions list as auxiliary military equipment.[ ] due to the enormous impact of cryptanalysis in world war ii, these governments saw the military value in denying current and potential enemies access to cryptographic systems. since the u.s. and u.k. believed they had better cryptographic capabilities than others, their intelligence agencies tried to control all dissemination of the more effective crypto techniques. they also wished to monitor the diplomatic communications of other nations, including those emerging in the post-colonial period and whose position on cold war issues was vital.[ ] the first amendment made controlling all use of cryptography inside the u.s. illegal, but controlling access to u.s. developments by others was more practical; there were no constitutional impediments. accordingly, regulations were introduced as part of munitions controls which required licenses to export cryptographic methods (and even their description); the regulations established that cryptography beyond a certain strength (defined by algorithm and length of key) would not be licensed for export except on a case-by-case basis. this policy was also adopted elsewhere for various reasons. the development and public release of the data encryption standard (des) and asymmetric key techniques in the s, the rise of the internet, and the willingness of some to risk and resist prosecution, eventually made this policy impossible to enforce, and by the late s it was being relaxed in the u.s., and to some extent elsewhere (e.g., france). as late as , nsa officials in the u.s. were concerned that the widespread use of strong encryption would frustrate their ability to provide sigint regarding foreign entities, including terrorist groups operating internationally.
nsa officials anticipated that american encryption software backed by an extensive infrastructure, when marketed, was likely to become a standard for international communications.[ ] in , louis freeh, then the director of the fbi, said:

for law enforcement, framing the issue is simple. in this time of dazzling telecommunications and computer technology where information can have extraordinary value, the ready availability of robust encryption is essential. no one in law enforcement disputes that. clearly, in today's world and more so in the future, the ability to encrypt both contemporaneous communications and stored data is a vital component of information security. as is so often the case, however, there is another aspect to the encryption issue that if left unaddressed will have severe public safety and national security ramifications. law enforcement is in unanimous agreement that the widespread use of robust non-key recovery encryption ultimately will devastate our ability to fight crime and prevent terrorism. uncrackable encryption will allow drug lords, spies, terrorists and even violent gangs to communicate about their crimes and their conspiracies with impunity. we will lose one of the few remaining vulnerabilities of the worst criminals and terrorists upon which law enforcement depends to successfully investigate and often prevent the worst crimes. for this reason, the law enforcement community is unanimous in calling for a balanced solution to this problem.[ ]

history

cold war era

in the early days of the cold war, the u.s. and its allies developed an elaborate series of export control regulations designed to prevent a wide range of western technology from falling into the hands of others, particularly the eastern bloc. all export of technology classed as 'critical' required a license.
cocom was organized to coordinate western export controls. two types of technology were protected: technology associated only with weapons of war ("munitions") and dual-use technology, which also had commercial applications. in the u.s., dual-use technology export was controlled by the department of commerce, while munitions were controlled by the state department. since in the immediate post-wwii period the market for cryptography was almost entirely military, the encryption technology (techniques as well as equipment and, after computers became important, crypto software) was added to the united states munitions list on november , as "category xi - miscellaneous articles" and later "category xiii - auxiliary military equipment". the multinational control of the export of cryptography on the western side of the cold war divide was done via the mechanisms of cocom. by the s, however, financial organizations were beginning to require strong commercial encryption in the rapidly growing field of wired money transfer. the u.s. government's introduction of the data encryption standard in meant that commercial uses of high-quality encryption would become common, and serious problems of export control began to arise. generally these were dealt with through case-by-case export license request proceedings brought by computer manufacturers, such as ibm, and by their large corporate customers. pc era. encryption export controls became a matter of public concern with the introduction of the personal computer. phil zimmermann's pgp cryptosystem and its distribution on the internet in 1991 was the first major 'individual-level' challenge to controls on export of cryptography. the growth of electronic commerce in the s created additional pressure for reduced restrictions. videocipher ii also used des to scramble satellite tv audio.
in , non-encryption use of cryptography (such as access control and message authentication) was removed from export control with a commodity jurisdiction.[ ] in , an exception was formally added to the usml for non-encryption use of cryptography (and satellite tv descramblers), and a deal between the nsa and the software publishers association made 40-bit rc2 and rc4 encryption easily exportable using a commodity jurisdiction with special "7-day" and "15-day" review processes (which transferred control from the state department to the commerce department). at this stage western governments had, in practice, a split personality when it came to encryption; policy was made by the military cryptanalysts, who were solely concerned with preventing their 'enemies' acquiring secrets, but that policy was then communicated to commerce by officials whose job was to support industry. shortly afterward, netscape's ssl technology was widely adopted as a method for protecting credit card transactions using public-key cryptography. netscape developed two versions of its web browser. the "u.s. edition" supported full-size (typically 1024-bit or larger) rsa public keys in combination with full-size symmetric keys (secret keys) (128-bit rc4 or 3des in ssl 3.0 and tls 1.0). the "international edition" had its effective key lengths reduced to 512 bits and 40 bits respectively (rsa_export with 40-bit rc2 or rc4 in ssl 3.0 and tls 1.0).[ ] acquiring the 'u.s. domestic' version turned out to be sufficient hassle that most computer users, even in the u.s., ended up with the 'international' version,[ ] whose weak 40-bit encryption can currently be broken in a matter of days using a single computer. a similar situation occurred with lotus notes for the same reasons.
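the gulf between export-grade 40-bit keys and full-strength 128-bit keys can be seen with simple arithmetic. this is a back-of-the-envelope sketch: the trial rate below is an assumed round number for illustration, not a benchmark of any real attack.

```python
# Back-of-the-envelope comparison of export-grade vs. full-strength keyspaces.
# The trial rate is an assumption for illustration, not a measured figure.

def brute_force_years(key_bits: int, trials_per_second: float) -> float:
    """Worst-case years to exhaust a keyspace of 2**key_bits keys."""
    keyspace = 2 ** key_bits
    seconds = keyspace / trials_per_second
    return seconds / (365 * 24 * 3600)

RATE = 1e9  # assumed: one billion key trials per second

print(f"40-bit:  {brute_force_years(40, RATE):.6f} years")
print(f"128-bit: {brute_force_years(128, RATE):.3e} years")
```

at the assumed rate, a 40-bit keyspace falls in well under an hour, while a 128-bit keyspace takes on the order of 10^22 years; this is why the key-length thresholds in the export rules mattered so much in practice.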
legal challenges by peter junger and other civil libertarians and privacy advocates, the widespread availability of encryption software outside the u.s., and the perception by many companies that adverse publicity about weak encryption was limiting their sales and the growth of e-commerce, led to a series of relaxations in u.s. export controls, culminating in 1996 in president bill clinton signing an executive order transferring commercial encryption from the munitions list to the commerce control list. furthermore, the order stated that "the software shall not be considered or treated as 'technology'" in the sense of the export administration regulations. the commodity jurisdiction process was replaced with a commodity classification process, and a provision was added to allow export of 56-bit encryption if the exporter promised to add "key recovery" backdoors by the end of . in , the ear was changed to allow 56-bit encryption (based on rc2, rc4, rc5, des or cast) and 1024-bit rsa to be exported without any backdoors, and new ssl cipher suites were introduced to support this (rsa_export1024 with 56-bit rc4 or des). in , the department of commerce implemented rules that greatly simplified the export of commercial and open-source software containing cryptography, including allowing the key length restrictions to be removed after going through the commodity classification process (to classify the software as "retail") and adding an exception for publicly available encryption source code.[ ] current status. as of , non-military cryptography exports from the u.s. are controlled by the department of commerce's bureau of industry and security.[ ] some restrictions still exist, even for mass-market products, particularly with regard to export to "rogue states" and terrorist organizations.
militarized encryption equipment, tempest-approved electronics, custom cryptographic software, and even cryptographic consulting services still require an export license[ ](pp. – ). furthermore, encryption registration with the bis is required for the export of "mass market encryption commodities, software and components with encryption exceeding 64 bits" ( fr ). in addition, other items require a one-time review by, or notification to, bis prior to export to most countries.[ ] for instance, the bis must be notified before open-source cryptographic software is made publicly available on the internet, though no review is required.[ ] export regulations have been relaxed from pre-1996 standards, but are still complex.[ ] other countries, notably those participating in the wassenaar arrangement,[ ] have similar restrictions.[ ] u.s. export rules. u.s. non-military exports are controlled by the export administration regulations (ear), a short name for the u.s. code of federal regulations (cfr) title 15, chapter vii, subchapter c. encryption items specifically designed, developed, configured, adapted or modified for military applications (including command, control and intelligence applications) are controlled by the department of state on the united states munitions list. terminology. encryption export terminology is defined in ear part . .[ ] in particular: an encryption component is an encryption commodity or software (but not the source code), including encryption chips, integrated circuits, etc. encryption items include non-military encryption commodities, software, and technology. an open cryptographic interface is a mechanism which is designed to allow a customer or other party to insert cryptographic functionality without the intervention, help or assistance of the manufacturer or its agents.
ancillary cryptography items are those used primarily not for computing and communications, but for digital rights management; games, household appliances; printing, photo and video recording (but not videoconferencing); business process automation; industrial or manufacturing systems (including robotics, fire alarms and hvac); and automotive, aviation and other transportation systems. export destinations are classified by the ear supplement no. to part into four country groups (a, b, d, e) with further subdivisions;[ ] a country can belong to more than one group. for the purposes of encryption, groups b, d:1, and e:1 are important:
- b is a large list of countries that are subject to relaxed encryption export rules
- d:1 is a short list of countries that are subject to stricter export control. notable countries on this list include china and russia
- e:1 is a very short list of "terrorist-supporting" countries (as of , includes five countries; previously contained six countries and was also called "terrorist " or t- )
the ear supplement no. to part (commerce country chart) contains the table with country restrictions.[ ] if the line of the table that corresponds to the country contains an x in the reason-for-control column, the export of a controlled item requires a license, unless an exception can be applied. for the purposes of encryption, the following three reasons for control are important:
- ns1 (national security column 1)
- at1 (anti-terrorism column 1)
- ei (encryption items; currently the same as ns1)
classification. for export purposes each item is classified with an export control classification number (eccn) with the help of the commerce control list (ccl, supplement no. to the ear part ). in particular:[ ]
- 5a002: systems, equipment, electronic assemblies, and integrated circuits for "information security". reasons for control: ns1, at1.
- 5a992: "mass market" encryption commodities and other equipment not controlled by 5a002. reason for control: at1.
- 5b002: equipment for development or production of items classified as 5a002, 5b002, 5d002 or 5e002. reasons for control: ns1, at1.
- 5d002: encryption software. reasons for control: ns1, at1. this covers software used to develop, produce, or use items classified as 5a002, 5b002 or 5d002; supporting technology controlled by 5e002; software modeling the functions of equipment controlled by 5a002 or 5b002; and software used to certify software controlled by 5d002.
- 5d992: encryption software not controlled by 5d002. reason for control: at1.
- 5e002: technology for the development, production or use of equipment controlled by 5a002 or 5b002, or software controlled by 5d002. reasons for control: ns1, at1.
- 5e992: technology for the 5x992 items. reason for control: at1.
an item can be either self-classified, or a classification ("review") requested from the bis. a bis review is required for typical items to get the 5a992 or 5d992 classification. see also: bernstein v. united states; denied trade screening; junger v. daley; restrictions on the import of cryptography; freak; crypto wars. references:
^ "munitions t-shirt".
^ department of state -- international traffic in arms regulations, april , , sec .
^ kahn, the codebreakers, ch.
^ the encryption debate: intelligence aspects. see reference below, p.
^ statement of louis j. freeh, director, federal bureau of investigation, before the senate judiciary committee. july ,
^ "fortify for netscape". www.fortify.net. retrieved dec .
^ "january , archive of the netscape communicator . download page showing a more difficult path to download the -bit version". archived from the original on september , .
^ "revised u.s. encryption export control regulations". epic copy of document from u.s. department of commerce. january .
^ a b c d e commerce control list supplement no. to part , category , part - info. security
^ "u.s. bureau of industry and security - notification requirements for "publicly available" encryption source code". bis.doc.gov. archived from the original.
^ participating states, the wassenaar arrangement (archived at archive.today).
^ wassenaar arrangement on export controls for conventional arms and dual-use goods and technologies: guidelines & procedures, including the initial elements. the wassenaar arrangement, december .
^ "ear part " (pdf). archived from the original (pdf).
^ "ear supplement no. to part " (pdf). archived from the original (pdf).
^ "ear supplement no. to part " (pdf). archived from the original (pdf).
external links: crypto law survey. bureau of industry and security: an overview of the u.s. export regulations can be found in the licensing basics page. whitfield diffie and susan landau, the export of cryptography in the th and the st centuries, in karl de leeuw, jan bergstra, ed., the history of information security: a comprehensive handbook. elsevier, . encryption export controls. crs report for congress rl . congressional research service, the library of congress. the encryption debate: intelligence aspects. crs report for congress - f. congressional research service, the library of congress. encryption technology: congressional issues. crs issue brief for congress ib . congressional research service, the library of congress. cryptography and liberty: an international survey of encryption policy. electronic privacy information center, washington, dc. national research council, cryptography's role in securing the information society. national academy press, washington, d.c. (full text link is available on the page).
the evolution of us government restrictions on using and exporting encryption technologies (u), michael schwartzbeck, encryption technologies, circa , formerly top secret, approved for release by nsa with redactions september , .
retrieved from "https://en.wikipedia.org/w/index.php?title=export_of_cryptography_from_the_united_states&oldid= ". this page was last edited on january , at :  (utc).
algorand (cryptocurrency platform)
- code: algo
- original author(s): silvio micali
- developer(s): algorand, inc.
- written in: go
- development status: active
- white paper: https://arxiv.org/abs/ .
- initial release: april
- code repository: https://github.com/algorand
- website: https://www.algorand.com/
- ledger start: june
- block time: . sec
- block explorer: https://algoexplorer.io/
- circulating supply: , , , . algo (as of jun )
- supply limit: , , , algo
algorand is a blockchain-based cryptocurrency platform that aims to be secure, scalable, and decentralized.[ ] the algorand platform supports smart contract functionality,[ ] and its consensus algorithm is based on proof-of-stake principles and a byzantine agreement protocol.[ ][ ][ ] algorand's native cryptocurrency is called algo.[ ] history and development. the development of the algorand platform is overseen by algorand, inc., a private corporation based in boston. it was founded in by silvio micali, a professor at mit.[ ][ ] the algorand test network was launched to the public in april ,[ ] and the main network was launched in june .[ ] design: consensus algorithm. in the algorand network, the consensus algorithm is permissionless, and all users who hold an algo balance can participate. the consensus algorithm works in rounds, with each round made up of two phases. the first phase is the block proposal phase, during which candidate blocks are proposed; the second phase is the block finalization phase, during which a vote on the proposed blocks is taken.[ ] the first phase (the block proposal phase) uses proof-of-stake principles.
during this phase, a committee of users in the system is selected randomly, though in a manner that is weighted, to propose the new block. the selection of the committee is done via a process called "cryptographic sortition." in cryptographic sortition, there is no central authority that designates who the members of the committee are and then communicates that information across the network; rather, each user determines whether they are on the committee by locally executing a verifiable random function (vrf). if the vrf indicates that the user is chosen, it returns a cryptographic proof that can be used to verify that the user is on the committee. only the user knows whether they are on the committee until they send a message to other users indicating that they are. the likelihood that a given user will be on the committee is proportional to the "stake" (i.e., the number of algo tokens) held by that user.[ ][ ][ ] after determining that they are on the block proposal committee, a user builds a proposed block and disseminates it to the network for review/analysis during the second phase. the user includes the cryptographic proof from the vrf in their proposed block, which demonstrates that the user was in fact an eligible committee member.[ ][ ] in the second phase (the block finalization phase), a byzantine agreement protocol (called "ba⋆") is used to vote on the proposed blocks. in this second phase, cryptographic sortition as described above is again used to determine a committee; this second-phase voting committee will be different from the committee from the first phase, though it is possible that there could be overlap in membership between the two committees.
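the committee self-selection described above can be sketched in a few lines. this is a toy model, not algorand's implementation: a keyed sha-256 hash stands in for the verifiable random function, and a simplified stake-proportional threshold stands in for the real selection rule, so the returned "proof" is only illustrative.

```python
# Toy model of stake-weighted "cryptographic sortition". A real system uses a
# verifiable random function (VRF); here a keyed SHA-256 hash stands in, so
# the "proof" is merely illustrative and not actually verifiable by others.
import hashlib

def sortition(secret_key: bytes, round_seed: bytes, stake: int,
              total_stake: int, expected_committee_size: int):
    """Each user runs this locally; no central authority announces the
    committee. Returns (selected, proof)."""
    proof = hashlib.sha256(secret_key + round_seed).digest()
    draw = int.from_bytes(proof, "big") / 2 ** 256   # uniform in [0, 1)
    # Selection probability is proportional to the user's share of stake.
    p = expected_committee_size * stake / total_stake
    return draw < p, proof

# Example: a user holding 10% of all stake, expected committee of 5 members.
selected, proof = sortition(b"user-secret", b"round-42",
                            stake=100, total_stake=1000,
                            expected_committee_size=5)
```

because each user evaluates the function locally with their own secret, nobody can tell in advance who will be selected, which is the intuition behind the protocol's resistance to targeted attacks.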
when users have determined that they are in this second-phase voting committee, they analyze the proposed blocks they have received (this includes verifying that the blocks were in fact proposed by users from the first-phase committee) and vote on whether any of the blocks should be adopted. if the voting committee achieves consensus on a new block, then the new block is disseminated across the network.[ ][ ][ ] the algorand consensus algorithm possesses the characteristic of "player replaceability"; i.e., as noted above, membership in the different committees (in both the block proposal and block finalization phases) changes every time a phase is run. this protects users against targeted attacks, as an attacker will not know in advance which users are going to be in a committee.[ ] algorand is resilient against arbitrary network partitions, a property also known as asynchronous safety: two different blocks cannot reach consensus in the same round, i.e., it is mathematically guaranteed that algorand will not fork.[ ] asynchronous safety has also been formally verified by runtime verification inc.; compared to their previous verification models, this model also accounts for timing issues and adversary actions, e.g., when the adversary has control over message delivery.[ ] smart contracts. algorand supports two types of smart contracts: stateless smart contracts and stateful smart contracts. stateless smart contracts are intended for authorizing transactions; stateful smart contracts store data persistently and can be used for broader purposes.[ ] algorand smart contracts can be written in a programming language called transaction execution approval language (teal). teal is a bytecode-based stack language, with a programming interface for python called pyteal. while some smart contract programming models are turing-complete (for example, solidity is turing-complete), the algorand smart contracts model is not turing-complete.
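the idea of a stack-based approval language can be illustrated with a tiny evaluator. the opcodes and program encoding below are invented for illustration; real teal has a different instruction set and is compiled to bytecode, but the shape — push values, combine them, approve iff a single nonzero value remains — is the same.

```python
# Toy evaluator in the style of a stack-based approval language like TEAL.
# Opcodes and encoding are invented for illustration; this is not real TEAL.

def approve(program, txn):
    """Run a list of ops against a transaction dict; the transaction is
    approved iff exactly one nonzero value is left on the stack."""
    stack = []
    for op, *args in program:
        if op == "int":            # push a constant
            stack.append(args[0])
        elif op == "txn":          # push a transaction field
            stack.append(txn[args[0]])
        elif op == "<=":           # compare the top two stack values
            b, a = stack.pop(), stack.pop()
            stack.append(1 if a <= b else 0)
        else:
            raise ValueError(f"unknown op {op}")
    return len(stack) == 1 and stack[0] != 0

# Stateless-contract flavor: approve only payments of at most 1000 units.
program = [("txn", "amount"), ("int", 1000), ("<=",)]
print(approve(program, {"amount": 500}))   # small payment
print(approve(program, {"amount": 5000}))  # too large
```

a non-turing-complete design like this one has no unbounded loops, so the cost of evaluating an approval program is easy to bound in advance.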
the algorand smart contracts model does support transaction atomicity.[ ] in some other blockchain systems, smart contracts are used to define user-defined assets; for example, in ethereum, smart contracts implement the erc-20 and erc-721 interfaces to define new assets. in algorand, in contrast, user-defined assets are supported natively, and algorand smart contracts are able to manipulate user-defined assets (for example, by transferring ownership of given amounts of them) using built-in transaction types.[ ] use cases. in march , the società italiana degli autori ed editori (siae), the italian copyright collecting agency, uploaded over four million non-fungible tokens (nfts) to the algorand blockchain, representing copyrights in works produced by siae's members.[ ] in february , ditto music announced the launch of a project on the algorand network called opulous; opulous will be a decentralized finance (defi) loan pool, where loans will be guaranteed against artists' past streaming revenues and artists' copyrights will be held as collateral.[ ] in june , republic, a company that facilitates crowdfunding campaigns for startups and small-to-medium businesses (smbs), issued a profit-sharing token on the algorand platform.[ ] in march , it was announced that the marshall islands would issue their "sovereign" digital currency using the algorand network as a backbone.[ ] in february , planetwatch, a spinoff of cern, announced a program involving the deployment of a global network of air quality sensors, where the sensors record measurement data onto the algorand blockchain.[ ] in , two stablecoins, tether and usd coin (usdc), were launched on the algorand network.[ ][ ] in may , securitize, a company that enables investment in private capital markets, launched two cryptocurrency yield funds and announced they would be recorded using algorand's blockchain.[ ] references: ^ a b c d e f lepore, cristian; ceria, michela; visconti, andrea; rao, udai pratap; shah,
kaushal arvindbhai; zanolini, luca ( october ). "a survey on blockchain consensus with a performance comparison of pow, pos and pure pos". mathematics. ( ): . doi: . /math . ^ a b c d e bartoletti, massimo ( ). "a formal model of algorand smart contracts" (pdf). financial cryptography and data security . arxiv: . . ^ a b c d xiao, y.; zhang, n.; lou, w.; hou, y. t. ( january ). "a survey of distributed consensus protocols for blockchain networks". ieee communications surveys and tutorials. ( ): – . arxiv: . . doi: . /comst. . . issn  - x. s2cid  . ^ a b c d wan, shaohua; li, meijun; liu, gaoyang; wang, chen ( - - ). "recent advances in consensus protocols for blockchain: a survey". wireless networks. ( ): – . doi: . /s - - - . issn  - . ^ zhao, helen ( - - ). "bitcoin and blockchain consume an exorbitant amount of energy. these engineers are trying to change that". cnbc. ^ "mit professor debuts high-speed blockchain payments platform algorand". venturebeat. ^ "algo vc fund raises $ m to fast-track its own cryptocurrency". www.bizjournals.com. ^ "bahrain's shariah review bureau certifies blockchain firm algorand as shariah compliant". crowdfund insider. ^ chen, jing; micali, silvio ( - - ). "algorand: a secure and efficient distributed ledger". theoretical computer science. : – . doi: . /j.tcs. . . . issn  - . ^ alturki, musab a.; chen, jing; luchangco, victor; moore, brandon; palmskog, karl; peña, lucas; roşu, grigore ( ). "towards a verified model of the algorand consensus protocol in coq". formal methods. fm international workshops. lecture notes in computer science. pp.  – . arxiv: . . doi: . / - - - - _ . isbn  - - - - . s2cid  . ^ "il diritto d'autore diventa asset digitale su blockchain" ["copyright becomes a digital asset on blockchain"]. il sole 24 ore (in italian). ^ "ditto launches opulous platform to help artists access funding without the need for traditional banks". music business worldwide.
^ "crypto push by republic platform sparked by new token". bloomberg.com. ^ "unlocking the potential of blockchain technology". mit news | massachusetts institute of technology. ^ "blockchain al servizio dell'ambiente: un registro pubblico di qualità dell'aria" ["blockchain at the service of the environment: a public air-quality register"]. il sole 24 ore (in italian). ^ castillo, michael del. "visa partners with ethereum digital-dollar startup that raised $ million". forbes. ^ confidential, crypto. "stimulus checks from a crypto exchange; bitcoin rebound". forbes. ^ dantes, damanick. "securitize to issue digital asset securities for yield funds". coindesk.
retrieved from "https://en.wikipedia.org/w/index.php?title=algorand_(cryptocurrency_platform)&oldid= ". this page was last edited on august , at :  (utc).
project muse - chucking the checklist: a contextual approach to teaching undergraduates web-site evaluation. marc meola. portal: libraries and the academy, johns hopkins university press, volume , number , july , pp. - , doi . /pla. .
abstract: this paper criticizes the checklist model approach (authority, accuracy, objectivity, currency, coverage) to teaching undergraduates how to evaluate web sites. the checklist model rests on faulty assumptions about the nature of information available through the web, mistaken beliefs about student evaluation skills, and an exaggerated sense of librarian expertise in evaluating information. the checklist model is difficult to implement in practice and encourages a mechanistic way of evaluating that is at odds with critical thinking. a contextual approach is offered as an alternative. a contextual approach uses three techniques: promoting peer- and editorially-reviewed resources, comparison, and corroboration. the contextual approach promotes library resources, teaches information literacy, and encourages reasoned judgments of information quality.
unicorn (finance)
in business, a unicorn is a privately held startup company valued at over $1 billion.[ ][ ] the term was coined in 2013 by venture capitalist aileen lee, who chose the mythical animal to represent the statistical rarity of such successful ventures.[ ][ ][ ][ ] decacorn is a word used for those companies valued at over $10 billion,[ ] while hectocorn is used for such a company valued at over $100 billion. according to cb insights, there are more than unicorns as of june .[ ] the largest unicorns include bytedance, stripe and spacex. according to cb insights, there are now decacorns in the world, including stripe, spacex, and klarna.[ ]
history

when aileen lee originally coined the term "unicorn" in 2013, there were only thirty-nine companies that were considered unicorns.[ ] a study by the harvard business review determined that startups founded between and were growing in valuation twice as fast as startups founded between and .[ ] in , u.s. companies became unicorns, resulting in private companies worldwide valued at $ billion or more.[ ]

reasons behind the rapid growth of unicorns

fast-growing strategy

according to academics, investors and venture capital firms are adopting the get big fast (gbf) strategy for startups, also known as blitzscaling. gbf is a strategy in which a startup tries to expand at a high rate through large funding rounds and price cutting to gain an advantage in market share and push away rival competitors as fast as possible.[ ] the rapid returns from this strategy seem attractive to all parties involved, though there is always the cautionary note of the dot-com bubble of 2000 and the lack of long-term sustainability in value creation of the companies born from the internet age.

company buyouts

many unicorns were created through buyouts by large public companies. in a low-interest-rate and slow-growth environment, many companies like apple, facebook, and google focus on acquisitions instead of capital expenditures and the development of internal investment projects.[ ] some large companies would rather bolster their businesses by buying out established technology and business models than create them themselves.
increase of private capital available

the average age of a technology company before it goes public is years, as opposed to an average life of four years back in .[ ] this new dynamic stems from the increased amount of private capital available to unicorns and the passing of the u.s. jumpstart our business startups (jobs) act in 2012, which increased by a factor of four the number of shareholders a company can have before it has to disclose its financials publicly. the amount of private capital invested in software companies increased three-fold from to .[ ]

prevent ipo

through many funding rounds, companies do not need to go through an initial public offering (ipo) to obtain capital or a higher valuation; they can just go back to their investors for more capital. ipos also run the risk of devaluation of a company if the public market thinks a company is worth less than its investors do.[ ] two recent examples of this situation were square, best known for its mobile payments and financial services business, and trivago, a popular german hotel search engine, both of which were priced below their initial offer prices by the market.[ ][ ] this was because of the severe over-valuation of both companies in the private market by investors and venture capital firms. the market did not agree with the companies' valuations, and therefore dropped the price of each stock from its initial ipo range. investors and startups also do not want to deal with the hassle of going public because of increased regulation: rules like the sarbanes–oxley act, implemented following several high-profile u.s. bankruptcy cases, impose more stringent requirements that many of these companies want to avoid.[ ]

technological advancements

startups are taking advantage of the flood of new technology of the last decade to obtain unicorn status.
with the explosion of social media and the ability to reach millions of users at massive economies of scale, startups can expand their businesses faster than ever.[ ] new innovations in technology, including mobile smartphones, p2p platforms, and cloud computing, combined with social media applications, have aided the growth of unicorns.

valuation

the valuations that lead these start-up companies to become unicorns and decacorns are unique compared to those of more established companies. a valuation for an established company stems from past years' performance, while a start-up company's valuation is derived from its growth opportunities and its expected long-term development in its potential market.[ ] valuations for unicorns usually come from funding rounds of large venture capital firms investing in these start-up companies. another significant valuation event for start-ups is when a much larger company buys out a unicorn, conferring that valuation. recent examples are when unilever bought dollar shave club[ ] and when facebook bought instagram[ ] for $1 billion, effectively turning dollar shave club and instagram into unicorns.
bill gurley, a partner at venture capital firm benchmark, predicted in march 2015 and earlier that the rapid increase in the number of unicorns may "have moved into a world that is both speculative and unsustainable", leaving in its wake what he terms "dead unicorns".[ ][ ][ ] he also said that the main reason for unicorns' valuations is the "excessive amount of money" available to them.[ ] similarly, william danoff, who manages the fidelity contrafund, said unicorns might be "going to lose a bit of luster" due to their more frequent occurrence and several cases of their stock price being devalued.[ ] research by stanford professors published in suggests that unicorns are overvalued by an average of %.[ ][ ]

valuation of high-growth companies

for high-growth companies looking for the highest valuations possible, it comes down to potential and opportunity. when investors in high-growth companies are deciding whether to invest, they look for signs of a home run to make exponential returns on their investment, along with the right personality that fits the company.[ ] to give such high valuations in funding rounds, venture capital firms have to believe in the vision of both the entrepreneur and the company as a whole.
they have to believe the company can evolve from its unstable, uncertain present standing into a company that can generate and sustain moderate growth in the future.[ ]

market sizing

to judge the potential future growth of a company, there needs to be an in-depth analysis of its target market.[ ] when a company or investor determines its market size, there are a few steps to consider to figure out how large the market really is:[ ] defining the sub-segment of the market (no company can target 100% market share, also known as monopolization); top-down market sizing;[ ] bottom-up analysis;[ ] and competitor analysis. after the market is reasonably estimated, a financial forecast can be made based on the size of the market and how much the company thinks it can grow in a certain time period.

estimation of finances

to properly judge the valuation of a company after the revenue forecast is completed, a forecast of the operating margin, an analysis of needed capital investments, and the return on invested capital need to be completed to judge the growth and potential return to investors.[ ] assumptions about how far a company can grow need to be realistic, especially when trying to get venture capital firms to give the valuation a company wants. venture capitalists know the payout on their investment will not be realized for another five to ten years, and they want to make sure from the start that financial forecasts are realistic.[ ]

working back to the present

with the financial forecasts set, investors need to know what the company should be valued at in the present day. this is where more established valuation methods become more relevant.
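the three steps above (market sizing, estimation of finances, and working back to the present) can be sketched in miniature. the following is an illustrative python sketch only: every function name and every figure in it is an invented assumption for demonstration, not data from this article.

```python
# illustrative sketch of the valuation pipeline described above:
# bottom-up market sizing -> revenue forecast -> discount back to today.
# all figures are invented assumptions, not data from the article.

def bottom_up_market_size(target_customers, adoption_rate, revenue_per_customer):
    """bottom-up sizing: reachable customers x adoption rate x annual spend."""
    return target_customers * adoption_rate * revenue_per_customer

def forecast_revenues(year1_revenue, growth_rate, years):
    """simple compounding revenue forecast over the planning horizon."""
    return [year1_revenue * (1 + growth_rate) ** y for y in range(years)]

def discounted_cash_flow(cash_flows, discount_rate, terminal_value=0.0):
    """'working back to the present': discount each forecast year's cash flow."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    return pv + terminal_value / (1 + discount_rate) ** len(cash_flows)

# hypothetical inputs: 2m reachable customers, 5% adoption, $120/year each
market = bottom_up_market_size(2_000_000, 0.05, 120.0)

# five-year revenue forecast growing 50% a year off a $3m base,
# with an assumed 20% operating margin
revenues = forecast_revenues(year1_revenue=3e6, growth_rate=0.5, years=5)
cash_flows = [r * 0.2 for r in revenues]

# venture-style discount rate (30%) plus an assumed exit/terminal value
valuation = discounted_cash_flow(cash_flows, discount_rate=0.30,
                                 terminal_value=5e7)
```

the unusually high discount rate stands in for the risk that the forecast never materializes; in practice investors would also sanity-check such a figure against comparable companies and comparable transactions.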
this includes the three most common valuation methods:[ ] discounted cash flow analysis, the market comparable method, and comparable transactions. investors derive a final valuation from these methods, and the amount of capital they offer for a percentage of equity in a company becomes the final valuation for a startup. competitor financials and past transactions also play an important part in providing a basis for valuing a startup and finding a correct valuation for these companies.

trends

sharing economy

the sharing economy, also known as "collaborative consumption" or the "on-demand economy", is based on the concept of sharing personal resources. this trend of sharing resources has made three of the top five largest unicorns (uber, didi, and airbnb) some of the most valuable startups in the world. the economic trends of the s pushed consumers to be more conservative with spending, and the sharing economy reflected this.[ ]

e-commerce

e-commerce and the innovation of the online marketplace have slowly been displacing the need for physical store locations. a prime example is the decline of malls within the united states, the sales of which declined from $ . billion in to $ . billion in .[ ] the emergence of e-commerce companies like amazon and alibaba (both unicorns before they went public) has decreased the need for physical locations to buy consumer goods. many large corporations have seen this trend coming and have tried to adapt. walmart in 2016 bought jet.com, an american e-commerce company, for $3.3 billion to try to adapt to consumer preferences.[ ]

innovative business model

in support of the sharing economy, unicorns and successful startups have built an operating model described as "network orchestrators".[ ] in this business model, there is a network of peers creating value through interaction and sharing.
network orchestrators may sell products/services, collaborate, share reviews, and build relations through their businesses. examples of network orchestrators include all sharing economy companies (i.e. uber, airbnb), companies that let consumers share information (i.e. tripadvisor, yelp), and peer-to-peer or business-to-person selling platforms (i.e. amazon, alibaba).

data (as of march )

number of unicorns:
total combined valuation of unicorns: $ trillion
total amount of capital raised: $ . billion
number of new tech unicorns in : (down % yoy)
total number of new unicorns in : [ ][ ]

see also

list of unicorn startup companies
list of venture capital firms
unicorn bubble
valuation (finance)
venture capital financing

references

^ hirst, scott; kastiel, kobi ( - - ). "corporate governance by index exclusion". boston university law review. ( ): . ^ cristea, ioana a.; cahan, eli m.; ioannidis, john p. a. (april ). "stealth research: lack of peer‐reviewed evidence from healthcare unicorns". european journal of clinical investigation. ( ): e . doi: . /eci. . issn  - . pmid  . ^ rodriguez, salvador (september , ). "the real reason everyone calls billion-dollar startups 'unicorns'". international business times. ibt media inc. retrieved january , . ^ lee, aileen ( ). "welcome to the unicorn club: learning from billion-dollar startups". techcrunch. retrieved december . companies belong to what we call the 'unicorn club' (by our definition, u.s.-based software companies started since and valued at over $ billion by public or private market investors)... about . percent of venture-backed consumer and enterprise software startups ^ griffith, erin & primack, dan ( ). "the age of unicorns". fortune. retrieved december . subtitle: the billion-dollar tech startup was supposed to be the stuff of myth. now they seem to be... everywhere. ^ chohan, usman ( ). "it's hard to hate a unicorn, until it gores you". the conversation. retrieved october .
^ roberts, daniel & nusca, andrew ( ). "the unicorn list". fortune. retrieved december . ^ a b c "the global unicorn club". cb insights. retrieved - - . ^ fan, jennifer s. "regulating unicorns: disclosure and the new private economy." bcl rev. ( ): . ^ "how unicorns grow". harvard business review. retrieved - - . ^ levi sumagaysay (october , ). "venture capital: bay area's lucid motors, zoox, uber scored the most in third quarter". mercury news. retrieved october , . ^ sterman, j. d., henderson, r., beinhocker, e. d., & newman, l. i. ( ). getting big too fast: strategic dynamics with increasing returns and bounded rationality. management science, ( ), - . ^ a b c howe, neil. "what's feeding the growth of the billion-dollar 'unicorn' startups?". forbes. retrieved - - . ^ "to fly, to fall, to fly again". the economist. - - . issn  - . retrieved - - . ^ a b "grow fast or die slow: why unicorns are staying private". mckinsey & company. retrieved - - . ^ demos, telis; driebusch, corrie ( - - ). "square's $ -a-share price deals blow to ipo market". wall street journal. issn  - . retrieved - - . ^ balakrishnan, anita ( - - ). "trivago ipo opens at $ . after pricing at $ , below its expected range". cnbc. retrieved - - . ^ a b c d "valuing high-tech companies". mckinsey & company. retrieved - - . ^ "unilever buys dollar shave club for $ billion". fortune. retrieved - - . ^ raice, shayndi; ante, spencer e. ( - - ). "insta-rich: $ billion for instagram". wall street journal. issn  - . retrieved - - . ^ winkler, rolfe ( ). "bill gurley sees silicon valley on a dangerous path". the wall street journal. retrieved december . subtitle: venture capitalist says companies hurt themselves by trying to delay going public ^ blodget, henry ( ). "tech: how to survive great depression . without firing everyone". business insider. retrieved december .
it seems every serious venture capital firm has now had a chat with its portfolio companies about how it[']s time to fire people... vc-extraordinaire bill gurley's benchmark has had the same chat with its companies, but bill tells pehub that there's actually an alternative to canning half your company: move to san jose ^ griffith, erin ( ). "bill gurley predicts 'dead unicorns' in startup-land this year". fortune. retrieved december . subtitle: a crash would affect more than just startups. ... bill gurley, the prominent investor behind uber and snapchat, has been sounding the tech bubble alarm for months now. he's preached about the dangerous appetite for risk in the market, the alarmingly high burn rates and the excess of capital sloshing around in silicon valley. “there is no fear in silicon valley right now,” he said. “a complete absence of fear.” he added that more people are employed by money-losing companies in silicon valley than ever before. will there be a crash? “i do think you’ll see some dead unicorns this year,” he said, using the term used to describe startups with valuations higher than $ billion. ^ rob price ( ). "legendary investor bill gurley says that there's a 'systematic problem in silicon valley' because it's too easy to get cash". business insider. retrieved march . there's so much easy money in the tech industry, entrepreneurs can afford not to be accountable to their investors. that "excessive amount of money," he says, can inflate a startup's valuation — even if they don't deserve it. ^ reuters ( december ). fidelity star danoff grows cautious about unicorn phenomenon, cnbc.com, accessed jan ^ gornall and strebulaev ( ). "squaring venture capital valuations with reality". stanford university working paper. ssrn  . we develop a valuation model for venture capital-backed companies and apply it to u.s. unicorns -- private companies with reported valuations above $ billion. 
we value unicorns using financial terms from legal filings and find reported unicorn post-money valuations average % above fair value, with being more than % above. ^ sorkin, andrew ( ). "how valuable is a unicorn? maybe not as much as it claims to be". new york times. retrieved march . the average unicorn is worth half the headline price tag that is put out after each new valuation. ^ a b macmillan, i. c., siegel, r., & narasimha, p. s. ( ). criteria used by venture capitalists to evaluate new venture proposals. journal of business venturing, ( ), - . ^ zhuo, tx ( - - ). " strategies to effectively determine your market size". entrepreneur. retrieved - - . ^ a b "market sizing: is there a market size formula? | b b international". b b international. retrieved - - . ^ "valuation methods | venture valuation". www.venturevaluation.com. retrieved - - . ^ newlands, murray (july , ). "the sharing economy: why it works and how to join". forbes. retrieved march , . ^ ho, ky trang. "how to profit from the death of malls in america". forbes. retrieved - - . ^ nassauer, sarah ( - - ). "wal-mart to acquire jet.com for $ . billion in cash, stock". wall street journal. issn  - . retrieved - - . ^ "rise of the unicorns". zinnov thoughts. - - . retrieved - - . ^ lunden, ingrid. "cb insights: , tech exits in , 'unicorn births' down %". techcrunch. retrieved - - .

external links

"the complete list of unicorn companies". cb insights.
the professor's house

author: willa cather
cover artist: c. b. falls
country: united states
language: english
publisher: alfred a. knopf
media type: print (hardcover, paperback)

the professor's house is a novel by american novelist willa cather. published in 1925, the novel was written over the course of several years.
cather first wrote the centerpiece, “tom outland's story,” and then later wrote the two framing chapters “the family” and “the professor.”[ ]

plot summary

when professor godfrey st. peter and his wife move to a new house, he becomes uncomfortable with the route his life is taking. he keeps his dusty study in the old house in an attempt to hang on to his old life. the marriages of his two daughters have removed them from the home and added two new sons-in-law, precipitating a mid-life crisis that leaves the professor feeling as though he has lost the will to live because he has nothing to look forward to. the novel initially addresses the professor's interactions with his new sons-in-law and his family, while continually alluding to the pain they all feel over the death of tom outland in the great war. outland was not only the professor's student and friend, but the fiancé of his elder daughter, who is now living off the wealth created by the "outland vacuum." the novel's central section turns to outland, and recounts in first person the story of his exploration of an ancient cliff city in new mexico. the section is a retrospective narrative remembered by the professor. in the final section, the professor, left alone while his family takes an expensive european tour, narrowly escapes death from a gas leak in his study and finds himself strangely willing to die. he is rescued by the old family seamstress, augusta, who has been his staunch friend throughout. he resolves to go on with his life.

characters

godfrey st. peter - also known as the professor, the novel's protagonist.
he is a fifty-two-year-old man of mixed descent: “canadian french on one side, and american farmers on the other”.[ ] he is described by his wife as growing “better-looking and more intolerant all the time”.[ ] he is a professor of history at hamilton university and his book is entitled spanish adventures in north america. godfrey's name comes from godfrey of boulogne, the conqueror who took jerusalem,[ ] while st. peter is the rock on which the roman church was built; st. peter is writing about pioneers when he himself is an intellectual pioneer, and every bit of his name comes from famous pioneers in history. lillian st. peter - the professor's status-oriented wife. she is described as “occupied with the future” and adaptable.[ ] most of her involvement in the novel is to act as a contrast to the professor and show the distance between his interests and his family's. their relationship is described as happy but dependent on her inheritance. she tells the professor “'one must go on living, godfrey. but it wasn't the children who came between us.' there was something lonely and forgiving in her voice, something that spoke of an old wound, healed and hardened and hopeless”.[ ] augusta - the family seamstress and friend of st. peter. she is described as being “a reliable, methodical spinster, a german catholic and very devout”.[ ] rosamond - st. peter's elder daughter and wife of louie marsellus. she was originally engaged to tom outland, and he left everything to her when he died. she is now obsessed with her appearance and with having all the finest things, likely because louie showers her with extravagance. the professor admits that “he didn't in the least understand” her.[ ] kathleen - st. peter's younger daughter and wife of scott mcgregor. she is sweet and honest and is one of the more genuine characters in the novel.
the professor says that “the only unusual thing about kitty … is that she doesn't think herself a bit unusual” and that she “has a spark of something different”.[ ] louie marsellus - rosamond's husband and executor of tom outland's patents, from which he amassed a fortune; he is now building a house and memorial to tom, called outland, where he and rosamond will live. he is generous and very loving to rosamond. the professor says that he is “perfectly consistent. he's a great deal more generous and public-spirited than i am, and my preferences would be enigmatical to him”.[ ] nevertheless, marsellus is named after the french monarch and the roman general who fought hannibal, and the last part of his name, 'sellus', corroborates (mainly) scott's idea that louie is only interested in materialism. scott mcgregor - kathleen's husband. they became engaged soon after rosamond's engagement to louie. he is a journalist and writes a daily prose poem for the two to live off. the professor describes him as “having a usual sort of mind” but says that “he trusted him”.[ ] scott and kathleen are portrayed as truly loving each other. tom outland - once st. peter's student and rosamond's fiancé before his death, the story focuses on his memory. the central piece of the chapter “tom outland's story” is tom's own account of his adventures in the american southwest, investigating a cliff city's remains in the desert while working as a rancher. it is through these stories and his goodness that the st. peter family fell in love with him and remembers him fondly.

major themes

the novel explores many contrasting ideas. indeed, in many respects the novel deals in opposites, variously conceived: marsellus vs. outland, kitty vs. rosamond, the quixotic vs. the pragmatic, the old vs. the new, the idea of the professor as a scholar vs. his family relations, indian tribes vs. the contemporary world of the 1920s, and the opposing social poles of the professor vs. lillian.
those opposites are not always clear-cut. considering the ending, the novel can be viewed as devoid of a clear moral imperative. similarly, the comparisons between the modern world of sections iii and i contrast with tom outland's natural world in section ii. yet the confused judgments of the characters block these comparisons and obscure clear morals: tom both elevates and appropriates nature, and the unsupported conclusions of father duchene pervert the true historical facts of the mesa culture. he assumes 'mother eve' was murdered for infidelity to her husband, but this sharply contrasts with tom's view of the mesa as an idyllic space away from 'the dirty devices of the world'.[ ] accordingly, the professor's house is generally analyzed as a critique of modernity—the marselluses are consumed by the latest fashions, mrs. st. peter transfers her old love for her husband to a passion for her sons-in-law, and science and the modern world corrupt st. peter's ideals of history and nature. yet it is a failure to embrace modernity that nearly kills the professor and brings him to the realization of his need for change. in his own speech, the knowledgeable professor puts forth numerous contradictions. in front of his lecture-hall students he criticizes science for only making humans comfortable, yet with his daughter he lauds the promise of what science can do for man (crane), and its superior value to money: “in hamilton the correspondence between inner and outer has been completely destroyed: the dress-forms are deceptive; rosamond's physical beauty clothes a spiritual emptiness; louie's loud exterior covers an inner capacity for love and generosity. in hamilton the failure of inner and outer to cohere leads to misunderstandings and to the characters' inability to make meaningful contact with one another".[ ]

nationalism

the professor's house was published in 1925, in post-war america. in a similar fashion to f.
scott fitzgerald's the great gatsby, cather narrates a story about the moral decline of a money-driven society. tom displays an emersonian understanding of national identity. his sense of americanness is connected to the land and its beauty, and he believes in a collective possession of this land and all of its history for all americans. his anger at roddy's sale of the native american artifacts to the german stems from a belief that they were a piece of american history, that they were of the land, and therefore nobody had the right to sell them, much less to a non-american. tom's ultimate experience of connection to an american identity comes during his night on the mesa alone after his confrontation with roddy, when he discovers “the mesa was no longer an adventure, but a religious emotion” and, “it was possession”.[ ] louie's sense of national identity, in contrast, centers on money and the economic greatness of the country. he spends liberally the income derived from tom's advances in engineering. louie wears the source of his wealth proudly—the fact that his livelihood is derived from his wife's deceased fiancé does not create tension between husband and wife nor between the couple and society. his announcement, “we have named our place for tom outland, a brilliant young american scientist and inventor, who was killed in flanders, fighting with the foreign legion, the second year of the war, when he was barely thirty years of age,” displays his pride in and respect for his benefactor, and his recognition that tom's loyalty to the nation has brought louie the monetary success he enjoys is representative of his understanding that america's economic success now takes precedence in defining the country and its people.[ ] the professor is caught between the worldviews of tom outland and louie marsellus. he is resistant to change, idealistically holding onto tom's memory and an emersonian ideality that impugns material acquisitiveness.
as outland’s good friend and mentor, st. peter feels it is his responsibility to make sure tom’s will is properly executed. in this endeavor, he is torn between his love for tom and his love for his daughter rosamond, both of whom, the professor believes, have different views on how the money should be spent. when mrs. crane asks for his help in obtaining compensation for her husband for the patent on which he worked intimately with outland, the professor says, “heaven knows i’d like to see crane get something out of it, but how? how? i’ve thought a great deal about this matter, and i’ve blamed tom for making that kind of will”.[ ] on the one hand, he is digging his heels into the ground, resisting the shift from a love of the land to a love of its fruits, but he also has a sense of obligation which makes it difficult for him to ignore the role money, particularly tom’s money, plays in his relationships and social life. cather’s endorsement of one worldview over another is debatable, as has been demonstrated by various critics. walter benn michaels suggests that cather sides with tom outland, in that the poetry of “the ‘picture’ of the cliff-dwellers’ tower, ‘rising strong, with calmness and courage’…marks in cather the emergence of culture not only as an aspect of american identity but as one of its determinants”.[ ] from this perspective, outland is cather’s voice in the novel, advocating the close ties to the landscape as an expression of national identity. contrarily, sarah wilson posits that cather is instead critical of tom’s nostalgia. 
“the cliff dwellings of the blue mesa once belonged to a now vanished culture, and no living native american population has an indisputable claim on them…how, the novel queries, can a nation or individuals engage the history of a culturally and temporally other people?” however, wilson does concede that “the america of which tom outland speaks, the nation that treasures its ancient southwestern heritage, at least allows for unique ways of being american”.[ ]

critical trends and reception

willa cather was critically neglected for the better part of the 20th century, but interest in her work was revived in the 1970s with the rise of the feminist movement. although many of her novels have since been incorporated into the canon, critics have largely ignored the professor's house, passing it over as “morally and psychologically unachieved”.[ ] as a reason for this disparagement, critics often cite the “broken” format of the book, rebuking its structure as unnecessary, or they cite the ambivalent depiction of the professor's psyche. the reader is unsure how to regard the professor's demands for solitude and his entrapment in the past. he's a family man and a university man, but the professor's conflict reaches its crux when he surrenders "local community for the nostalgic national ideal".[ ] a.s. byatt calls the professor's house cather's "masterpiece... almost perfectly constructed, peculiarly moving, and completely original".[ ]

form

the professor's house has been criticized as “fragmentary and inconclusive” because of the way the middle section, “tom outland's story,” fractures the surrounding narrative. j. schroeter presents the most common critical view of the novel's structure in his essay “willa cather and the professor's house”: "book ii is the 'turquoise' and books i and iii are the 'dull silver'. the whole novel, in other words, is constructed like the indian bracelet.
it is not hard to see that willa cather wants to draw an ironic contrast not only between two pieces of jewelry but between two civilizations, between two epochs, and between two men, marcellus [sic] and outland, who symbolize these differences".[ ] some critics, however, have analyzed the novel’s structure in light of the sonata—equating the novel with either a complete, three-movement sonata, or a single sonata broken up into exposition, development and recapitulation.[ ] other critics, such as sarah wilson, cite the dutch painting style, which cather references in her correspondence, as a way of explaining the novel’s theme and layout. dutch paintings provide a sense of the context beyond the actual objects presented. they consist of crowded interiors and, in cather’s words, “a square window, open…the feeling of the sea that one got through those square windows was remarkable, and gave me a sense of the fleets of dutch ships that ply quietly on all the waters of the globe—to java, etc.” applied to the professor’s house, books i and iii serve as the overstuffed dutch interior, while “tom outland’s story,” with its more open setting and voice, functions as the open window.[ ] queer readings in recent years a queer reading of the professor’s house has emerged. this reading centers on the professor’s relationship with tom, as well as tom’s relationship with his idolized friend roddy. through tom’s youthful influence, the professor achieves a sort of procreation—his work comes forth more easily and fluidly. “tom represents the professor’s need to live with delight.” for the professor, tom’s loss also represents his forgoing of homoerotic desire and, along with it, a life “without delight... without joy, without passionate grief”.[ ] tom and roddy share a deeply intimate experience of discovery.
tom views roddy’s selling of the find as a betrayal, and they experience a split with characteristics of a romantic rift.[ ][ ] references ^ thacker, robert. willa cather: leading american author - archived - - at the wayback machine. . december , . ^ cather, willa. the professor's house. vintage classics: new york, . . ^ cather, . ^ a b byatt, a. s. (december , ). "american pastoral". the guardian. retrieved may , . ^ a b cather, . ^ cather, . ^ cather, . ^ a b cather, . ^ cather, . ^ a b c hart, clive. "'the professor's house': a shapely story". the modern language review, vol. , no. (april, ), pp. - . ^ cather, . ^ cather, . ^ cather, . ^ michaels, walter benn. "the vanishing american." american literary history . ( ): - . web. dec . ^ wilson, sarah. "fragmentary and inconclusive violence: national history and literary form in the professor’s house". american literature, vol. , no. (september ): pp. - . ^ wilson, . ^ schroeter, j., "willa cather and the professor's house", yale review, , , - . ^ giannone, richard, music in willa cather’s fiction, university of nebraska press. ^ wilson, . ^ cather. ^ anders, john p., willa cather’s sexual aesthetics and the male homosexual literary tradition (lincoln: university of nebraska press). ^ wilson, "canonical relations: willa cather, america, and the professor’s house", texas studies in literature and language, vol. , no. , spring , university of texas press. external links the professor's house at the internet archive
kaseya case update | divd csirt
kaseya case update jul - victor gevers during the last hours, the number of kaseya vsa instances that are reachable from the internet has dropped from over . to less than in our last scan today. and, by working closely with our trusted partners and national certs, the number of servers in the netherlands has dropped to zero. a good demonstration of how a cooperative network of security-minded organizations can be very effective during a nasty crisis. by now, it is time to be a bit more clear about our role in this incident. first things first: yes, wietse boonstra, a divd researcher, previously identified a number of the zero-day vulnerabilities [cve- - ] which are currently being used in the ransomware attacks. and yes, we reported these vulnerabilities to kaseya under responsible disclosure guidelines (aka coordinated vulnerability disclosure). our research into these vulnerabilities is part of a larger project in which we investigate vulnerabilities in tools for system administration, specifically the administrative interfaces of these applications. these are products like vembu bdr, pulse vpn, and fortinet vpn, to name a few.
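the update above describes counting internet-reachable kaseya vsa instances scan after scan. a minimal sketch of such a reachability probe follows; this is illustrative only, not divd's actual tooling, and the "kaseya" page marker and the host list are assumptions made for demonstration:

```python
# Illustrative sketch: probe hosts for an exposed Kaseya VSA web interface.
# NOT DIVD's scanner; the page markers and hosts below are assumptions.
import ssl
import urllib.request


def looks_like_vsa(body: str) -> bool:
    """Heuristic: does an HTTP response body look like a VSA login page?"""
    markers = ("kaseya", "virtual system administrator")  # assumed markers
    lowered = body.lower()
    return any(m in lowered for m in markers)


def probe(host: str, timeout: float = 5.0) -> bool:
    """Fetch https://<host>/ and apply the heuristic; False on any error."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # scan targets often have bad certs
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with urllib.request.urlopen(
            f"https://{host}/", timeout=timeout, context=ctx
        ) as resp:
            return looks_like_vsa(resp.read(65536).decode("utf-8", "replace"))
    except Exception:
        # unreachable, refused, TLS failure, etc. all count as "not exposed"
        return False
```

repeating `sum(probe(h) for h in hosts)` over the same target list at intervals gives the kind of dropping count reported in the post.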
we are focusing on these types of products because we spotted a trend where more and more of the products that are used to keep networks safe and secure are showing structural weaknesses. after this crisis, there will be the question of who is to blame. from our side, we would like to mention that kaseya has been very cooperative. once kaseya was aware of our reported vulnerabilities, we were in constant contact and cooperation with them. when items in our report were unclear, they asked the right questions. also, partial patches were shared with us to validate their effectiveness. during the entire process, kaseya has shown that they were willing to put maximum effort and initiative into this case, both to get this issue fixed and to get their customers patched. they showed a genuine commitment to do the right thing. unfortunately, we were beaten by revil in the final sprint, as they could exploit the vulnerabilities before customers could even patch. after the first reports of ransomware occurred, we kept working with kaseya, giving our input on what happened and helping them cope with it. this included giving them lists of ip addresses and customer ids of customers that had not yet responded, whom they promptly contacted by phone. so, in summary: divd has been in a coordinated vulnerability disclosure process with kaseya, who was working on a patch. some of these vulnerabilities were used in this attack. kaseya and divd collaborated to limit the damage wherever possible. as more details become available, we will report them on our blog and case file. eleanor youmans: synopses and cover art ohio author eleanor youmans published a dozen children's novels with bobbs-merrill in the s, s, and s. she also wrote short stories and poetry. this blog is dedicated to preserving her life and work.
synopses and cover art novels skitter cat. indianapolis: bobbs-merrill, . pages. illustrated by ruth bennett; dedicated to w.c.y. [william clinton youmans, her son]. skitter cat is the story of a white persian kitten who comes to live with mother, father, and little boy—and their airedale, major. the story takes place in a small town in ohio. skitter gets his name from the sound his claws make skittering across the floor, like a dry leaf. his birthday is march , and the story takes place over the course of the first year of his life. he is a feisty kitten who likes to go after the neighbor’s chickens. in the fall, mother sends skitter to live with a friend for a few weeks, until it is too cold for the neighbor to have chickens outside any longer. skitter, however, escapes from the car taking him to the city, and lives outside for five months. his adventures include encounters with a fox, raccoon, opossum, owl, and skunk, and he survives by sleeping in a hollowed-out tree and catching mice, rabbits, and pheasants. he also spends a brief time at the city dump, where he is sighted. father builds a trap, and they capture skitter and bring him home, older and more dignified now than he had been in his kitten days. skitter cat and little boy. indianapolis: bobbs-merrill, . pages. illustrated by ruth bennett; dedicated to my dear son william clinton youmans. late in the summer, father receives an invitation to spend a month camping with a cousin who is spending the summer in the rocky mountains. father, mother, and little boy decide to go, taking skitter cat with them, and leaving major to stay with aunt maud. the family travels by train through chicago to the rockies. once at the encampment, there’s much to do, from trout fishing and shooting a popgun at gophers, to visiting waterfalls and going horseback riding.
there are also plentiful huckleberries, later to be made into pies and jams. the family visits a trapper, tim nolan, and his wife. the nolans’ airedales attack a porcupine, and the family must pull out the painful quills from the tender mouths and noses of the dogs. after skitter has many exciting encounters—with a mountain rat, a coyote, and even a couple of bears—the family returns home to a very happy major, who has missed them. the neighborhood children have newfound respect for skitter when they learn of his adventures with the bears. skitter cat and major. indianapolis: bobbs-merrill, . pages. illustrated by ruth bennett; dedicated to william. skitter and skeet. indianapolis: bobbs-merrill, . pages. illustrated by ruth bennett; dedicated to willem and helen [her son and his wife]. teddy horse: the story of a runaway pony. indianapolis: bobbs-merrill, . pages. illustrated by ruth king; dedicated to herbert b. haskins. teddy horse opens with the birth of a tiny shetland pony, named teddy horse because his brown coat resembles the teddy bear that once belonged to his owner’s son bill. teddy horse is not to stay for long with his mother susie, their human family, or the other pets on the farm, however; once his sleek black coat grows in, he is to be trained as a show pony on the farm next door. even so, his new home is anything but lonely, as teddy horse can still see his mother susie, bill and all his friends, nippy the bull terrier, and buffin the brown tabby cat. his trainer is friendly and patient, and encourages children to praise teddy horse with applause and sweets. the pony show—which includes performances by calico pony spot, brown mule dynamite, tiny monkey jimmy, and white collie snow—goes on circuit tour, leisurely making its way south, then west. on the plains, teddy horse is accidentally thrown from a truck and learns to fend for himself for several months.
he escapes a pack of coyotes, joins a herd of wild mustangs, befriends a wayward buckskin mare and her colt, and finally makes his way to his home town when he, the buckskin, and the mustangs are rounded up by cowboys and sold to a polo horse breeder in the east. mistaken only momentarily for a colt, the tiny shetland is of little use to the breeder, and is hardly missed when he makes an unlikely escape, applying his learned pony show antics to make his way toward a familiar-looking barn. buffin and nippy are the first to greet their long-lost companion, followed by a very shocked bill and his father. it seems that teddy horse traveled over a thousand miles to make it back home. his coat, mane, and tail are promptly groomed, and teddy horse happily resumes his role in the pony show, safe and sound. teddy horse: the story of a runaway pony. london: elkin mathews and marrot, . pages. illustrated by ruth king; dedicated to herbert b. haskins. the london-based elkin mathews and marrot edition of teddy horse is virtually identical to its american, bobbs-merrill predecessor except for smaller font and page sizes, resulting in differences in pagination and text placement, and—more significantly—choice of vernacular. youmans told the newark advocate that “when ‘teddy horse’ went on british book shelves….the editors did some blue pencilling and changed certain words, which they referred to as ‘americanisms.’ examples of the editing: ‘sure’ to ‘certainly,’ ‘curb’ to ‘kerb,’ ‘burned gasoline’ to ‘petrol fumes,’ ‘gee, he’s cute,’ to ‘isn’t he topping?’ ‘cunning little colt,’ to ‘jolly little colt.’” cinder: the tale of a black and tan toy terrier. indianapolis: bobbs-merrill, . pages. illustrated by f. bernard shields; dedicated to richard a. youmans [her grandson]. cinder is a black and tan toy terrier pup, noted for the white spots on one toe of each foot. living with her playwright owner, mr.
george, she has many adventures, including a ride down a flooded river, a bout with a mud turtle, and even a role in a stage play. cinder runs off a few times at the beginning of the story, during which time she is cared for by a little girl named natalie, and she often wanders into the yard next door belonging to another young girl, ida, and her black cat, inky, before returning home to mr. george and cinder’s mother, queenie. at one point, cinder becomes momentarily jealous of her newborn puppy brother, budge. the central adventure of the novel, however, is cinder’s performance in a theater production authored by mr. george. cinder becomes depressed when the theater group begins a tour that requires her to travel without her master. the starring actress, apparently famous, takes to cinder and helps her feel more comfortable in mr. george’s absence. by the end of the story, mr. george and the actress are married, and cinder makes an unexpected grand entrance into the ceremony. little dog mack: the story of a wirehaired terrier. indianapolis: bobbs-merrill, . pages. illustrated by van trenck; dedicated to e.h.w. nine-year-old twins ralph and rachel receive mack, a three-month-old wire-haired fox terrier, for christmas. the little white dog with black and tan spots loves to chase other dogs. one day, he is lured away from home by a dishonest kennel owner who uses his dog, mitzi, as bait. mack is unhappy at the kennel, where he is deprived of human companionship and made to eat food he dislikes. he fathers four pups with mitzi, and while she is busy with her infants, the son of the kennel owner takes mack along in the car as a guard dog. there is an accident, which enables mack’s escape. he visits a farm where he saves young johnny from a bull. johnny is allowed to keep mack only until his owner is found. johnny suspects that mack doesn’t rightfully belong to the kennel and helps the little dog escape again.
on his long trek home, mack meets other stray dogs and steals a ham bone. finally, he reaches his real home, after traveling about miles. ralph and rachel are ecstatic at his return, and the family cat, jet, learns to adore him. waif: the story of spe. indianapolis: bobbs-merrill, . pages. illustrated by will rannells. one christmas eve, a woman spots waif—a mongrel puppy, part spitz, perhaps part pekinese—in the window of a pet shop in columbus, ohio. she decides to purchase the pup as a gift for her brother and sister-in-law. as it turns out, the couple does not want the dog. rather than taking their new charge to the humane society, they simply drop her on the ohio state university campus, next to mirror lake. despite the couple’s heartless disposal of the defenseless puppy, young waif enjoys her freedom, fending for herself on campus and in its surrounding alleyways. when school is back in session, she begs for scraps at a nearby fraternity house. the sigma phi epsilon brothers adopt the dog as their mascot, calling her “spe,” an acronym of their greek letters. spe is an obedient pet, following the various fraternity members to their classes, where she quietly sits through lectures. she also attends football games, marches on the field with the band, and helps raise money with the charity newsies, all the while becoming quite popular with ohio state students. she spends a few nights in the local animal shelter when two of the frat brothers who dislike dogs try to get rid of her. one night after returning from the shelter, however, spe wins everyone over when she alerts the house members to a burglar. during school breaks, spe stays with a university art professor—modeled after will rannells—who specializes in dog portraits and owns several dogs and cats.
the novel closes with a dinner for dogs at the local humane society shelter—with spe as guest of honor, of course—organized by rosemary and raymond, the two children who live next to the sigep house and who greatly admire spe. the great adventures of jack, jock and funny. indianapolis: bobbs-merrill, . pages. illustrated by will rannells; dedicated to charles l. hirsch, who owns funny. twelve-year-old twins charles and chester gray open a kennel on the family’s farm after a stray dog shows up at school. the all-white collie, jack, is joined by several other boarders over the summer, including a scottish terrier named jock, two pointers named rusty and sam, and a prissy cocker spaniel named sallie. charles’s and chester’s six-year-old sister, annabel—who already has a pet rooster named demosthenes—adds thistle, a stray kitten, to the growing menagerie. as the children tend to their kennel, they anticipate their grandfather’s arrival any day. grandpa gray, retiring from his jewelry store business, is coming to live with his son’s family on the farm. shortly before venturing to the farm, he acquires funny, a scruffy poodle and former performer in a dog act. grandpa gray’s train wrecks on the way into town, and funny runs away. the twins find the small dog, as the wreck is not far from the farm, but their grandfather develops amnesia and wanders off. the family is puzzled as to grandpa gray’s whereabouts, but suspects he might be visiting his brother. meanwhile, jock and funny are stolen, but jack rescues them. annabel takes funny with her to pick blackberries, and the dog leads her to the front porch of the family’s nearest neighbor—an old, unfriendly hermit. on the porch sits grandpa gray. not realizing the grandfather’s identity, the neighbor has been taking care of him. once annabel recognizes her grandfather and explains to him that he was in a train accident, grandpa gray’s memory returns and he is reunited with the family on the farm.
the forest road: two boys in the ozarks. indianapolis: bobbs-merrill, . pages. illustrated by alma wentzel froderstrom; dedicated to emerson evans [her cousin, the nephew of her welsh grandmother, eleanor evans]. on a combined vacation and business venture, eleven-year-old robert brown and his father turn down the wrong country road, and the trailer they are pulling with their car gets stuck in the mud. francis smith—a country boy the same age as robert—happens across the scene, riding his horse, ben, on the way home from school. robert’s cat, whiskers, runs up a tree, and francis—with the aid of ben—rescues him. the two boys become fast friends, and robert enjoys roaming the countryside and the resident swamp with francis—and francis’s beagle, sounder. francis doesn’t seem to like his father, japp. as it turns out, japp is francis’s stepfather; his real father, mr. lynn, passed away a year before. his mother is also not his biological parent, but his father’s second wife, his real mother having died when francis was an infant. japp doesn’t think francis should spend time playing music, and takes away his violin. he also insists that francis use his last name and call him dad, prompting francis to plan to run away. the boys soon learn, however, that francis is actually robert’s cousin, as robert’s father is the brother of francis’s real mother. in fact, the purpose of the trip to the country is two-fold; mr. brown is there to collect plant specimens from the swamp for the government, but he is also there to see how his son and nephew get along, because francis’s father asked him to adopt his son before he died. along the way, the boys put on a show for all of francis’s country neighbors, involving magic tricks performed by robert, sounder singing in accompaniment with francis’s harmonica, and a lecture on swamp plants by mr. brown.
the story has a happy ending, with francis leaving with the browns, whiskers, and sounder, to start a new life where he can freely pursue music. timmy: the dog that was different. indianapolis: bobbs-merrill, . pages. illustrated by will rannells; dedicated to the boys and girls who liked jack, jock and funny. in this sequel to the great adventures of jack, jock and funny, annabel and her twin brothers, charles and chester, live with their father (an insurance salesman), mother (a homemaker), and grandfather (a retired store owner) on a farm. for a summer job, charles and chester decide to run a kennel as they had the previous summer, using the old horse stalls on the property for the dogs. their first client is the old hermit who lives next door, mr. smith. his friend has a cocker spaniel named timmy that needs boarding, on the condition that he sleep next to someone’s bed. a variety of dogs come to stay at the kennel over the summer, but timmy stands apart from the others because he is deaf, and there is some mystery as to his owner’s relationship to mr. smith and their foreign heritage. annabel believes timmy belongs to a princess, and in the end, we learn that timmy in fact belongs to an exiled royal family making their way from persia to canada. mount delightful: the story of ellen evans and her dog taffy. indianapolis: bobbs-merrill, . pages. illustrated by sandra james; dedicated to ada norman clark [adah clark was librarian of pataskala public library]. the skitter cat book. indianapolis: bobbs-merrill, . pages. illustrated by j. j. taber; dedicated to william c. youmans, the little boy of the story. short stories "hearts by freight." boston daily globe ( may ): . words. "the man who wanted a dog that would kill." the american magazine . (oct. ): +. words. illustrated by douglas duer. dudley forrester, appropriately nicknamed “dud,” marries zoe forrester and takes her west after her parents suddenly pass away.
he is abusive and goes so far as to shoot her dog, rover, because the dog is unable to protect the farm animals from mountain lions. dud replaces rover with a young airedale that he calls "frowsy." zoe dislikes the name, and refers to the puppy as "duffy" instead. duffy is ferocious enough to hold no fear of the mountain lions, yet retains a gentle nature and forms an affectionate bond with zoe. despite her love for the dog, zoe surmises that dud stole the airedale from a kennel and confronts her husband with her suspicions. following her accusation, dud attempts to kick his wife. in response to dud’s violent outburst, duffy lunges at the man in order to protect zoe. unable to get a stranglehold on his opponent as he has been trained to do, the airedale cannot physically harm zoe’s attacker; instead, duffy literally scares dud to death. the man who wanted a dog to kill has gotten exactly that. the true owner of the dog soon arrives—it is david moore, zoe’s cousin. thus, she is reunited with her family, freed of her abusive husband, and will get to stay with her savior, duffy. "the adventures of skitter cat." dinty the porcupine and other stories. true story series. book three. eds. clara b. baker and edna d. baker. bobbs-merrill, . - . illustrated by vera stone norman. "skitter cat." winding roads. eds. wilhelmina harper and aymer jay hamilton. new york: macmillan, . - . "cinder." child life . (june ): - . words. illustrated by ruth eger. “the kidnapped pup.” junior home (june-july ). - . illustrated by don nelson. "cinder and inky." child life . (sept. ): - . illustrated by ruth eger. "cinder and inky again." child life . (aug. ): - . words. illustrated by ruth eger. inky, a black kitten who is a secondary character in the longer work cinder, is the central protagonist of this short story. fascinated by the robins nesting in his backyard apple tree, inky climbs up the limbs to get a closer look.
unfortunately for him, he peers a little too close and the feathered parents attack. despite the protest from the birds, inky has climbed too high in the tree and is unable to retreat. hearing his frantic mews and the robins’ squawking, cinder—the tan toy terrier who lives next door—comes to her feline friend’s rescue. she barks and jumps at the base of the tree until her owner notices the commotion and rigs a meat-laden basket up into the branches to facilitate a safe landing for the curious kitten. "camping in the rocky mountains." friends around the world. the curriculum readers, vol. . eds. clara belle baker, mary maud reed, and edna dean baker. indianapolis: bobbs-merrill, . - . "teddy horse." friends around the world. the curriculum readers, vol. . eds. clara belle baker, mary maud reed, and edna dean baker. indianapolis: bobbs-merrill, . - . "baby seal." friends here and away. the curriculum readers, vol. . eds. clara belle baker, mary maud reed, and edna dean baker. indianapolis: bobbs-merrill, . - . "skitter in the well." friends here and away. the curriculum readers, vol. . eds. clara belle baker, mary maud reed, and edna dean baker. indianapolis: bobbs-merrill, . - . "skitter cat and major." the road to safety around the year. eds. horace mann buckley, margaret l. white, alice b. adams, and leslie r. silvernale. new york: american book company, . - . illustrated by ruth bennett. [photo: eleanor williams youmans with black and tan terrier, jill, and, barely visible, black and white cat, whiskers.] short stories "hearts by freight." associated literary press, mcclure syndicate ( ). “the man who wanted a dog that would kill.” the american magazine . (oct. ): +. illustrated by douglas duer. "the adventures of skitter cat." dinty the porcupine and other stories. true story series.
book three. eds. clara b. baker and edna d. baker. indianapolis: bobbs-merrill, . - . illustrated by vera stone norman. "skitter cat." winding roads. eds. wilhelmina harper and aymer jay hamilton. new york: macmillan, . - . illustrated by maud and miska petersham. "cinder." child life . (june ): - . illustrated by ruth eger. “the kidnapped pup.” junior home (june-july ): - , . "cinder and inky." child life . (sept. ): - . illustrated by ruth eger. "cinder and inky again." child life . (aug. ): - . illustrated by ruth eger. "camping in the rocky mountains." friends around the world. the curriculum readers, vol. . eds. clara belle baker, mary maud reed, and edna dean baker. indianapolis: bobbs-merrill, . - . illustrated by vera stone norman. "teddy horse." friends around the world. the curriculum readers, vol. . eds. clara belle baker, mary maud reed, and edna dean baker. indianapolis: bobbs-merrill, . - . illustrated by vera stone norman. "baby seal." friends here and away. the curriculum readers, vol. . eds. clara belle baker, mary maud reed, and edna dean baker. indianapolis: bobbs-merrill, . - . illustrated by vera stone norman and mildred lyon hetherington. "skitter in the well." friends here and away. the curriculum readers, vol. . clara belle baker, mary maud reed, and edna dean baker, eds. indianapolis: bobbs-merrill, . - . illustrated by vera stone norman and mildred lyon hetherington. "skitter cat and major." the road to safety around the year. eds. horace mann buckley, margaret l. white, alice b. adams, and leslie r. silvernale. new york: american book company, . - . and other stories appearing in the cat courier ( - ), mother's magazine ( - ), fur and feathers (london, sept. ), junior home, and the animals' magazine (london, before ). novels skitter cat. illustrated by ruth bennett. indianapolis: bobbs-merrill, . skitter cat and little boy. illustrated by ruth bennett. indianapolis: bobbs-merrill, . skitter cat and major. illustrated by ruth bennett. 
indianapolis: bobbs-merrill, . skitter and skeet. illustrated by ruth bennett. indianapolis: bobbs-merrill, . teddy horse—the story of a runaway pony. illustrated by ruth king. indianapolis: bobbs-merrill, . teddy horse—the story of a runaway pony. london: elkin mathews and marrot, . cinder—the tale of a black and tan toy terrier. illustrated by f. bernard shields. indianapolis: bobbs-merrill, . little dog mack—the story of a wirehaired terrier. illustrated by van trenck. indianapolis: bobbs-merrill, . waif: the story of spe. illustrated by and written in collaboration with will rannells. indianapolis: bobbs-merrill, . the great adventures of jack, jock and funny. illustrated by will rannells. indianapolis: bobbs-merrill, . the forest road: two boys in the ozarks. illustrated by alma wentzel froderstrom. indianapolis: bobbs-merrill, . timmy: the dog that was different. illustrated by will rannells. indianapolis: bobbs-merrill, . mount delightful: the story of ellen evans and her dog taffy. illustrated by sandra james. indianapolis: bobbs-merrill, . the skitter cat book. illustrated by j. j. taber. indianapolis: bobbs-merrill, . poems “the alward school.” ( ; ) pataskala standard. golden edition. may . “pussy's purr.” skitter cat. indianapolis, in: bobbs-merrill, : . “a letter to major.” skitter cat and little boy. indianapolis, in: bobbs-merrill, : . “the long tryst.” ( ; ) pataskala standard oct. : a “little dog lost.” little dog mack. indianapolis, in: bobbs-merrill, : - . and other poems in the pataskala standard and columbus dispatch, - . essays “how to make finger stalls for a little boy’s sore fingers out of old kid gloves.” mother’s magazine, - . "manhattan tour." berkeley daily gazette dec. : . "the alward school." pataskala standard. golden edition. may . 
unpublished works blooming valley (novel) the heart of ohio (other working titles: no wolves to howl; the sunlit forest) (novel) “the house with six front doors.” (short story) tony of texas (short story) about me jaclyn cruikshank vogt is photo librarian for nebraskaland magazine at the nebraska game and parks commission. she holds a phd in english and women’s and gender studies from the university of nebraska-lincoln, where she specialized in late twentieth-century american literature, women writers, and violence studies. jaclyn has worked as a research assistant at the center for digital research in the humanities, as a writing center consultant, esl/ell tutor, and graduate studies coordinator, and has taught a variety of composition, literature, and women’s and gender studies courses. she enjoys jewelry making, spending time with her husband and three cats in their , square foot prairie garden, and recovering the life and work of ohio children’s author eleanor youmans. philip agre from wikipedia, the free encyclopedia internet researcher and educator philip e. 
agre is an ai researcher turned humanities professor, formerly a faculty member at the university of california, los angeles. he is known for his critiques of technology.[ ] he was successively the publisher of the network observer (tno) and the red rock eater news service (rre). tno ran from january to july . rre, an influential mailing list he started in the mid- s, ran for around a decade. a mix of news, internet policy and politics, rre served as a model for many of today's political blogs and online newsletters. agre was reported missing on october , . he was found on january , , but never returned to public life.[ ] biography agre grew up in maryland and attended college early.[ ] agre and his collaborator david chapman started their phds under the supervision of michael brady at the mit ai lab. upon brady's departure for oxford, they switched to a then-recent arrival at the laboratory, rodney brooks. brooks gave the two young scientists relatively free rein, and together the three were seen as early leading researchers in nouvelle ai, an approach to artificial intelligence emphasizing behavior as emerging in interaction with the environment rather than the entire codification of behavior. this is illustrated by agre and chapman's article, "what are plans for?"[ ] this work is considered seminal to reactive planning, though neither researcher approved of the term. agre received his doctorate in electrical engineering and computer science from mit in .[ ] he went on to take up a position in the university of chicago department of computer science, later joining the school of cognitive and computing sciences (now the school of informatics) at the university of sussex and finally the department of communication at the university of california, san diego. 
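the reactive-planning style associated with agre and chapman can be illustrated with a toy sketch: no plan is built or stored; at each step the agent consults prioritized condition-action rules against its current percept, so behavior emerges from ongoing interaction with the environment. everything below (the 1-d world, the rule set, the function names) is invented for illustration and is not agre's code:

```python
# Toy reactive controller in the spirit of nouvelle AI / reactive
# planning. No plan is constructed; on every step the highest-
# priority rule whose condition matches the current percept fires.
# The world, percepts, and rules are invented for illustration.

def reactive_step(percept):
    """Return an action from prioritized condition-action rules."""
    rules = [  # highest priority first
        (lambda p: p["obstacle_ahead"], "turn"),
        (lambda p: p["at_goal"], "stop"),
        (lambda p: True, "forward"),  # default behavior
    ]
    for condition, action in rules:
        if condition(percept):
            return action

def run(world, steps=10):
    """Drive a 1-d agent toward a goal, reacting as it goes."""
    trace = []
    for _ in range(steps):
        percept = {
            "obstacle_ahead": world["pos"] + 1 in world["obstacles"],
            "at_goal": world["pos"] == world["goal"],
        }
        action = reactive_step(percept)
        trace.append(action)
        if action == "forward":
            world["pos"] += 1
        elif action == "turn":
            world["obstacles"].discard(world["pos"] + 1)  # step around it
        elif action == "stop":
            break
    return trace

world = {"pos": 0, "goal": 3, "obstacles": {2}}
print(run(world))  # → ['forward', 'turn', 'forward', 'forward', 'stop']
```

the point of the sketch is that the sequence of actions is never represented anywhere; it falls out of repeated percept-to-action mappings, which is the contrast with classical planning that "what are plans for?" drew.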
"surveillance and capture" agre's essay "surveillance and capture" deals with privacy and surveillance issues made possible by our constantly evolving technological age. influential works preceding this essay include george orwell's nineteen eighty-four ( ), hans magnus enzensberger's constituents of a theory of the media ( ), and michel foucault's works surrounding the concept of panopticism.[ ] foucault argues that a constant exercise of such surveillance is not necessary, since its mere possibility induces self-restrained action among the inmates.[ ] disappearance on october , , agre's sister filed a missing persons report for agre.[ ] she indicated that she had not seen him since the spring of and became concerned when she learned that he had abandoned his apartment and job sometime between december and may .[ ] agre was found by the la county sheriff's department on january , , and was deemed in good health and self-sufficient.[ ] publications books and chapters agre, philip e. ( ). "social skills and the progress of citizenship". in feenberg, andrew; barney, darin (eds.). community in the digital age: philosophy and practice. lanham: rowman & littlefield. isbn  - - - - . —————— ( ). "internet research: for and against". in consalvo, mia; baym, nancy; hunsinger, jeremy; jensen, klaus bruhn; logie, john; murero, monica; shade, leslie regan (eds.). internet research annual: selected papers from the association of internet researchers conferences, - . new york: peter lang. isbn  - - - - . —————— ( ). "information and institutional change: the case of digital libraries". in bishop, ann p.; van house, nancy a.; buttenfield, barbara p. (eds.). digital library use: social practice in design and evaluation. cambridge: mit press. isbn  - - - - . —————— ( ). "writing and representation". in mateas, michael; sengers, phoebe (eds.). narrative intelligence. amsterdam: john benjamins publishing company. isbn  - - - - . —————— ( ). 
"the practical logic of computer work". in scheutz, matthias (ed.). computationalism: new directions. cambridge: mit press. isbn  - - - - . —————— ( ). "designing genres for new media: social, economic, and political contexts". in jones, steven g. (ed.). cybersociety . : revisiting computer-mediated communication and community. thousand oaks: sage publications. pp.  – . doi: . / .n . isbn  - - - - . —————— ( ). computation and human experience. cambridge: cambridge university press. isbn  - - - - . —————— ( ). "living math: lave and walkerdine on the meaning of everyday arithmetic". in kirshner, david; whitson, james a. (eds.). situated cognition: social, semiotic, and psychological perspectives. mahwah: lawrence erlbaum associates, publishers. isbn  - - - - . . —————— ( ). "introduction: computing as a social practice". in agre, philip e.; schuler, douglas (eds.). reinventing technology, rediscovering community: critical explorations of computing as a social practice. greenwich: ablex publishing. isbn  - - - - . —————— ( ). "beyond the mirror world: privacy and the representational practices of computing". in agre, philip e.; rotenberg, marc (eds.). technology and privacy: the new landscape. cambridge: mit press. isbn  - - - - . —————— ( ). "toward a critical technical practice: lessons learned in trying to reform ai". in bowker, geoffrey c; gasser, les; star, susan leigh; turner, william (eds.). bridging the great divide: social science, technical systems, and cooperative work. mahwah: lawrence erlbaum associates, publishers. isbn  - - - - . —————— ( ). "introduction: computational theories of interaction and agency". in agre, philip e.; rosenschein, stanley j. (eds.). computational theories of interaction and agency. cambridge: mit press. isbn  - - - - . selected academic works[edit] agre, philip e. ( ). "hierarchy and history in simon's "architecture of complexity"". journal of the learning sciences. ( ): – . doi: . /s jls _ – via taylor & francis online. 
—————— ( ). "p p and the promise of internet equality". communications of the acm. ( ): – . doi: . / . – via acm digital library. —————— ( ). "real-time politics: the internet and the political process". the information society. ( ): – . doi: . / – via taylor & francis online. —————— ( ). "cyberspace as american culture". science as culture. ( ): – . doi: . / – via taylor & francis online. —————— ( ). "changing places: contexts of awareness in computing". human-computer interaction. ( – ): – . doi: . /s hci _ – via taylor & francis online. —————— ( ). "supporting the intellectual life of a democratic society". ethics and information technology. ( ): – . doi: . /a: – via springerlink. —————— ( ). "the market logic of information". knowledge, technology & policy. ( ): – . doi: . /s - - -y – via springerlink. —————— ( ). "commodity and community: institutional design for the networked university". planning for higher education. ( ): – – via society for college and university planning. —————— ( ). "infrastructure and institutional change in the networked university". information, communication & society. ( ): – . doi: . / – via taylor & francis online. —————— ( ). "the distances of education". academe. ( ): – – via university of california, los angeles. —————— ( ). "information technology in higher education: the "global academic village" and intellectual standardization". on the horizon. ( ): – . issn  - – via university of california, los angeles. —————— ( ). "the architecture of identity: embedding privacy in market institutions". information, communication & society. ( ): – . doi: . / – via taylor & francis online. ——————; horswill, ian ( ). "lifeworld analysis" (pdf). journal of artificial intelligence research. ( ): – – via google scholar. —————— ( ). "institutional circuitry: thinking about the forms and uses of information". information technology and libraries. ( ): – – via university of california, los angeles. —————— ( ). 
guzeldere, guven; franchi, stefano (eds.). "constructions of the mind: artificial intelligence and the humanities". stanford humanities review. ( ): – – via university of california, los angeles. —————— ( ). "the assq chip and its progeny". mit artificial intelligence laboratory working papers (wp- ). mit artificial intelligence laboratory: – – via mit libraries. other articles in the media agre, philip e. (december ). "welcome to the always-on world". ieee spectrum. pp.  , . archived from the original on december , . —————— (winter ). "your face is not a bar code: arguments against automatic face recognition in public places". whole earth review. no.  . pp.  – . —————— ( ). "life after cyberspace". easst review. vol.  no.  / . european association for the study of science and technology. pp.  – . —————— (july , ). "yesterday's tomorrow". times literary supplement. news uk. pp.  – . issn  - x. references ^ a b c "he predicted the dark side of the internet years ago. why did no one listen?". washington post. issn  - . retrieved - - . ^ agre, phil; chapman, david ( ). "what are plans for?". robotics and autonomous systems. ( – ): – . doi: . /s - ( ) - . hdl: . / – via sciencedirect. ^ "the dynamic structure of everyday life". dspace.mit.edu. massachusetts institute of technology. hdl: . / . archived from the original on september , . retrieved - - . ^ a b montfort, nick, and noah wardrip-fruin. "surveillance and capture: two models of privacy." the new media reader. cambridge, mass.: mit, . - . print. ^ a b pescovitz, david (november , ). "missing: phil agre, internet scholar". boing boing. retrieved january , . ^ carvin, andy (january , ). "missing internet pioneer phil agre is found alive". npr. retrieved january , . 
external links former home page for agre at ucla 
cardano (blockchain platform) from wikipedia, the free encyclopedia public blockchain platform
cardano
original author(s): charles hoskinson
developer(s): cardano foundation, iohk, emurgo
initial release: september [ ]
stable release: . . / may [ ]
development status: active
written in: haskell
operating system: cross-platform
type: distributed computing
license: apache license
active hosts: , [ ]
website: cardano.org
cardano is a public blockchain platform. it is open-source and decentralized, with consensus achieved using proof of stake. it can facilitate peer-to-peer transactions with its internal cryptocurrency, ada.[ ] cardano was founded in by ethereum co-founder charles hoskinson. the development of the project is overseen and supervised by the cardano foundation based in zug, switzerland.[ ][ ] background the platform began development in and was launched in by charles hoskinson, a co-founder of ethereum.[ ][ ][ ] hoskinson left ethereum after a dispute with its co-founder vitalik buterin; hoskinson wanted to accept venture capital and create a for-profit entity while buterin wanted to keep it running as a nonprofit organization. after leaving he co-founded iohk, a blockchain engineering company, whose primary business is the development of cardano, alongside the cardano foundation and emurgo.[ ] the platform is named after gerolamo cardano and the cryptocurrency after ada lovelace.[ ] technical aspects atypically, cardano does not have a white paper. 
instead, it uses design principles intended to overcome issues faced by other cryptocurrencies, such as scalability, interoperability, and regulatory compliance.[ ] cardano uses a proof-of-stake protocol named ouroboros,[ ] in contrast to bitcoin and ethereum, which use proof-of-work protocols.[ ] proof-of-stake blockchains use significantly less energy than proof-of-work chains.[ ] in february , hoskinson estimated the cardano network used gwh annually, less than . % of the . twh used by the bitcoin network as estimated by the university of cambridge.[ ][ ] cardano reached a "market cap" of $ billion in may and was considered the biggest proof-of-stake cryptocurrency.[ ][ ] within the cardano platform, ada exists on the settlement layer. this layer is similar to bitcoin and keeps track of transactions. the second layer is the computation layer. this layer is designed to be similar to ethereum, enabling smart contracts and applications to run on the platform.[ ] cardano expects to implement decentralized finance (defi) services in with an upgrade to enable smart contracts and the ability to build decentralized applications (dapps). also included is plutus, a turing-complete smart contract language written in haskell, and a specialised smart contract language, marlowe, designed for non-programmers in the financial sector. cardano's smart contract languages allow developers to run end-to-end tests on their program without leaving the integrated development environment or deploying their code.[ ][ ] history cardano was funded through an initial coin offering.[ ] the currency debuted with a market cap of $ million. by the end of , it had a market cap of $ billion, and reached a value of $ billion briefly in before a general tightening of the crypto market dropped its value back to $ billion. 
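the ouroboros papers cited above specify the actual protocol; as a toy illustration of the core idea described under technical aspects — slot leaders drawn with probability proportional to their stake — here is a minimal python sketch. the function names, the sha-256 stand-in for the protocol's randomness beacon, and the example stakes are all invented for illustration, not cardano's implementation:

```python
import hashlib
import random

def elect_slot_leader(stakes, slot, epoch_seed):
    """Pick a slot leader with probability proportional to stake.

    stakes: dict mapping stakeholder id -> stake amount.
    A real protocol derives its randomness from a verifiable
    multiparty beacon; hashing the epoch seed and slot number is
    a stand-in used here purely for illustration.
    """
    # deterministic pseudo-random value for this (seed, slot) pair
    digest = hashlib.sha256(f"{epoch_seed}:{slot}".encode()).digest()
    rng = random.Random(digest)
    total = sum(stakes.values())
    target = rng.uniform(0, total)
    running = 0.0
    for holder, stake in sorted(stakes.items()):
        running += stake
        if target <= running:
            return holder
    return holder  # guard against floating-point edge cases

stakes = {"alice": 60, "bob": 30, "carol": 10}
leaders = [elect_slot_leader(stakes, s, "epoch-1") for s in range(1000)]
print(leaders.count("alice") / 1000)  # typically close to 0.6
```

because the draw is deterministic given the shared seed and slot number, every node computes the same leader schedule, while over many slots each stakeholder leads in proportion to its stake, which is why proof of stake avoids the hash-grinding energy cost of proof of work.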
according to mashable, cardano claims that it overcomes existing problems in the crypto market: mainly that bitcoin is too slow and inflexible, and that ethereum is not safe or scalable.[ ] iohk has partnered with universities for blockchain research. in , iohk helped the university of edinburgh launch the blockchain technology laboratory.[ ][ ][ ] in , iohk donated $ , in ada to the university of wyoming to support the development of blockchain technology.[ ] in , the ministry of education in georgia signed a memorandum of understanding with the free university of tbilisi to use cardano and atala to build a credential verification system for georgia.[ ] in , footwear manufacturer new balance announced a pilot program on the cardano blockchain to track the authenticity of its newest basketball shoe.[ ] iohk announced a partnership with the ethiopian government in to deploy their technology in a variety of industries throughout the country.[ ] in april , iohk and the ethiopia ministry of education announced plans to launch an identity and record-keeping system on cardano for the country's five million students.[ ] references ^ "releases - input-output-hk/cardano-sl". retrieved october – via github. ^ "releases - input-output-hk/cardano-node". retrieved may – via github. ^ "cardano pooltool". pooltool.io. retrieved may . ^ "die grundlagen der cardano-kryptowährung". hamburg-magazin.de (in german). june . retrieved july . ^ "bitcoin's smaller cousins". bloomberg l.p. december . archived from the original on june . retrieved december . cardano, backed by the zug, switzerland-based cardano foundation, is a decentralized public blockchain that aims to protect user privacy, while also allowing for regulation ^ "zug: ex-tezos-mann geht zu cardano". luzernerzeitung.ch (in german). luzerner zeitung. february . retrieved july . ^ "ethereum cofounder says blockchain presents 'governance crisis'". fortune. retrieved april . ^ "icos explained". cnbc. october . 
retrieved december . ethereum co-founder charles hoskinson says it has become increasingly more challenging to regulate this new asset class" and "ico market could crash ^ a b au-yeung, angel ( february ). "a fight over ethereum led a cofounder to even greater crypto wealth". forbes magazine. retrieved july . iohk's key project: cardano, a public blockchain and smart-contract platform which hosts the ada cryptocurrency. ^ "what is ada?". retrieved october . ^ "the blockchain galaxy a comprehensive research on distributed ledger technologies" (pdf). deloitte. may . retrieved october . the distinctive feature of cardano is its “research-first” approach to design. ^ badertscher, christian; gaži, peter; kiayias, aggelos; russell, alexander; zikas, vassilis ( january ). "ouroboros genesis: composable proof-of-stake blockchains with dynamic availability". proceedings of the acm sigsac conference on computer and communications security. ccs ' . toronto, canada: association for computing machinery: – . doi: . / . . hdl: . . / f f- d - - bc- f a c. isbn  - - - - . s cid  . ^ a b volpicelli, gian m. "a blockchain tweak could fix crypto's colossal energy problem". wired uk. issn  - . retrieved may . ^ "bitcoin's wild ride renews worries about its massive carbon footprint". cnbc. february . retrieved february . bitcoin has a carbon footprint comparable to that of new zealand ^ ponciano, jonathan. "cardano surges during $ billion crypto crash as musk eyes sustainable bitcoin alternatives". forbes. retrieved may . ^ "what is cardano? the 'green' crypto that hopes to surpass the tech giants". the independent. may . retrieved june . ^ "cryptocurrency goes green: could 'proof of stake' offer a solution to energy concerns?". nbc news. retrieved june . ^ a b "cardano: a rising cryptocurrency". mashable. february . retrieved december . 
cardano claims it will solve most of the issues that plague well-established cryptocurrencies such as bitcoin and ethereum ^ "say hello to iohk's new cardano blockchain tools, plutus and marlowe". crowdfund insider. december . retrieved february . where programming ethereum requires coding in two languages, solidity for the on-chain code and javascript for the off-chain parts, and other systems suffer a similar split, plutus is the only system that provides an integrated language for both, based on haskell ^ emma newbery ( ) “why cardano could be an 'ethereum killer'”, the motley fool, july . https://www.fool.com/the-ascent/cryptocurrency/articles/why-cardano-could-be-an-ethereum-killer/ ^ "cryptocurrencies and blockchain" (pdf). european parliament. july . retrieved october . what distinguishes cardano from ethereum, and from many other cryptocurrencies, is that it is (one of the first) blockchain projects to be developed and designed from a scientific philosophy by a team of leading academics and engineers ^ "beyond bitcoin - iohk and university of edinburgh establish blockchain technology laboratory". the university of edinburgh. retrieved march . ^ "iohk and university of edinburgh establish blockchain technology laboratory". finextra.com. retrieved october . ^ "the university of edinburgh is launching a blockchain research lab with one of the cofounders of ethereum". www.insider.com. retrieved october . ^ "uw receives $ , gift in ada cryptocurrency from iohk". www.uwyo.edu. retrieved october . ^ "ministry of education signs deal with cardano atala". forbes georgia. june . retrieved march . ^ "sneakers meet the blockchain in new balance shoe authenticity pilot". siliconangle. october . retrieved march . ^ phillips, ruari ( june ). "ethiopian government-cardano technology team up on blockchain". east african business week. retrieved march . ^ sorkin, andrew ross; karaian, jason; kessler, sarah; merced, michael j. de la; hirsch, lauren; livni, ephrat ( april ). 
"tesla makes money (including from selling cars)". the new york times. issn  - . retrieved april . external links project website cardano foundation iohk website iohk on youtube official telegram group 
ibm rt pc from wikipedia, the free encyclopedia early risc workstation from ibm 
ibm rt pc (risc technology personal computer)
developer: ibm / ibm research
manufacturer: ibm
type: workstation computers
release date: ( )
discontinued: may
operating system: aix operating system, academic operating system (aos), or pick operating system
cpu: ibm romp
memory: mb ram, expandable to mb
successor: ibm rs/
the ibm rt pc (risc technology personal computer) is a family of workstation computers from ibm introduced in . these were the first commercial computers from ibm that were based on a reduced instruction set computer (risc) architecture. the rt pc used ibm's proprietary romp microprocessor, which commercialized technologies pioneered by an experimental risc minicomputer at ibm research. the rt pc ran three operating systems: aix, the academic operating system (aos), or pick. the rt pc's performance was relatively poor compared to other contemporary workstations, and it had little commercial success as a result; ibm responded by introducing the rs/ workstations in , which used a new ibm-proprietary risc processor, the power . all rt pc models were discontinued by may . hardware two basic types were produced: a floor-standing desk-side tower, and a table-top desktop. both types featured a special board slot for the processor card, as well as machine-specific ram cards. each machine had one processor slot, one co-processor slot, and two ram slots. there were three versions of the processor card: the standard processor card had a .  mhz clock rate (  ns cycle time),  mb of standard memory (expandable via , , or  mb memory boards). it could be accompanied by an optional floating-point accelerator (fpa) board, which contained a  mhz national semiconductor ns floating point coprocessor. 
This processor card was used in the original RT PC models, announced in January 1986. The advanced processor card had a faster clock and either memory on the processor card or external ECC memory cards, and featured a built-in Motorola floating-point processor. It could be accompanied by an optional advanced floating-point accelerator (AFPA) board, which was based around an Analog Devices floating-point multiplier and floating-point ALU; a second group of models used these cards. The enhanced advanced processor card sported a still faster clock and more on-board memory, and an enhanced advanced floating-point accelerator came standard; a third group of models used these cards. All RT PCs supported up to 16 MB of memory; early models were limited to less because of the capacity of the DRAM ICs used. I/O was provided by eight ISA bus slots. Storage was provided by an internal hard drive, upgradeable to larger capacities, and external SCSI cabinets could be used to provide more storage. Also standard were a mouse, a pixel-addressable display in one of two resolutions, and either a Token Ring network adapter or an Ethernet adapter. For running CADAM, a computer-aided design (CAD) program, an IBM graphics processor could be attached; these processors were contained in a large cabinet positioned alongside the RT PC, and the high-end model drove a high-resolution IBM display.

Academic System

Main article: IBM Personal System/2 § Academic System

The Academic System was a PS/2 model fitted with a RISC adapter card: a Micro Channel board containing a ROMP, its support ICs, and its own memory. It allowed the PS/2 to run ROMP software compiled for AOS. AOS was downloaded from an RT PC running AOS via a LAN TCP/IP interface.

Software

One of the novel aspects of the RT design was the use of a microkernel.
The keyboard, mouse, display, disk drives and network were all controlled by a microkernel, called the Virtual Resource Manager (VRM), which allowed multiple operating systems to be booted and run at the same time. One could "hotkey" from one operating system to the next using the Alt-Tab key combination; each OS in turn would get possession of the keyboard, mouse and display. Both AIX and the Pick operating system were ported to this microkernel. Pick was unique in being a unified operating system and database, and ran various accounting applications; it was popular with retail merchants and accounted for a substantial number of unit sales. The primary operating system for the RT was AIX. Much of the AIX kernel was written in a variant of the PL/I programming language, which proved troublesome during the later migration to the next major AIX version. AIX included full TCP/IP networking support, as well as SNA, and two networking file systems: NFS, licensed from Sun Microsystems, and IBM Distributed Services (DS). DS had the distinction of being built on top of SNA, and thereby being fully compatible with DS on the IBM midrange AS/400 and mainframe systems. For the graphical user interface, AIX came with successive releases of the X Window System from MIT (X10 and later X11), together with the Athena widget set. Compilers for the C and Fortran programming languages were available. Some RT PCs were also shipped with the Academic Operating System (AOS), an IBM port of 4.3BSD Unix to the RT PC. It was offered as an alternative to AIX, the usual RT PC operating system, to US universities eligible for an IBM educational discount. AOS added a few extra features to 4.3BSD, notably NFS, and an almost ANSI C-compliant C compiler. A later version of AOS existed that was derived from 4.3BSD-Reno, but it was not widely distributed. The RT forced an important stepping-stone in the development of the X Window System, when a group at Brown University ported an early version of X to the system.
Problems with reading unaligned data on the RT forced an incompatible protocol change, leading to a new version of the protocol.

Sales and market reception

The IBM RT had a varied life even from its initial announcement. Most industry watchers considered the RT "not enough power, too high a price, and too late."[citation needed] Many thought that the RT was part of IBM's personal computer line, a confusion that started with its initial name, "IBM RT PC". Initially, it seemed that even IBM thought of it as a high-end personal computer, given the stunning lack of support it received from the company. This could be explained by the sales commission structure IBM gave the system: salesmen received commissions similar to those for the sale of a PC. Typically configured models were expensive, making the system a hard sell, and the lack of any reasonable commission lost the interest of IBM's sales force.[citation needed] Both MIT's Project Athena and Brown University's Institute for Research in Information and Scholarship found the RT inferior to other computers. The performance of the RT, in comparison with other contemporaneous Unix workstations, was not outstanding. In particular, the floating-point performance was poor,[citation needed] and it was scandalized mid-life by the discovery of a bug in the floating-point square-root routine.[citation needed] With the RT system's modest processing power (when first announced), and with announcements later that year by other workstation vendors, industry analysts questioned IBM's direction. AIX for the RT was IBM's second foray into Unix (its first was PC/IX for the IBM PC). The lack of software packages and IBM's sometimes lackluster support of AIX, in addition to sometimes unusual departures from traditional, de facto Unix operating system standards, caused most software suppliers to be slow in embracing the RT and AIX.
The RT found its home mostly in the CAD/CAM and CATIA markets, with some inroads into the scientific and educational areas, especially after the announcement of AOS and substantial discounts for the educational community. The RT running the Pick OS also found use in shopping store control systems, given the strong database, accounting system and general business support in Pick. The RT also did well as an interface system between IBM's larger mainframes, due to its SNA and DS support, and some of its point-of-sale terminals, store control systems, and machine shop control systems. Only a modest number of RTs were sold over the product's lifetime, with a portion going into IBM's own development and sales organizations; Pick OS systems accounted for part of those sales. When the RT PC was introduced in January 1986, it competed with several workstations from established providers: the Apollo Computer Domain series, the DEC MicroVAX II, and the Sun Microsystems Sun-3.

As part of the NSFNET backbone

"The NSF starts to implement its T1 backbone between the supercomputing centers with RT-PCs in parallel implemented by IBM as 'parallel routers'. The T1 idea is so successful that proposals for T3 speeds in the backbone begin." (Internet history of the 1980s.) The National Science Foundation Network (NSFNET) was a forerunner of the Internet. The NSFNET's T1 backbone network used routers built from multiple RT PCs (typically nine) interconnected by a Token Ring LAN.

References

- https://www- .ibm.com/common/ssi/showdoc.wss?docurl=/common/ssi/rep_ca/ / /enus - /index.html&lang=en
- Sager, Ira. "IBM retargets tech market with RISC-based Unix system". Electronic News.
- https://www- .ibm.com/common/ssi/showdoc.wss?docurl=/common/ssi/rep_ca/ / /enus - /index.html&lang=en
- https://www- .ibm.com/common/ssi/showdoc.wss?docurl=/common/ssi/rep_ca/ / /enus - /index.html&lang=en
- Derfler, Frank J., Jr. "Is there a workstation in your future?". PC Magazine.
- Peddie, Jon. The History of Visual Magic in Computers. Springer.
- Garfinkel, Simson L. "Ripples across the academic market" (PDF). Technology Review.
- Seymour, Jim. "Marketing the IBM RT PC". PC Magazine.
- Claffy, Kimberly C.; Braun, Hans-Werner; Polyzos, George C. "Tracking long-term growth of the NSFNET". Communications of the ACM.

Further reading

- Simpson, R. O. "The IBM RT Personal Computer". Byte, extra edition.
- Hoffman, Thomas V. "A significant departure". PC Tech Journal. Contains significant technical articles about the machine, processor and architecture.
- Waters, Frank; Henry, G. Glen. IBM RT Personal Computer Technology. IBM Engineering System Products.
- Duntemann, Jeff; Pronk, Ron. Inside the PowerPC Revolution. Coriolis Group Books. A chapter describes the origins of the PowerPC architecture in the IBM 801 and RT PC.
- Ferguson, Charles H.; Morris, Charles R. Computer Wars: How the West Can Win in a Post-IBM World. Random House. Contains an in-depth description of the origins of the RT PC, its development, and subsequent commercial failure.

External links

- IBM RT PC page
- The IBM RT information page
- JMA Systems's FAQ archive
- Video of the machine in operation
- "IBM joins 32-bit fray with RT line". Computerworld.
- AOS FAQ at the Wayback Machine

This entry incorporates text from the RT/PC FAQ.
Roach Motel

From Wikipedia, the free encyclopedia

For other uses, see Roach Motel.

Product type: insect trap. Owner: Black Flag. Country: United States. Tagline: "Roaches check in, but they don't check out!"

Roach Motel is a brand of roach bait device designed to catch cockroaches. Although the term is the subject of a trademark registration by the insect-control brand Black Flag, the phrase "roach motel" has come to be used as a reference to all traps that use a scent or other form of bait to lure cockroaches into a compartment in which a sticky substance causes them to become trapped. Introduced in the late 1970s in response to the success of d-CON's roach trap, the Roach Motel quickly became a successful entrant in the industry. Within a few years, New York magazine reported, "On the strength of its whimsical packaging and an aggressive ad campaign, the Roach Motel now dominates the market, outselling the closest competition by as much as three to one in some cities."[1] Early versions of the Roach Motel used food-based bait, but later designs incorporated pheromones. The widely known tagline of the Roach Motel was, "Roaches check in, but they don't check out!" This phrase has also been applied to information technology systems such as enterprise resource planning, which readily capture transaction data but make it difficult for organizations to access, report and analyze the data stored in the system.[2] Black Flag also marketed a related insect trap, the "Flyport," designed to deal with unwanted flying insects like houseflies, with the tagline, "Lots of arrivals, but no departures!" It was less successful than the Roach Motel. "Roach Motel" is a registered United States federal trademark
, for which Black Flag claims a date of earliest use in the 1970s.[3][4]

References

1. Haller, Scot. "Checkout time at the Roach Motel". New York.
2. Vizard, Michael. "Are ERP apps really worth all the trouble?". IDG.
3. "Am. Home Products v. Johnson Chemical Co. (2nd Cir.)". Google Scholar.
4. U.S. trademark registration record.

External links

- Look up roach motel in Wiktionary, the free dictionary.
- Information at the Black Flag website.
Fedora Migration Paths and Tools Project Update: May

Posted in May by David Wilcox (duraspace.org)

This is the eighth in a series of monthly updates on the Fedora Migration Paths and Tools project; please see last month's post for a summary of the work completed up to that point. This project has been generously funded by the IMLS.

The University of Virginia has completed their data migration and successfully indexed the content into a new Fedora 6 instance deployed in AWS using the fcrepo-aws-deployer tool. They have also tested the fcrepo-migration-validator tool and provided some initial feedback to the team for improvements. Some work remains to update front-end indexes for the content in Fedora 6
, and the team will also investigate some performance issues encountered while migrating and indexing content in the Amazon AWS environment, in order to document relevant recommendations for institutions wishing to migrate to a similar environment.

Based on this work, we will be offering an initial online workshop on migrating from Fedora 3.x to Fedora 6. This workshop is free to attend with limited capacity, so please register in advance. This is a technical workshop pitched at an intermediate level: prior experience with Fedora is preferred, and participants should be comfortable using a command-line interface and Docker. The workshop will take place in June.

The Whitman College team has been busy iterating on test migrations of representative collections into a staging server using the islandora_workbench tool. The team has been making updates to the migration tool, site configuration, and documentation along the way to better support future migrations. In particular, the work the team has done to iterate on the spreadsheets until they were properly configured for ingest will be very useful to other institutions interested in following a similar path. Once the testing and validation of functional requirements is complete, we will begin the full migration into the production site.

We are nearing the end of the pilot phase of the grant, after which we will finalize a draft of the migration toolkit and share it with the community for feedback. While this toolkit will be openly available for anyone who would like to review it, we are particularly interested in working with institutions with existing Fedora 3.x repositories that would like to test the tools and documentation and provide feedback to help us improve the resources. If you would like to be more closely involved in this effort, please contact David Wilcox for more information.
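The spreadsheet iteration described above is largely a matter of getting columns and values into the shape the ingest tool expects. As a minimal sketch of that kind of pre-flight check (the required column names "id", "title", and "file", and the sample rows, are illustrative assumptions here, not details taken from the project), one might validate a metadata CSV locally before handing it to islandora_workbench:

```python
import csv
import io

# Hypothetical required columns for a workbench "create" spreadsheet;
# adjust to match your own configuration.
REQUIRED_COLUMNS = {"id", "title", "file"}

def check_metadata_csv(text):
    """Return a list of human-readable problems found in a metadata CSV."""
    problems = []
    reader = csv.DictReader(io.StringIO(text))
    missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
    if missing:
        problems.append("missing columns: %s" % ", ".join(sorted(missing)))
        return problems
    # Row 1 is the header, so data rows start at 2.
    for row_num, row in enumerate(reader, start=2):
        for col in REQUIRED_COLUMNS:
            if not (row[col] or "").strip():
                problems.append("row %d: empty %r" % (row_num, col))
    return problems

sample = "id,title,file\n001,First object,obj1.tif\n002,,obj2.tif\n"
print(check_metadata_csv(sample))
```

islandora_workbench itself provides a `--check` option that validates a configuration and input CSV against the target site, so a script like this is only a quick local sanity check before running the real validation.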
Wildcat Banking

From Wikipedia, the free encyclopedia

Period of banking in U.S. history.

[Image: notes of the Bank of Singapore, Michigan.]

Wildcat banking was the issuance of paper currency in the United States by poorly capitalized state-chartered banks. These wildcat banks existed alongside more stable state banks during the Free Banking Era, when the country had no national currency. States granted banking charters readily and applied regulations ineffectively, if at all. Bank closures and outright scams regularly occurred, leaving people with worthless money.
Operating primarily as banks of issue rather than deposit banks, wildcat banks circulated currency that was formally redeemable in gold or silver coin but practically based on other assets, such as government bonds or real estate notes. The banks were typically located outside the commercial centers, in areas where people possessed land but coins were scarce. Instead of depositing coins at the bank for credit, a client could deposit a pledge of his land and receive a loan in notes of the bank that was repayable in cash.

Forerunners

[Image: Andrew Dexter Jr., a wildcat banking pioneer.]

New England country banks

See also: Suffolk Bank and Suffolk System

The earliest example of what came to be called wildcat banking began in New England early in the nineteenth century. The banking establishment of Boston was opposed by a greater number of country banks throughout the region. Because the city banks refused to accept the country banks' currency, that currency came to dominate the commercial activity of Boston, while the city banks' own notes were paid directly back to them. Country bankers soon understood that distance from the city was an advantage, since notes that found their way to Boston did not easily return for payment. Businessman Andrew Dexter Jr. acquired interests in several of these remote banks to support his construction of a central money exchange in Boston. He borrowed extravagantly from the banks and flooded the city with newly issued notes.
These included the Farmers' Exchange Bank of Gloucester, located in the isolated village of Chepachet, Rhode Island; the Berkshire Bank, located in Pittsfield at the other end of Massachusetts; and even the Detroit Bank, which Dexter's associates had established far away in the newly organized Michigan Territory. When the scheme unraveled in 1809, the Berkshire Bank received more notes for payment in one day than the entire amount outstanding on its books. The Farmers' Exchange Bank made history as the first American bank to fail, with almost no cash on hand to pay its outstanding notes.

First Federal Bank interim

Another period of credit expansion by state banks occurred after the expiration of the First Bank of the United States in 1811, culminating in the Panic of 1819. The bank's prompt collection of state bank notes had enforced a degree of responsibility that soon faded. The burning of Washington in 1814, during the War of 1812, prompted bank runs across the Eastern Seaboard and the suspension of specie payments by state governments. City governments and every sort of business resorted to paying their expenses with notes and shinplasters, and the expansion of money could not easily be reined in after the war had ended. The urgent need to restore coins to circulation was one argument in favor of creating the Second Bank of the United States in 1816. At first the new bank accommodated the troubled state banks, and the proliferation of new banks continued. In New York, the law prohibited anyone from forming a corporation for the purpose of banking without a state charter, but did not prevent banking as a side business. By the time the legislature closed the loophole, the businesses exploiting it included aqueduct companies, turnpike companies, tavern-keepers and glass-makers. Unchartered banking associations were created in the western regions of Virginia and Pennsylvania to supply the credit needs of local settlers, as well as in Kentucky and Ohio.
A traveler in the latter states observed "much trouble with paper money" that could only lead to "penance" and the return to a smaller money stock. By that time a policy shift by the Second Bank was already underway. In response to declining crop prices, it called upon state banks for cash payment of the notes that it held. The bank's call was followed by a collapse in prices for American agricultural exports. Real estate prices plummeted amid foreclosures, businesses were ruined, and a two-year recession followed. The crisis left the bank in better financial condition and the remaining state banks more accountable, but it also left resentment of the bank's harsh approach.

Background of free banking

Jacksonian bank policy

In 1833, as part of his effort to break the political power of the Second Bank, President Andrew Jackson ordered the removal of federal funds from the bank to favored state banks, known as "pet banks". He subsequently signed the Deposit Act of 1836, which continued the federal subsidy to state banks and prevented the Secretary of the Treasury from regulating credit expansion by those banks in the manner that the Second Bank had. He also issued the Specie Circular, which required federal land sales to be paid in silver or gold coin and had the effect of drawing those coins from the coast to the developing interior. A collapse in the price of cotton led the Bank of England to limit the flow of money to the United States. This, along with the failure of domestic businesses involved in cotton production, produced the Panic of 1837 and an economic depression lasting roughly five years. Businesses, especially in the West, found it difficult to obtain the hard money to which they had been accustomed and turned to creative methods of finance. In subsequent years Democratic Party politicians continued to oppose centralized banking, and the Supreme Court ruled in Briscoe v.
Bank of Kentucky that states could issue currency only on the credit of private parties, not that of the state.

Prevalence of wildcat banks

The experience of free banking varied across the country. As a system of independent banks chartered by independent legislatures, it suffered from inconsistency, inconvenience and risk, but not every privately organized state bank was a fraudulent or reckless "wildcat." Even relatively well-run banks could fail to pay out if a drop in a state's credit devalued the bonds that secured the bank's notes, or if a crisis such as the outbreak of war shook public confidence.

Free banking in Michigan

Jacksonian Democrats in Michigan advocated for free banking rather than a state monopoly. The term "wildcat banking" arose in reference to the Michigan banking boom of the late 1830s. Promptly upon becoming a state in 1837, Michigan passed the General Banking Act, which allowed any group of landowners to organize a bank by raising a minimum amount of capital stock and depositing notes on real estate with the government as security for their bank notes. This law was unprecedented in a country where legislatures normally chartered each bank with a separate act. Although it was a regulated system in theory, the commissioners appointed to regulate the banks lacked the resources to do so effectively. A surprisingly large number of banks were established given the capital requirement, and in time several were found to have cheated the law by watering their stock with phony contributions or by passing cash from one bank to another ahead of the visiting commissioners. The banks issued currency notes that could be redeemed in specie only at rural locations, assuming cash was on hand. Commissioner Alpheus Felch recalled that one bank's "cash reserves" consisted of boxes of nails and glass topped with silver coins. Anyone who received the notes had to discount them according to their expected redemption value.
According to a contemporary newspaper report: "Michigan money is thus classed—first quality, Red Dog; second quality, Wild Cat; third quality, Catamount. Of the best quality, it is said, it takes five pecks to make a bushel." Precisely how these terms became associated with the notes is not known. A counterfeit note in the collection of Eric P. Newman that features a mountain lion has become known as the "true wild cat note," but it purports to be a note of the Catskill Bank in New York, with no apparent connection to the events in Michigan. The common explanation is that the banks located their offices in inaccessible areas where animals outnumbered people. In response to these abuses, Michigan suspended new charters under the act. It attempted to create a single closely regulated state bank modeled on the neighboring Bank of Indiana, but was unable to raise the necessary capital. States continued to experiment with banking regulation in the absence of a federal policy, while Arkansas and Iowa prohibited banks entirely.[citation needed]

Railroad banks

[Image: a note issued by the banking arm of the Erie & Kalamazoo Railroad.]

The free banking era coincided with the first phase of railroad speculation. Not only did banks sponsor railroads, but railroad companies also entered the banking business to finance their expenses. Although railroads were indeed built, spectacular failures occurred. The Ohio Railroad Company, established to build along the coast of Lake Erie, immediately used a permissive clause in its charter to begin issuing credit notes, which it redeemed from its state funding. The company's failure left several hundred thousand dollars in worthless currency and an unusable track built upon wooden pilings. Similar episodes played out in Southern states, where railroads received explicit authority to operate as banks. A railroad boom in Mississippi covered the state with speculative routes and railroad bank paper. Robert Y.
Hayne organized the South Western Railroad Bank to finance interstate routes from South Carolina to Ohio, with stringent rules to protect its capital, but it ultimately had to suspend payments on its notes when the railroad ran out of funding.[ ] One of these institutions, the Georgia Railroad and Banking Company, survived the free banking era, the Civil War and subsequent upheavals, ultimately merging with First Union in .[ ]

Late period

The Bank of Florence in present-day Omaha

The s saw a new wave of free banking laws and outbreaks of wildcat banking in Tennessee, Indiana, Wisconsin and the Nebraska Territory.[ ] The laws of Indiana and Wisconsin allowed bankers to start business with minimal capital and accepted discounted state bonds at their face value as a security deposit. A "banker" might even pay for the discounted bonds with the same notes that they backed, draw the interest on the bonds, and circulate the surplus notes as he chose.[ ][ ] Nebraska declared bank issues a crime in its first legislative session of , but the following year it granted several banking charters, including that of the Bank of Florence. The third year, a new criminal code omitted the banking provision, allowing banks to organize under general business law.[ ] The Panic of wiped out all of the territory's banks, and only one paid all of its notes.[ ]

In the federal government passed a National Bank Act that created a national currency based on federal debt. This was not another centralized system. Local private banks issued the new currency, but under uniform rules that prevented confusion about the value of each bank's notes. A heavy tax on the former state bank notes removed them from circulation, bringing the wildcat phenomenon to an end.[ ]

In popular culture

In the Swedish movie The New Land ( ), the character Robert is paid in wildcat notes, which is later discovered by his brother Karl Oskar.

Notes

^ Krause, Chester L.; Lemke, Robert F. ( ).
Standard Catalog of United States Paper Money.
^ a b Conant, Charles A. ( ). A History of Modern Banks of Issue ( th ed.). pp. –.
^ a b Kamensky, Jane ( ). The Exchange Artist.
^ Smith, Joseph Edward A. ( ). The History of Pittsfield.
^ a b Knox, pp. –.
^ McMaster, p. .
^ McMaster, pp. –.
^ Flint, James ( ). Letters from America.
^ Wilentz, Sean ( ). The Rise of American Democracy: Jefferson to Lincoln.
^ Catterall, Ralph Charles Henry ( ). The First Six Years of the Second Bank of the United States.
^ Dwyer, Gerald P. "Wildcat Banking, Banking Panics, and Free Banking in the United States". Economic Review. p. . Retrieved December .
^ Rolnick, Arthur J.; Weber, Warren E. (Fall ). "Free Banking, Wildcat Banking, and Shinplasters". Federal Reserve Bank of Minneapolis Quarterly Review. Retrieved February .
^ Dunbar, Willis F. ( ). Michigan: A History of the Wolverine State. p. .
^ a b Utley, H. M. ( ). "The Wild Cat Banking System of Michigan". Pioneer Collections.
^ Sumner, William Graham ( ). A History of Banking in the United States.
^ Catskill Bank $ contemporary counterfeit.
^ White, Horace ( ). Money and Banking. p. .
^ Cleveland, Frederick Albert; Powell, Fred Wilbur ( ). Railroad Promotion and Capitalization in the United States. pp. –.
^ Jones, Robert C. ( ). A History of Georgia Railroads.
^ Knox.
^ Knox, pp. –.
^ Knox, p. .
^ Knox, p. .
^ Knox, p. .
^ Conway, Thomas Jr.; Patterson, Ernest M. ( ). The Operation of the New Bank Act. pp. –.

References

Knox, John Jay ( ). A History of Banking in the United States.
McMaster, John Bach ( ). A History of the People of the United States.

Further reading

Allen, Larry ( ). The Encyclopedia of Money ( nd ed.). Santa Barbara, CA: ABC-CLIO. pp. –. ISBN  - .

External links

U.S. Banks and Money, Federal Reserve Bank of Atlanta.
"Wildcat Bank". Encyclopædia Britannica Online. Retrieved December .
Retrieved from "https://en.wikipedia.org/w/index.php?title=wildcat_banking&oldid= "
This page was last edited on August , at : (UTC).

pooling faq · chia-network/chia-blockchain wiki

j. eckert edited this page aug ,

general faq

how can i start pooling?
you can update by installing chia . + and following the instructions in the pooling user guide.

will i need to replot to use the official pooling protocol?
yes. anyone who wants to join a pool will need to create new k or above portable plots. this new plot format allows you to switch between pools and self-pooling with a cooldown of ~ minutes ( blocks) between each switch.
each switch between pools will require a transaction with a smart contract on the blockchain. our recommendation is to slowly replace your existing plots with portable plots one by one, so you still have a chance to win xch while you convert to all portable plots.

what is a plot nft?
a plot nft (non-fungible token) is a smart coin or token on the blockchain which allows a user to manage their membership in a pool. users can assign the plot nft to any pool they want, at any point. when plotting, a plot nft can be selected, and the plot will be tied to that plot nft forever. nfts are "non-fungible" because they are not interchangeable; each plot nft represents a unique pool contract.

will i need to pay xch to create a plot nft or switch pools?
each plot nft you create will require a minimum of mojo ( trillionth of a xch) plus a transaction fee. for switching pools, you need to pay only a transaction fee. in the first few days of pool launch on mainnet, it's likely you can use transaction fee. for those who don't have any xch, you can get mojos from chia's official faucet: https://faucet.chia.net/

can i farm with both og (original) plots and portable plots?
yes. the farmer will support both og plots and portable plots on one machine. for og plots, the . xch and . xch will both be sent to the farmer. the og plots will not be affected in any way by the plot nfts or new plots that you create.

how do i assign portable plots to a pool?
first you will create a plot nft (devs call this a singleton in their code) in the new pools tab in the gui. when you create a new portable plot, you must assign it a specific plot nft (for those using the cli, this replaces the pool public key -p with a pool contract address -c). all plots created with the same plot nft can then be assigned to a pool for farming. you can have many plot nfts on the same key.

what is the difference between a "key" and a "wallet" in the chia gui and cli?
a user can have one or more keys on a machine running chia. a key is represented by the private information ( words) and a public identifier called the fingerprint. when using the gui or the cli, you can only log in to one key at a time. each key must be synced separately, and you can check whether it is synced by clicking on the "wallet" tab. each key can also have or more wallets associated with it. the standard wallet, which controls your chia, is created by default. you can also create as many plot nfts as you want; these are also wallets, each with its own "wallet id", and they are tied to the key that you used to create them. in the cli, you use both the fingerprint and the wallet_id to perform operations on plot nfts.

how is chia pooling different from other cryptos?
chia has three major differences from most other crypto pooling protocols:
) joining pools is permissionless. you do not need to sign up for an account on a pool server before joining.
) farmers receive / th of xch rewards plus transaction fees, while the pool receives / th of xch rewards to redistribute (minus pool fees) amongst all pool participants.
) the farmer with the winning proof will farm the block, not the pool server.

how can i start my own pool?
if you have experience writing pool server code for another crypto, adapting that pool code to chia's reference pool code will be straightforward. we recommend that only people with good opsec and business experience run public pool servers. depending on what country you operate your pooling business in, you may be subject to tax, aml and kyc laws specific to your jurisdiction. all pools will be targeted by hackers due to the profitability of xch, and you may be legally liable if you have any losses.

where can i find a list of chia pools?
a crypto community site lists all upcoming chia pools: https://miningpoolstats.stream/chia

can i advertise my pool in keybase?
you can only advertise your pool in keybase @chia_network.public#pools once a day. if you're spammy, mods will warn you and then ban you if you persist.

why shouldn't i join the original hpool pool?
hpool initially created their own version of the chia client with no source code released for it. there is no telling what kind of malicious activity that client can do. chia network inc discourages everyone from joining any pool that requires custom closed-source clients. however, hpool has since opened a pool using the official pooling protocol, and you should feel free to consider that as you look at which pool to use.

why doesn't chia run their own official pool?
we want there to be a healthy ecosystem of competing pools, with no privileged official one having an unfair advantage over the others.

can i name my pool chiapool.com?
we are not going to allow pools to use "chia" as the first word or its equivalent (the chia pool). you can say things like "a chia pool", though that will probably need a free and easy-to-get license. go to https://www.chia.net/terms/ for more information on obtaining a license.

if a pool gets % of netspace, can they take over the network?
no. chia's pooling protocol is designed so that blocks are farmed by individual farmers, while the pooling rewards go to the pool operator's wallet. this ensures that even if a pool has % of netspace, it would also need to control all of the farmer nodes (with the % netspace) to do anything malicious. this will be very difficult unless all the farmers (with the % netspace) downloaded the same malicious chia client programmed by a bram-level genius.

i have more questions, where do i ask?
join our dedicated keybase room: @chia_network.public#pools. friendly reminder: do not @ (at) or direct message (dm) developers or mods. just post your questions in keybase and we will answer when we have a moment.

technical faq

where can i see the chia pool reference code?
you can find it here: https://github.com/chia-network/pool-reference. the readme contains an explanation of how it works, and the specification contains details of how to implement it.

what programming language is the reference pool code written in?
python

how hard is it to adapt chia's reference pool code to my pool code?
if you've written pool code before, the reference pool code will be easy to understand. it's just a matter of replacing pow concepts with chia's method of evaluating each farmer's participation via post, and adapting collection and distribution of xch using chia's smart contracts.

i am a programmer, but never wrote pool code. will i be able to run a pool with chia's reference pool code?
if it's your first time writing pool code, we recommend you look at established btc or eth pools' source code and the features they provide users. you are likely going to compete with big-time pool operators from those crypto communities who will provide feature-rich pools for chia on day one. examples of features: leaderboards, wallet explorer, random prizes, tiered pool fees, etc.

variable names used in pooling code
puzzle_hash: an address, but in a different format. addresses are human readable.
singleton: a smart coin (contract) that is guaranteed to be unique and controlled by the user.
launcher_id: unique id of the singleton.
points: represent the amount of farming that a farmer has done. points are calculated from the number of proofs submitted, weighted by difficulty. one k farms points per day. to accumulate points you need tib farming for a day. this is equivalent to shares in pow pools.

how does one calculate a farmer's netspace?
a farmer's netspace can be estimated by the number of points submitted over each unit of time, or points/second. each k gets on average points per day, so / = . points/second for each plot. per byte, that is l = . / = . * ^- . to calculate total space s, take the total number of points found p and the time period in seconds t, and compute s = p / (l*t).
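the estimate above can be sketched in python. note that the numeric constants here are my own assumptions (a nominal 10 points per day per k32 plot and a k32 size of roughly 101.4 gib), since the actual figures are elided on this page; only the structure s = p / (l*t) is taken from the text.

```python
# Sketch of the netspace estimate described above.
# ASSUMPTIONS (not from this page): 10 points/day per k32, k32 ~ 101.4 GiB.
POINTS_PER_DAY_PER_K32 = 10
K32_BYTES = 101.4 * 2**30
SECONDS_PER_DAY = 86400

# L: expected points per second per byte of plotted space
L = POINTS_PER_DAY_PER_K32 / SECONDS_PER_DAY / K32_BYTES

def estimated_netspace(points: float, seconds: float) -> float:
    """Estimate a farmer's space in bytes from points earned over a period:
    s = p / (L * t), as in the FAQ text."""
    return points / (L * seconds)

def partials_per_day(points_per_day: float, difficulty: float) -> float:
    """Expected partials per day at a given pool difficulty (assumed relation:
    partials = points / difficulty, per the difficulty discussion below)."""
    return points_per_day / difficulty

# One plot's nominal daily points should estimate back to about one plot of space.
space = estimated_netspace(POINTS_PER_DAY_PER_K32, SECONDS_PER_DAY)
print(space / 2**40)  # estimated size in TiB
```

with these assumed constants, a farmer submitting one plot's worth of points over a day estimates back to one plot's worth of space; the pool only ever sees points and time, never the plots themselves.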
for example, for points in hours, use p= , t= , l= . e- : s = /( * . e- ) = bytes. dividing by ^ we get . tib.

how does difficulty affect a farmer's netspace calculation?
as difficulty goes up, a farmer does fewer lookups and finds fewer proofs, but does not receive more points per unit of time. imagine this scenario: obtaining proofs a day with difficulty for a k is equivalent to obtaining proof a day with difficulty . as a pool server, you prefer to receive proof a day per k with difficulty . this is why we allow pool servers to set a minimum difficulty level, to reduce the number of proofs each farmer needs to send to prove their netspace.

how do you identify the farmer that submitted partial proofs?
the farmer will provide their launcher_id, which is the id of that farmer's pool group. the pool also verifies the proof of space and the farmer's signature, to make sure that only real farmers are compensated.

will pool servers need to keep track of all farmers and their share of rewards?
yes, the pool operator will need to write code to keep track of all farmers and their share of rewards. chia's pool protocol assumes no registration is needed to join a pool, so every launcher_id that submits a valid partial proof needs to be tracked by the pool server.

what actions can a singleton take?
there are a few things you can do to the singleton:
change pool (needs owner signature)
escape pool, which announces that you will change pool (needs owner signature)
claim rewards (does not need any signature; the rewards go to the specified address in the singleton)

how do pools collect rewards?
when a farmer joins a pool, they assign their singleton to the pool_puzzle_hash. when a farmer wins a block, the pool rewards are sent to the p _singleton_puzzle_hash. the pool scans the blockchain to find new rewards sent to farmers' singletons, then sends a request to claim rewards to the winning farmer's singleton. the farmer's singleton then sends the pool rewards xch to pool_puzzle_hash.
the pool will periodically distribute rewards to farmers that have points.

how can i tell if the server is receiving enough partials from a particular client?
the number of partials received is the only thing the pool is aware of; the pool does not know the exact total space of the farmer. the space can be computed using the fact that each k plot will earn on average points a day on mainnet. that means if the difficulty is set to , that's partials per day; if the difficulty is , that's partial per day per k plot.

why am i receiving more points on testnet than on mainnet?
the points per day per k plot only applies to mainnet, which has a difficulty_constant_factor of ^ . to get the points per day per k on testnet, divide ^ by the testnet difficulty_constant_factor, found in config.yaml, and multiply by . this allows participating easily with k s on testnet.

what is the expected ratio between a k and a k ?
look at the file win_simulation.py in this repo. it uses the function _expected_plot_size from chia-blockchain, which uses the formula (( * k) + ) * ( ** (k - )) to compute plot size. plug in your k values and divide.

how do i calculate how many partials with x difficulty a certain plot with y size can get in z time?
look at the win_simulation.py file.

can i use testnet pooling plots on mainnet?
no, you can only use plots created for mainnet on mainnet, and the same goes for testnet.

does that mean that forks of chia cannot use these pooling plots?
forks of chia can easily use these pooling plots by sending the . xch to the farmer target address, making them all solo plots. if the alternate blockchain wants to do pooling as well, it needs to create a special transaction which reserves a singleton by providing the launcher_id and launcher spend (including owner signature). then the code can automatically assign this singleton to the user who submitted it.

does the pooling system support all of the various payment methods used in other blockchain pools?
generally speaking, yes, because the payment system is managed by the pool operator, not the blockchain. we recommend that pool operators new to this space opt for something less risky, such as pplns; however, on a technical level you can use any payment system you want, as the code to do so is managed on your side (this assumes you are extensively adding on to the reference code, or building your own from the ground up, as is suggested). however, if you want to opt for something like fpps/pps, you need to be aware that a "dead weight" attack (which is possible on other chain protocols as well) can be executed in chia by a malicious actor willing to sacrifice large sums of revenue in order to harm your pool's variance against its standardized payout plan, potentially running you into the red. it is for this reason that we advise against fpps/pps systems unless you have extensive experience running such pools and know how to build mitigations to help ensure your stability against variance.

what are the api methods a pool server needs to support chia clients?
there are a few api methods that a pool needs to support. they are documented here: https://github.com/chia-network/pool-reference/blob/main/specification.md

where can i see the video technical q&a on chia pooling?
for those interested in the chia pools for pool operators video and presentation, you can find them here:
https://youtu.be/xzszwxowpzw
https://www.chia.net/assets/presentations/ - - _pooling_for_pool_operators.pdf
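for reference, the _expected_plot_size formula quoted above can be sketched in python. the digits of the formula are missing from this page, so the constants below are my reconstruction of chia-blockchain's _expected_plot_size and should be treated as an assumption:

```python
def expected_plot_size(k: int) -> int:
    # ASSUMPTION: reconstruction of chia's _expected_plot_size formula,
    # ((2 * k) + 1) * (2 ** (k - 1)); the digits are elided in the FAQ text.
    return ((2 * k) + 1) * (2 ** (k - 1))

def size_ratio(k_big: int, k_small: int) -> float:
    """Expected size ratio between two k-sizes, e.g. k33 vs k32,
    obtained by plugging in the k values and dividing, as the FAQ suggests."""
    return expected_plot_size(k_big) / expected_plot_size(k_small)

# Under this formula, stepping up one k-size slightly more than doubles the size.
print(size_ratio(33, 32))
```

the win_simulation.py file mentioned above is the authoritative source for these numbers; this sketch only illustrates the shape of the calculation.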
divd- - - kaseya vsa limited disclosure | divd csirt
our reference: divd- -
case lead: frank breedijk
author: lennaert oudshoorn
researcher(s): wietse boonstra, lennaert oudshoorn, victor gevers, frank breedijk, hidde smit
cve(s): cve- - , cve- - , cve- - , cve- - , cve- - , cve- - , cve- -
product: kaseya vsa
versions: all on-premise kaseya vsa versions
recommendation: all on-premises vsa servers should continue to remain offline until further instructions from kaseya about when it is safe to restore operations. a patch will need to be installed prior to restarting the vsa, along with a set of recommendations on how to increase your security posture.
status: open

summary

one of our researchers found multiple vulnerabilities in kaseya vsa, which we were in the process of disclosing to kaseya under coordinated vulnerability disclosure (also known as responsible disclosure). before all of these vulnerabilities could be patched, a ransomware attack was carried out using kaseya vsa. ever since we released the news that we had indeed notified kaseya of a vulnerability used in the ransomware attack, we have been getting requests to release details about these vulnerabilities and the disclosure timeline. in line with the guidelines for coordinated vulnerability disclosure, we have not disclosed any details so far.
and, while we feel it is time to be more open about this process and our decisions regarding this matter, we will still not release the full details.

the vulnerabilities

we notified kaseya of the following vulnerabilities:
cve- - : a credentials leak and business logic flaw, to be included in . .
cve- - : an sql injection vulnerability, resolved in the may th patch.
cve- - : a remote code execution vulnerability, resolved in the april th patch (v . . ).
cve- - : a cross-site scripting vulnerability, to be included in . .
cve- - : a fa bypass, to be resolved in v . .
cve- - : a local file inclusion vulnerability, resolved in the may th patch.
cve- - : an xml external entity vulnerability, resolved in the may th patch.

what you can do

all on-premises vsa servers should continue to remain offline until further instructions from kaseya about when it is safe to restore operations. a patch will need to be installed prior to restarting the vsa, along with a set of recommendations on how to increase your security posture. kaseya has released a detection tool to help determine whether a system has been compromised. cado security has made a github repository with resources for dfir professionals responding to the revil ransomware kaseya supply chain attack. we recommend that any kaseya server is carefully checked for signs of compromise before taking it back into service, including, but not limited to, the iocs published by kaseya.

what we are doing

the dutch institute for vulnerability disclosure (divd) performs a daily scan to detect vulnerable kaseya vsa servers and notifies the owners directly or via the known abuse channels, gov-certs, and other trusted channels. we identify these servers by downloading the paths ‘/’, ‘/api/v . /cw/environment’ and ‘/install/kaseyalatestversion.xml’ and matching patterns in these files.
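this kind of fingerprinting (fetch a handful of known paths, then match expected patterns in the responses) can be sketched as below. the path-to-pattern table is hypothetical: the advisory does not publish the patterns divd matches, and one of its paths is partially elided above, so only paths given verbatim are used here.

```python
import urllib.request

# HYPOTHETICAL signature table: paths are taken from the advisory where given
# verbatim; the byte patterns are placeholders, not DIVD's real signatures.
SIGNATURES = {
    "/": b"Kaseya",
    "/install/kaseyalatestversion.xml": b"<version>",
}

def matches_signatures(pages: dict, signatures: dict) -> bool:
    """Return True only if every path was fetched and contains its pattern."""
    return all(
        path in pages and pattern in pages[path]
        for path, pattern in signatures.items()
    )

def fetch_pages(base_url: str, paths, timeout: float = 5.0) -> dict:
    """Download each path from a host; unreachable pages are simply omitted."""
    pages = {}
    for path in paths:
        try:
            with urllib.request.urlopen(base_url + path, timeout=timeout) as resp:
                pages[path] = resp.read()
        except OSError:
            pass  # connection refused, timeout, HTTP error, etc.
    return pages

# usage (requires network access):
# pages = fetch_pages("https://example-host", SIGNATURES)
# print(matches_signatures(pages, SIGNATURES))
```

separating the fetch from the match keeps the matching logic testable offline, and the all-paths-must-match rule keeps false positives down when a host serves generic error pages.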
in the past few days we have been working with kaseya to make sure customers turn off their systems, by tipping them off about customers that still have systems online, and we hope to be able to continue to work together to ensure that their patch is installed everywhere.

timeline

- apr: research starts.
- apr: divd starts scanning internet-facing implementations.
- apr: start of the identification of possible victims (with internet-facing systems).
- apr: kaseya informed.
- apr: vendor starts issuing patches: v . . , resolving cve- - .
- may: vendor issues another patch: v . . , resolving cve- - , cve- - and cve- - .
- jun: divd csirt hands over a list of identified kaseya vsa hosts to kaseya.
- jun: . . released on saas, resolving cve- - and cve- - .
- jul: divd responds to the ransomware by scanning for kaseya vsa instances reachable via the internet and sending out notifications to network owners.
- jul: limited publication (after months).

more information

official advisory from kaseya; doublepulsar blog post; sophos blog post; cisa-fbi guidance for msps and their customers affected by the kaseya vsa supply-chain ransomware attack.

mxadm: a small cli matrix room admin tool

i’ve enjoyed learning rust (the programming language) recently, but having only really used it for solving programming puzzles, i’ve been looking for an excuse to use it for something more practical. at the same time, i’ve been using and learning about matrix (the chat/messaging platform), and running some small rooms there i’ve been a bit frustrated that some pretty common admin tasks don’t have a good user interface in any of the available clients. so… i decided to write a little command-line tool to do a few simple tasks, and it’s now released as mxadm! it’s on crates.io, so if you have rust and cargo available, installing it is as simple as running cargo install mxadm.
i’ve only taught it to do a few things so far:

- list your joined rooms
- add/delete a room alias
- tombstone a room (i.e. redirect it to a new room)

i’ll add more as i need them, and i’m open to suggestions too. it uses matrix-rust-sdk, the matrix client-server sdk for rust, which is built on the lower-level ruma library, along with anyhow for error handling. the folks in the #matrix-rust-sdk:matrix.org room have been particularly kind in helping me get started using it. more details:

- source code on tildegit
- mxadm on crates.io
- mxadm on lib.rs

suggestions, code reviews and pull requests are all welcome, though it will probably take me a while to act on them. enjoy!

comments are back

i forgot to mention it at the time, but i’ve added “normal” comments back to the site, as you’ll see below and on most other pages. in place of the disqus comments i had before, i’m now using cactus comments, which is open source and self-hostable (though i’m currently not doing that myself). if you’ve read my previous post about matrix self-hosting, you might be interested to know that cactus uses matrix rooms for data storage and synchronisation, and i can moderate and reply to comments directly from my matrix client. i’ve still left the webmention code in place too, so you can still comment that way either from your own site or via social media.

intro to the fediverse

wow, it turns out to be years since i wrote this beginners guide to twitter. things have moved on a loooooong way since then. far from being the interesting, disruptive technology it was back then, twitter has become part of the mainstream, the establishment. almost everyone and everything is on twitter now, which has both pros and cons. so what’s the problem? it’s now possible to follow all sorts of useful information feeds, from live updates on transport delays to your favourite sports team’s play-by-play performance to an almost infinite number of cat pictures.
in my professional life it’s almost guaranteed that anyone i meet will be on twitter, meaning that i can contact them to follow up at a later date without having to exchange contact details (and they have options to block me if they don’t like that). on the other hand, a medium where everyone’s opinion is equally valid regardless of knowledge or life experience has turned some parts of the internet into a toxic swamp of hatred and vitriol. it’s easier than ever to forget that we have more common ground with any random stranger than we have differences, and that’s led to some truly awful acts and a poisonous political arena. part of the problem here is that each of the social media platforms is controlled by a single entity with almost no accountability to anyone other than its shareholders. technological change has been so rapid that the regulatory regime has no idea how to handle them, leaving them largely free to operate however they want. this has led to a whole heap of nasty consequences that many other people have documented much better than i could (shoshana zuboff’s book the age of surveillance capitalism is a good example). what i’m going to focus on instead are some possible alternatives. if you accept the above argument, one obvious solution is to break up the effective monopoly enjoyed by facebook, twitter et al. we need to retain the wonderful affordances of social media but democratise control of it, so that it can never be dominated by a small number of overly powerful players. what’s the solution? there’s actually a thing that already exists, that almost everyone is familiar with, and that already works like this: email. there are a hundred thousand email servers, but my email can always find your inbox if i know your address, because that address identifies both you and the email service you use, and all those servers communicate using the same protocol, simple mail transfer protocol (smtp).
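the key property here, that a federated address names both the user and their server, can be made concrete with a tiny sketch. it parses an email address and a matrix user id; the function name and shapes handled are my own illustration, not from any library:

```python
def parse_federated_id(identifier: str) -> tuple[str, str]:
    """split a federated identifier into (user, server).

    handles two common shapes:
      - email:  user@example.com
      - matrix: @user:example.org
    """
    if identifier.startswith("@") and ":" in identifier:
        # matrix-style id: strip leading '@', split on the first ':'
        user, _, server = identifier[1:].partition(":")
    elif "@" in identifier:
        # email-style address: split on the first '@'
        user, _, server = identifier.partition("@")
    else:
        raise ValueError(f"not a federated identifier: {identifier!r}")
    return user, server

# either address tells any server exactly where to deliver the message:
print(parse_federated_id("alice@example.com"))   # ('alice', 'example.com')
print(parse_federated_id("@alice:example.org"))  # ('alice', 'example.org')
```

this is exactly why there is no lock-in: the routing information travels with the identifier, so any compliant server can deliver to any other.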
i can’t send a message to your twitter from my facebook though, because they’re completely incompatible, like oil and water. facebook has no idea how to talk to twitter and vice versa (and the companies that control them have zero interest in such interoperability anyway). just like email, a federated social media service like mastodon allows you to use any compatible server, or even run your own, and follow accounts on your home server or anywhere else, even on servers running different software, as long as they use the same activitypub protocol. there’s no lock-in, because you can move to another server any time you like and interact with all the same people from your new home, just like changing your email address. smaller servers mean that no one server ends up with enough power to take over and control everything, as the social media giants do with their own platforms. but at the same time, a small server with a small moderator team can enforce local policy much more easily and block accounts or whole servers that host trolls, nazis or other poisonous people. how do i try it? i have no problem with anyone choosing to continue to use what we’re already calling “traditional” social media; frankly, facebook and twitter are still useful for me to keep in touch with a lot of my friends. however, i do think it’s useful to know some of the alternatives, if only to make a more informed decision to stick with your current choices. most of these services only ask for an email address when you sign up, and use of your real name vs a pseudonym is entirely optional, so there’s not really any risk in signing up and giving one a try. that said, make sure you take sensible precautions like not reusing a password from another account.
instead of…                         try…
twitter, facebook                   mastodon, pleroma, misskey
slack, discord, irc                 matrix
whatsapp, fb messenger, telegram    also matrix
instagram, flickr                   pixelfed
youtube                             peertube
the web                             interplanetary file system (ipfs)

(smtp, if you can believe it, was formalised nearly years ago and has only had fairly minor changes since then!)

collaborations workshop : collaborative ideas & hackday

my last post covered the more “traditional” lectures-and-panel-sessions approach of the first half of the ssi collaborations workshop. the rest of the workshop was much more interactive, consisting of a discussion session, a collaborative ideas session, and a whole-day hackathon! the discussion session on day one had us choose a topic (from a list of topics proposed in the run-up to the workshop) and join a breakout room for that topic, with the aim of producing a “speed blog” by the end of minutes. those speed blogs will be published on the ssi blog over the coming weeks, so i won’t go into them in more detail here. the collaborative ideas session is a way of generating hackday ideas by putting people together at random into small groups, with each person raising a topic of interest to them before the group discusses and comes up with a combined idea for a hackday project. because of the serendipitous nature of the groupings, it’s a really good way of generating new ideas from unexpected combinations of individual interests. after that, all the ideas from the session, along with a few others proposed by various participants, were pitched as ideas for the hackday, and people started to form teams. not every idea pitched gets worked on during the hackday, but in the end teams of roughly equal size formed to spend the third day working together.

my team’s project: “aha! an arts & humanities adventure”

there’s a lot of fomo around choosing which team to join for an event like this: there were so many good ideas and i wanted to work on several of them!
in the end i settled on a team developing an escape room concept to help arts & humanities scholars understand the benefits of working with research software engineers for their research. five of us rapidly mapped out an example storyline for an escape room, got a website set up with github and populated it with the first few stages of the game. we decided to focus on a story that would help the reader get to grips with what an api is, and i’m amazed how much we managed to get done in less than a day’s work! you can try playing through the escape room (so far) yourself on the web, or take a look at the github repository, which contains the source of the website along with a list of outstanding tasks to work on if you’re interested in contributing. i’m not sure yet whether this project has enough momentum to keep going, but it was a really valuable way both of getting to know (and building trust with) some new people, and of demonstrating that the concept is worth more work.

other projects

here’s a brief rundown of the other projects worked on by teams on the day.

coding confessions: everyone starts somewhere and everyone cuts corners from time to time. real developers copy and paste! fight imposter syndrome by looking through some of these confessions or contributing your own. https://coding-confessions.github.io/

carpenpi: a template to set up a raspberry pi with everything you need to run a carpentries (https://carpentries.org/) data science/software engineering workshop in a remote location without internet access.
https://github.com/carpenpi/docs/wiki

research dugnads: a guide to running an event that brings a research group or team together to share knowledge, pass on skills, tidy and review code, and practise other software and working habits (based on the norwegian concept of a dugnad, a form of “voluntary work done together with other people”). https://research-dugnads.github.io/dugnads-hq/

collaborations workshop ideas: a meta-project to collect together pitches and ideas from previous collaborations workshop conferences and hackdays, to analyse patterns and revisit ideas whose time might now have come. https://github.com/robintw/cw-ideas

howdescribedis: integrate existing tools to improve the machine-readable metadata attached to open research projects, building on projects like somef, codemeta.json and howfairis (https://howfairis.readthedocs.io/en/latest/index.html). complete with ci and badges! https://github.com/knowledgecaptureanddiscovery/somef-github-action

software end-of-project plans: develop a template to plan and communicate what will happen when the fixed-term project funding for your research software ends. will maintenance continue? when will the project sunset? who owns the ip? https://github.com/elichad/software-twilight

habeas corpus: a corpus of machine-readable data about software used in covid- related research, based on the cord dataset. https://github.com/softwaresaved/habeas-corpus

credit-all: extend the all-contributors github bot (https://allcontributors.org/) to include rich information about research project contributions, such as the casrai contributor roles taxonomy (https://casrai.org/credit/). https://github.com/dokempf/credit-all

i’m excited to see so many metadata-related projects! i plan to take a closer look at what the habeas corpus, credit-all and howdescribedis teams did when i get time. i also really want to try running a dugnad with my team or for the glam data science network.
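for context on the metadata projects above: a codemeta.json file (one of the sources the howdescribedis project builds on) is a json-ld description of a software project using schema.org-derived terms. a minimal sketch might look like the following; all the field values here are invented for illustration, only the property names come from the codemeta vocabulary:

```json
{
  "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
  "@type": "SoftwareSourceCode",
  "name": "example-tool",
  "description": "an invented example project used for illustration",
  "license": "https://spdx.org/licenses/MIT",
  "codeRepository": "https://example.org/example/example-tool",
  "programmingLanguage": "Python",
  "author": [
    {
      "@type": "Person",
      "givenName": "Ada",
      "familyName": "Example"
    }
  ]
}
```

because the terms are machine-readable and standardised, tools like howfairis or somef can harvest them automatically, which is exactly the integration the hackday team was exploring.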
collaborations workshop : talks & panel session

i’ve just finished attending (online) the three days of this year’s ssi collaborations workshop (cw for short), and once again it’s been a brilliant experience, as well as mentally exhausting, so i thought i’d better get a summary down while it’s still fresh in my mind. collaborations workshop is, as the name suggests, much more focused on facilitating collaborations than a typical conference, and has settled into a structure that starts off with longer keynotes and lectures, and progressively gets more interactive, culminating with a hack day on the third day. that’s a lot to write about, so for this post i’ll focus on the talks and panel session, and follow up with another post about the collaborative bits. i’ll also probably need to come back and add in more links to bits and pieces once slides and the “official” summary of the event become available.

updates - - : added links to recordings of keynotes and panel sessions.

provocations

the first day began with two keynotes on this year’s main themes: fair research software and diversity & inclusion, and day had a great panel session focused on disability. all three were streamed live and the recordings remain available on youtube: view the keynotes recording (google-free alternative link); view the panel session recording (google-free alternative link).

fair research software

dr michelle barker, director of the research software alliance, spoke on the challenges to recognition of software as part of the scholarly record: software is not often cited. the fair rs working group has been set up to investigate and create guidance on how the fair principles for data can be adapted to research software as well; as they stand, the principles are not ideally suited to software. this work will only be the beginning though, as we will also need metrics, training, career paths and much more. resa itself has focus areas: people, policy and infrastructure.
if you’re interested in getting more involved in this, you can join the resa email list.

equality, diversity & inclusion: how to go about it

dr chonnettia jones, vice president of research, michael smith foundation for health research, spoke extensively and persuasively on the need for equality, diversity & inclusion (edi) initiatives within research, as there is abundant robust evidence that all research outcomes are improved by them. she highlighted the difficulties current approaches to edi have in effecting structural change, and in changing not just individual behaviours but the cultures & practices that perpetuate iniquity. while initiatives are often constructed around making up for individual deficits, a better framing is to start from an understanding of individuals as having equal stature but different lived experiences. commenting on the current focus on “research excellence”, she pointed out that the hyper-competition this promotes is deeply unhealthy, suggesting instead that true excellence requires diversity, and that we should focus on an inclusive excellence driven by inclusive leadership.

equality, diversity & inclusion: disability issues

day ’s edi panel session brought together five disabled academics to discuss the problems of disability in research:

- dr becca wilson, ukri innovation fellow, institute of population health science, university of liverpool (chair)
- phoenix c s andrews (phd student, information studies, university of sheffield and freelance writer)
- dr ella gale (research associate and machine learning subject specialist, school of chemistry, university of bristol)
- prof robert stevens (professor and head of department of computer science, university of manchester)
- dr robin wilson (freelance data scientist and ssi fellow)

nb. the discussion flowed quite freely, so the following summary mixes up input from all the panel members.
researchers are often assumed to be single-minded in following their research calling, and aptness for jobs is often partly judged on “time served”, which disadvantages any disabled person who has been forced to take a career break. on top of this, disabled people are often time-poor because of the extra time needed to manage their condition, leaving them with less “output” to show for their time served on many common metrics. this can particularly affect early-career researchers, since resources for these are often restricted on a “years-since-phd” criterion. time poverty also makes funding with short deadlines that much harder to apply for. employers add more demands right from the start: new starters are typically expected to complete a health and safety form, generally a brief affair that will suddenly become an -page bureaucratic nightmare if you tick the box declaring a disability. many employers claim to be inclusive yet utterly fail to understand the needs of their disabled staff. wheelchairs are liberating for those who use them (despite the awful but common phrase “wheelchair-bound”), and yet employers will refuse to insure a wheelchair while travelling for work, classifying it as a “high value personal item” for which the owner should take the same responsibility as an expensive camera. computers open up the world for blind people in a way that was never possible without them, but it’s not unusual for mandatory training to be inaccessible to screen readers. some of these barriers can be overcome, but doing so takes yet more time that could and should be spent on more important work. what can we do about it? academia works on patronage whether we like it or not, so be the person who supports people who are different to you, rather than only mentoring the one you “recognise yourself in”. as a manager, it’s important to ask each individual what they need and believe them: they are the expert in their own condition and their lived experience of it.
don’t assume that because someone else in your organisation with the same disability needs one set of accommodations, it’s invalid for your staff member to require something totally different. and remember: disability is unusual as a protected characteristic in that anyone can acquire it at any time, without warning!

lightning talks

lightning talk sessions are always tricky to summarise, and while this doesn’t do them justice, here are a few highlights from my notes.

data & metadata: malin sandstrom talked about a much-needed refinement of contributor role taxonomies for scientific computing, and stephan druskat showcased a project to crowdsource a corpus of research software for further analysis.

learning & teaching/community: matthew bluteau introduced the concept of the “coding dojo” as a way to enhance a community of practice: a group of coders gets together to practice & learn by working together to solve a problem, explaining their work as they go. he described the following models: a code jam, where people work in small groups, and the randori method, where people do pair programming while the rest observe. i’m excited to try this out! steve crouch talked about intermediate skills and helping people take the next step, which i’m also very interested in with the glam data science network. esther plomp recounted her experience of running multiple carpentry workshops online, while diego alonso alvarez discussed planned workshops on making research software more usable with guis. shoaib sufi showcased the ssi’s new event organising guide, and caroline jay reported on a diary study into autonomy & agency in rse work during covid (lopez, t., jay, c., wermelinger, m., & sharp, h. ( ). how has the covid- pandemic affected working conditions for research software engineers? unpublished manuscript).

wrapping up

that’s not everything! but this post is getting pretty long, so i’ll wrap up for now.
i’ll try to follow up soon with a summary of the “collaborative” part of collaborations workshop: the idea-generating sessions and hackday! time for a new look... i’ve decided to try switching this website back to using hugo to manage the content and generate the static html pages. i’ve been on the python-based nikola for a few years now, but recently i’ve been finding it quite slow, and very confusing to understand how to do certain things. i used hugo recently for the glam data science network website and found it had come on a lot since the last time i was using it, so i thought i’d give it another go, and redesign this site to be a bit more minimal at the same time. the theme is still a work in progress so it’ll probably look a bit rough around the edges for a while, but i think i’m happy enough to publish it now. when i get round to it i might publish some more detailed thoughts on the design. ideas for accessible communications the disability support network at work recently ran a survey on “accessible communications”, to develop guidance on how to make communications (especially internal staff comms) more accessible to everyone. i grabbed a copy of my submission because i thought it would be useful to share more widely, so here it is. please note that these are based on my own experiences only. i am in no way suggesting that these are the only things you would need to do to ensure your communications are fully accessible. they’re just some things to keep in mind. policies/procedures/guidance can be stressful to use if anything is vague or inconsistent, or if it looks like there might be more information implied than is explicitly given (a common cause of this is use of jargon in e.g. hr policies). emails relating to these policies have similar problems, made worse because they tend to be very brief. online meetings can be very helpful, but can also be exhausting, especially if there are too many people, or not enough structure. 
larger meetings & webinars without agendas (or where the agenda is ignored, or timings are allowed to drift without acknowledgement) are very stressful, as are those without enough structure to ensure fair opportunities to contribute.

written reference documents and communications should:

- be carefully checked for consistency and clarity
- have all key points explicitly stated
- explicitly acknowledge the need for flexibility where it is necessary, rather than implying or hinting at it
- clearly define jargon & acronyms where they are necessary to the point being made, and avoid them otherwise
- include links to longer, more explicit versions where space is tight
- provide clear bullet-point summaries with links to the details

online meetings should:

- include sufficient break time (at least minutes out of every hour), and not allow this to be compromised just because a speaker has misjudged the length of their talk
- include initial “settling-in” time in agendas, to avoid timing getting messed up from the start
- ensure the agenda is stuck to, or that divergence from it is acknowledged explicitly by the chair and updated timings briefly discussed, so everyone is clear
- establish a norm for participation at the start of the meeting and stick to it, e.g.
ask people to raise hands when they have a point to make, or have specific time for round-robin contributions

- ensure quiet/introverted people have space to contribute, but don’t force them to do so if they have nothing to add at the time
- offer a text-based alternative to contributing verbally, if appropriate, at the start of the meeting
- assign specific roles: a gatekeeper, who ensures everyone has a chance to contribute; a timekeeper, who ensures the meeting runs to time; and a scribe, who ensures a consistent record of the meeting
- be chaired by someone with the confidence to enforce the above; offer training to all staff on chairing meetings, to ensure everyone has the skills to run a meeting effectively

matrix self-hosting

i started running my own matrix server a little while ago. matrix is something rather cool: a chat system similar to irc or slack, but open and federated. open in that the standard is available for anyone to view, and the reference implementations of server and client are open source, along with many other clients and a couple of nascent alternative servers. federated in that, like email, it doesn’t matter what server you sign up with; you can talk to users on your own or any other server. i decided to host my own for three reasons. firstly, to see if i could, and to learn from it. secondly, to try and rationalise the cambrian explosion of slack teams i was being added to in . thirdly, to take some control of the loss of access to historical messages in some communities that rely on slack (especially the carpentries and rse communities). since then, i’ve also added a fourth goal: taking advantage of various bridges to bring other messaging networks i use (such as signal and telegram) into a consistent ui. i’ve also found that my use of matrix-only rooms has grown as more individuals & communities have adopted the platform. so, i really like matrix and i use it daily. my problem now is whether to keep self-hosting.
synapse, the only full server implementation at the moment, is really heavy on memory, so i’ve ended up running it on a much bigger server than i thought i’d need, which seems overkill for a single-user instance. so now i have to make a decision about whether it’s worth keeping going, or shutting it down and going back to matrix.org, or setting up an account on one of the other servers that have sprung up in the last couple of years. there are a couple of other considerations here. firstly, synapse resource usage is entirely down to the size of the rooms joined by users of the homeserver, not directly the number of users. so if users have mostly overlapping interests, and thus keep to the same rooms, you can support quite a large community without significant extra resource usage. secondly, there are a couple of alternative server implementations in development specifically addressing this issue for small servers: dendrite and conduit. neither is quite ready for what i want yet, but they’re getting close, and when ready they will allow running small homeservers with much more sensible resource usage. so i could start opening up the server to other users, and at least justify its size that way. i wouldn’t ever want to make it a paid-for service, but perhaps people might be willing to make occasional donations towards running costs. that still leaves me with the question of whether i’m comfortable running a service that others may come to rely on, or being responsible for the safety of their information. i could also hold out for dendrite or conduit to mature enough that i’m ready to try them, which might not be more than a few months off. hmm, seems like i’ve convinced myself to stick with it for now, and we’ll see how it goes. in the meantime, if you know me and you want to try it out, let me know and i might risk setting you up with an account!

what do you miss least about pre-lockdown life?

@janethughes on twitter: what do you miss the least from pre-lockdown life?
i absolutely do not miss wandering around the office looking for a meeting room for a confidential call or if i hadn’t managed to book a room in advance. let’s never return to that joyless frustration, hey? : am · feb , after seeing terence eden taking janet hughes' tweet from earlier this month as a writing prompt, i thought i might do the same. the first thing that leaps to my mind is commuting. at various points in my life i’ve spent between one and three hours a day travelling to and from work and i’ve never more than tolerated it at best. it steals time from your day, and societal norms dictate that it’s your leisure & self-care time that must be sacrificed. longer commutes allow more time to get into a book or podcast, especially if not driving, but i’d rather have that time at home rather than trying to be comfortable in a train seat designed for some mythical average man shaped nothing like me! the other thing i don’t miss is the colds and flu! before the pandemic, british culture encouraged working even when ill, which meant constantly coming into contact with people carrying low-grade viruses. i’m not immunocompromised but some allergies and residue of being asthmatic as a child meant that i would get sick - times a year. a pleasant side-effect of the covid precautions we’re all taking is that i haven’t been sick for over months now, which is amazing! finally, i don’t miss having so little control over my environment. one of the things that working from home has made clear is that there are certain unavoidable aspects of working in my shared office that cause me sensory stress, and that are completely unrelated to my work. working (or trying to work) next to a noisy automatic scanner; trying to find a light level that works for different people doing different tasks; lacking somewhere quiet and still to eat lunch and recover from a morning of meetings or the constant vaguely-distracting bustle of a large shared office. it all takes energy. 
although it’s partly been replaced by the new stress of living through a global pandemic, that old stress was a constant drain on my productivity and mood, one that had been growing throughout my career as i moved (ironically, given the common assumption that seniority leads to more privacy) into larger and larger open plan offices.

remarkable blogging

and the handwritten blog saga continues, as i’ve just received my new remarkable tablet, which is designed for reading, writing and nothing else. it uses a super-responsive e-ink display, and writing on it with a stylus is a dream. it has a slightly rough texture with just a bit of friction that makes my writing come out a lot more legibly than on a slippery glass touchscreen. if that was all there was to it, i might not have wasted my money, but it turns out that it runs on linux and the makers have wisely decided not to lock it down, but to give you full root access. yes, you read that right: root access. it presents as an ethernet device over usb, so you can ssh in with a password found in the settings and have full control over your own device. what a novel concept. this fact alone has meant it’s built a small yet devoted community of users who have come up with some clever ways of extending its functionality; in fact, many of these are listed on this github repository. finally, from what i’ve seen so far, the handwriting recognition is impressive to say the least. this post was written on it and needed only a little editing. i think this is a device that will get a lot of use!

glam data science network fellow travellers

updates - - : thanks to gene @dzshuniper@ausglam.space for suggesting adho and a better attribution for the opening quote (see comments & webmentions for details).

“if you want to go fast, go alone. if you want to go far, go together.” — african proverb, probably popularised in english by kenyan church leader rev.
samuel kobia (original). this quote is a popular one in the carpentries community, and i interpret it in this context to mean that a group of people working together is more sustainable than individuals pursuing the same goal independently. that’s something that speaks to me, and that i want to make sure is reflected in nurturing this new community for data science in galleries, archives, libraries & museums (glam). to succeed, this work needs to be complementary and collaborative, rather than competitive, so i want to acknowledge a range of other networks & organisations whose activities complement it. the rest of this article is an unavoidably incomplete list of other relevant organisations whose efforts should be acknowledged and potentially built on. and it should go without saying, but just in case: if the work i’m planning fits right into an existing initiative, then i’m happy to direct my resources there rather than duplicate effort.

inspirations & collaborators

groups with similar goals or undertaking similar activities, but focused on a different sector, geographic area or topic. i think we should make as much use of, and contribution to, these existing communities as possible, since there will be significant overlap.

code lib: probably the closest existing community to what i want to build, but primarily based in the us, so timezones (and physical distance for in-person events) make it difficult to participate fully. this is a well-established community though, with regular events including an annual conference, so there’s a lot to learn here.

newcardigan: similar to code lib but with an australian focus, so the timezone problem is even bigger!

glam labs: focused on supporting the people experimenting with and developing the infrastructure to enable scholars to access glam materials in new ways.
in some ways, a glam data science network would be complementary to their work, by providing people not directly involved with building glam labs with the skills to make best use of glam labs infrastructure. uk government data science community another existing community with very similar intentions, but focused on the uk government sector. clearly the british library and a few national & regional museums & archives fall into this, but much of the rest of the glam sector does not. artificial intelligence for libraries, archives & museums (ai lam) a multinational collaboration between several large libraries, archives and museums with a specific focus on the artificial intelligence (ai) subset of data science uk reproducibility network a network of researchers, primarily in heis, with an interest in improving the transparency and reliability of academic research. mostly science-focused but with some overlap of goals around ethical and robust use of data. museums computer group i’m less familiar with this than the others, but it seems to have a wider focus on technology generally, within the slightly narrower scope of museums specifically. again, a lot of potential for collaboration. training several organisations and looser groups exist specifically to develop and deliver training that will be relevant to members of this network. the network also presents an opportunity for those who have done a workshop with one of these and want to know what the “next steps” are to continue their data science journey. the carpentries, aka: library carpentry data carpentry software carpentry data science training for librarians (dst l) the programming historian cdh cultural heritage data school supporters these mission-driven organisations have goals that align well with what i imagine for the glam dsn, but operate at a more strategic level. they work by providing expert guidance and policy advice, lobbying and supporting specific projects with funding and/or effort.
in particular, the ssi runs a fellowship programme which is currently providing a small amount of funding to this project. digital preservation coalition (dpc) software sustainability institute (ssi) research data alliance (rda) alliance of digital humanities organizations (adho) … and its libraries and digital humanities special interest group (lib&dh sig) professional bodies these organisations exist to promote the interests of professionals in particular fields, including supporting professional development. i hope they will provide communication channels to their various members at the least, and may be interested in supporting more directly, depending on their mission and goals. society of research software engineering chartered institute of library and information professionals archives & records association museums association conclusion as i mentioned at the top of the page, this list cannot possibly be complete. this is a growing area and i’m not the only or first person to have this idea. if you can think of anything glaring that i’ve missed and you think should be on this list, leave a comment or tweet/toot at me! a new font for the blog i’ve updated my blog theme to use the quasi-proportional fonts iosevka aile and iosevka etoile. i really like the aesthetic, as they look like fixed-width console fonts (i use the true fixed-width version of iosevka in my terminal and text editor) but they’re actually proportional which makes them easier to read. https://typeof.net/iosevka/ training a model to recognise my own handwriting if i’m going to train an algorithm to read my weird & awful writing, i’m going to need a decent-sized training set to work with. and since one of the main things i want to do with it is to blog “by hand” it makes sense to focus on that type of material for training. in other words, i need to write out a bunch of blog posts on paper, scan them and transcribe them as ground truth. 
the added bonus of this plan is that after transcribing, i also end up with some digital text i can use as an actual post — multitasking! so, by the time you read this, i will have already run it through a manual transcription process using transkribus to add it to my training set, and copy-pasted it into emacs for posting. this is a fun little project because it means i can: write more by hand with one of my several nice fountain pens, which i enjoy learn more about the operational process some of my colleagues go through when digitising manuscripts learn more about the underlying technology & maths, and how to tune the process produce more lovely content! for you to read! yay! write in a way that forces me to put off editing until after a first draft is done and focus more on getting the whole of what i want to say down. that’s it for now — i’ll keep you posted as the project unfolds. addendum tee hee! i’m actually just enjoying the process of writing stuff by hand in long-form prose. it’ll be interesting to see how the accuracy turns out and if i need to be more careful about neatness. will it be better or worse than the big but generic models used by samsung notes or onenote? maybe i should include some stylus-written text for comparison. blogging by hand i wrote the following text on my tablet with a stylus, which was an interesting experience: so, thinking about ways to make writing fun again, what if i were to write some of them by hand? i mean, i have a tablet with a pretty nice stylus, so maybe handwriting recognition could work. one major problem, of course, is that my handwriting is awful! i guess i’ll just have to see whether the ocr is good enough to cope… it’s something i’ve been thinking about recently anyway: i enjoy writing with a proper fountain pen, so is there a way that i can have a smooth workflow to digitise handwritten text without just typing it back in by hand?
that would probably be preferable to this, which actually seems to work quite well but does lead to my hand tensing up to properly control the stylus on the almost-frictionless glass screen. i’m surprised how well it worked! here’s a sample of the original text: and here’s the result of converting that to text with the built-in handwriting recognition in samsung notes: writing blog posts by hand so, thinking about ways to make writing fun again, what if i were to write some of chum by hand? i mean, i have a toldest winds a pretty nice stylus, so maybe handwriting recognition could work. one major problems, ofcourse, is that my , is awful! iguess i’ll just have to see whattime the ocu is good enough to cope… it’s something i’ve hun tthinking about recently anyway: i enjoy wilting with a proper fountain pion, soischeme a way that i can have a smooch workflow to digitise handwritten text without just typing it back in by hand? that wouldprobally be preferableto this, which actually scams to work quito wall but doers load to my hand tensing up to properly couldthe stylus once almost-frictionlessg lass scream. it’s pretty good! it did require a fair bit of editing though, and i reckon we can do better with a model that’s properly trained on a large enough sample of my own handwriting. what i want from a glam/cultural heritage data science network introduction as i mentioned last year, i was awarded a software sustainability institute fellowship to pursue the project of setting up a cultural heritage/glam data science network. obviously, the global pandemic has forced a re-think of many plans and this is no exception, so i’m coming back to reflect on it and make sure i’m clear about the core goals so that everything else still moves in the right direction. one of the main reasons i have for setting up a glam data science network is because it’s something i want. 
the advice to “scratch your own itch” is often given to people looking for an open project to start or contribute to, and the lack of a community of people with whom to learn & share ideas and practice is something that itches for me very much. the “motivation” section in my original draft project brief for this work said: cultural heritage work, like all knowledge work, is increasingly data-based, or at least gives opportunities to make use of data day-to-day. the proper skills to use this data enable more effective working. knowledge and experience thus gained improves understanding of and empathy with users also using such skills. but of course, i have my own reasons for wanting to do this too. in particular, i want to: advocate for the value of ethical, sustainable data science across a wide range of roles within the british library and the wider sector advance the sector to make the best use of data and digital sources in the most ethical and sustainable way possible understand how and why people use data from the british library, and plan/deliver better services to support that keep up to date with relevant developments in data science learn from others' skills and experiences, and share my own in turn those initial goals imply some further supporting goals: build up the confidence of colleagues who might benefit from data science skills but don’t feel they are “technical” or “computer literate” enough further to that, build up a base of colleagues with the confidence to share their skills & knowledge with others, whether through teaching, giving talks, writing or other channels identify common awareness gaps (skills/knowledge that people don’t know they’re missing) and address them develop a communal space (primarily online) in which people feel safe to ask questions develop a body of professional practice and help colleagues to learn and contribute to the evolution of this, including practices of data ethics, software engineering, statistics, high 
performance computing, … break down language barriers between data scientists and others i’ll expand on this separately as my planning develops, but here are a few specific activities that i’d like to be able to do to support this: organise less-formal learning and sharing events to complement the more formal training already available within organisations and the wider sector, including “show and tell” sessions, panel discussions, code cafés, masterclasses, guest speakers, reading/study groups, co-working sessions, … organise training to cover intermediate skills and knowledge currently missing from the available options, including the awareness gaps and professional practice mentioned above collect together links to other relevant resources to support self-led learning decisions to be made there are all sorts of open questions in my head about this right now, but here are some of the key ones. is it glam or cultural heritage? when i first started planning this whole thing, i went with “cultural heritage”, since i was pretty transparently targeting my own organisation. the british library is fairly unequivocally a ch organisation. but as i’ve gone along i’ve found myself gravitating more towards the term “glam” (which stands for galleries, libraries, archives, museums) as it covers a similar range of work but is clearer (when you spell out the acronym) about what kinds of work are included. what skills are relevant? this turns out to be surprisingly important, at least in terms of how the community is described, as they define the boundaries of the community and can be the difference between someone feeling welcome or excluded. for example, i think that some introductory statistics training would be immensely valuable for anyone working with data to understand what options are open to them and what limitations those options have, but is the word “statistics” offputting per se to those who’ve chosen a career in arts & humanities? 
i don’t know because i don’t have that background and perspective. keep it internal to the bl, or open up early on? i originally planned to focus primarily on my own organisation to start with, feeling that it would be easier to organise events and build a network within a single organisation. however, the pandemic has changed my thinking significantly. firstly, it’s now impossible to organise in-person events and that will continue for quite some time to come, so there is less need to focus on the logistics of getting people into the same room. secondly, people within the sector are much more used to attending remote events, which can easily be opened up to multiple organisations in many countries, timezones allowing. it now makes more sense to focus primarily on online activities, which opens up the possibility of building a critical mass of active participants much more quickly by opening up to the wider sector. conclusion this is the type of post that i could let run and run without ever actually publishing, but since it’s something i need feedback and opinions on from other people, i’d better ship it! i really want to know what you think about this, whether you feel it’s relevant to you and what would make it useful. comments are open below, or you can contact me via mastodon or twitter. writing about not writing under construction grunge sign by nicolas raymond — cc by . every year, around this time of year, i start doing two things. first, i start thinking i could really start to understand monads and write more than toy programs in haskell. this is unlikely to ever actually happen unless and until i get a day job where i can justify writing useful programs in haskell, but advent of code always gets me thinking otherwise. second, i start mentally writing this same post. you know, the one about how the blogger in question hasn’t had much time to write but will be back soon? 
“sorry i haven’t written much lately…” it’s about as cliché as a geocities site with a permanent “under construction” gif. at some point, not long after the dawn of ~time~ the internet, most people realised that every website was permanently under construction and publishing something not ready to be published was just pointless. so i figured this year i’d actually finish writing it and publish it. after all, what’s the worst that could happen? if we’re getting all reflective about this, i could probably suggest some reasons why i’m not writing much: for a start, there’s a lot going on in both my world and the world right now, which doesn’t leave a lot of spare energy after getting up, eating, housework, working and a few other necessary activities. as a result, i’m easily distracted and i tend to let myself get dragged off in other directions before i even get to writing much of anything. if i do manage to focus on this blog in general, i’ll often end up working on some minor tweak to the theme or functionality. i mean, right now i’m wondering if i can do something clever in my text-editor (emacs, since you’re asking) to streamline my writing & editing process so it’s more elegant, efficient, ergonomic and slightly closer to perfect in every way. it also makes me much more likely to self-censor, and to indulge my perfectionist tendencies to try and tweak the writing until it’s absolutely perfect, which of course never happens. i’ve got a whole heap of partly-written posts that are juuuust waiting for the right motivation for me to just finish them off. 
the only real solution is to accept that: i’m not going to write much and that’s probably ok what i do write won’t always be the work of carefully-researched, finely crafted genius that i want it to be, and that’s probably ok too also to remember why i started writing and publishing stuff in the first place: to reflect and get my thoughts out onto a (virtual) page so that i can see them, figure out whether i agree with myself and learn; and to stimulate discussion and get other views on my (possibly uninformed, incorrect or half-formed) thoughts, also to learn. in other words, a thing i do for me. it’s easy to forget that and worry too much about whether anyone else wants to read my s—t. will you notice any changes? maybe? maybe not? who knows. but it’s a new year and that’s as good a time for a change as any. when is a persistent identifier not persistent? or an identifier? i wrote a post on the problems with isbns as persistent identifiers (pids) for work, so check it out if that sounds interesting. idcc reflections i’m just back from idcc , so here are a few reflections on this year’s conference. you can find all the available slides and links to shared notes on the conference programme. there’s also a list of all the posters and an overview of the unconference skills for curation of diverse datasets here in the uk and elsewhere, you’re unlikely to find many institutions claiming to apply a deep level of curation to every dataset/software package/etc deposited with them. there are so many different kinds of data and so few people in any one institution doing “curation” that it’s impossible to do this for everything. absent the knowledge and skills required to fully evaluate an object the best that can be done is usually to make a sense check on the metadata and flag up with the depositor potential for high-level issues such as accidental disclosure of sensitive personal information. 
the data curation network in the united states is aiming to address this issue by pooling expertise across multiple organisations. the pilot has been highly successful and they’re now looking to obtain funding to continue this work. the swedish national data service is experimenting with a similar model, also with a lot of success. as well as sharing individual expertise, the dcn collaboration has also produced some excellent online quick-reference guides for curating common types of data. we had some further discussion as part of the unconference on the final day about what it would look like to introduce this model in the uk. there was general agreement that this was a good idea and a way to make optimal use of sparse resources. there were also very valid concerns that it would be difficult in the current financial climate for anyone to justify doing work for another organisation, apparently for free. in my mind there are two ways around this, which are not mutually exclusive by any stretch of the imagination. first is to just do it: form an informal network of curators around something simple like a mailing list, and give it a try. second is for one or more trusted organisations to provide some coordination and structure. there are several candidates for this including dcc, jisc, dpc and the british library; we all have complementary strengths in this area so it’s my hope that we’ll be able to collaborate around it. in the meantime, i hope the discussion continues. artificial intelligence, machine learning et al as you might expect at any tech-oriented conference there was a strong theme of ai running through many presentations, starting from the very first keynote from francine berman. her talk, the internet of things: utopia or dystopia? used self-driving cars as a case study to unpack some of the ethical and privacy implications of ai. 
for example, driverless cars can potentially increase efficiency, both through route-planning and driving technique, but also by allowing fewer vehicles to be shared by more people. however, a shared vehicle is not a private space in the way your own car is: anything you say or do while in that space is potentially open to surveillance. aside from this, there are some interesting ideas being discussed, particularly around the possibility of using machine learning to automate increasingly complex actions and workflows such as data curation and metadata enhancement. i didn’t get the impression anyone is doing this in the real world yet, but i’ve previously seen theoretical concepts discussed at idcc make it into practice so watch this space! playing games! training is always a major idcc theme, and this year two of the most popular conference submissions described games used to help teach digital curation concepts and skills. mary donaldson and matt mahon of the university of glasgow presented their use of lego to teach the concept of sufficient metadata. participants build simple models before documenting the process and breaking them down again. then everyone had to use someone else’s documentation to try and recreate the models, learning important lessons about assumptions and including sufficient detail. kirsty merrett and zosia beckles from the university of bristol brought along their card game “researchers, impact and publications (rip)”, based on the popular “cards against humanity”. rip encourages players to examine some of the reasons for and against data sharing with plenty of humour thrown in. both games were trialled by many of the attendees during thursday’s unconference. summary i realised in dublin that it’s years since i attended my first idcc, held at the university of bristol in december while i was still working at the nearby university of bath. 
while i haven’t been every year, i’ve been to every one held in europe since then and it’s interesting to see what has and hasn’t changed. we’re no longer discussing data management plans, data scientists or various other things as abstract concepts that we’d like to encourage, but dealing with the real-world consequences of them. the conference has also grown over the years: this year was the biggest yet, boasting over attendees. there has been especially big growth in attendees from north america, australasia, africa and the middle east. that’s great for the diversity of the conference as it brings in more voices and viewpoints than ever. with more people around to interact with i have to work harder to manage my energy levels but i think that’s a small price to pay. iosevka: a nice fixed-width font iosevka is a nice, slender monospace font with a lot of configurable variations. check it out: https://typeof.net/iosevka/ replacing comments with webmentions just a quickie to say that i’ve replaced the comment section at the bottom of each post with webmentions, which allows you to comment by posting on your own site and linking here. it’s a fundamental part of the indieweb, which i’m slowly getting to grips with having been a halfway member of it for years by virtue of having my own site on my own domain. i’d already got rid of google analytics to stop forcing that tracking on my visitors, and i wanted to get rid of disqus too because i’m pretty sure the only way that is free for me is if they’re selling my data and yours to third parties. webmention is a nice alternative because it relies only on open standards, has no tracking and allows people to control their own comments. while i’m currently using a third-party service to help, i can switch to self-hosted at any point in the future, completely transparently.
thanks to webmention.io, which handles incoming webmentions for me, and webmention.js, which displays them on the site, i can keep it all static and not have to implement any of this myself, which is nice. it’s a bit harder to comment because you have to be able to host your own content somewhere, but then almost no-one ever commented anyway, so it’s not like i’ll lose anything! plus, if i get bridgy set up right, you should be able to comment just by replying on mastodon, twitter or a few other places. a spot of web searching shows that i’m not the first to make the disqus -> webmentions switch (yes, i’m putting these links in blatantly to test outgoing webmentions with telegraph…): so long disqus, hello webmention — nicholas hoizey bye disqus, hello webmention! — evert pot implementing webmention on a static site — deluvi let’s see how this goes! bridging carpentries slack channels to matrix it looks like i’ve accidentally taken charge of bridging a bunch of the carpentries slack channels over to matrix. given this, it seems like a good idea to explain what that sentence means and reflect a little on my reasoning. i’m more than happy to discuss the pros and cons of this approach. if you just want to try chatting in matrix, jump to the getting started section. what are slack and matrix? slack (see also on wikipedia), for those not familiar with it, is an online text chat platform with the feel of irc (internet relay chat), a modern look and feel and both web and smartphone interfaces. by providing a free tier that meets many people’s needs on its own, slack has become the communication platform of choice for thousands of online communities, private projects and more. one of the major disadvantages of using slack’s free tier, as many community organisations do, is that as an incentive to upgrade to a paid service your chat history is limited to the most recent , messages across all channels.
for a busy community like the carpentries, this means that messages older than about - weeks are already inaccessible, rendering some of the quieter channels apparently empty. as slack is at pains to point out, that history isn’t gone, just archived and hidden from view unless you pay the low, low price of $ /user/month. that doesn’t seem too pricy, unless you’re a non-profit organisation with a lot of projects you want to fund and an active membership of several hundred worldwide, at which point it soon adds up. slack does offer to waive the cost for registered non-profit organisations, but only for one community. the carpentries is not an independent organisation, but one fiscally sponsored by community initiatives, which has already used its free quota of one elsewhere, rendering the carpentries ineligible. other umbrella organisations such as numfocus (and, i expect, mozilla) also run into this problem with slack. so, we have a community which is slowly and inexorably losing its own history behind a paywall. for some people this is simply annoying, but from my perspective as a facilitator of the preservation of digital things the community is haemorrhaging an important record of its early history. enter matrix. matrix is a chat platform similar to irc, slack or discord. it’s divided into separate channels, and users can join one or more of these to take part in the conversation happening in those channels. what sets it apart from older technology like irc and walled gardens like slack & discord is that it’s federated. federation means simply that users on any server can communicate with users and channels on any other server. usernames and channel addresses specify both the individual identifier and the server it calls home, just as your email address contains all the information needed for my email server to route messages to it.
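the email analogy can be made concrete: a matrix user id like @petrichor:matrix.org (or a room alias like #somechannel:matrix.org) is a sigil, a local name and a homeserver domain, and any server can use the domain part to route to the right place. a tiny illustrative parser, assuming well-formed addresses (the helper name is mine, and the room alias in the test is made up for illustration):

```python
def parse_matrix_address(address: str) -> tuple[str, str, str]:
    """Split a Matrix address into (sigil, localpart, server).

    User IDs look like '@petrichor:matrix.org' and room aliases like
    '#example:matrix.org'; everything after the first colon names the
    homeserver, just as the domain in an email address names the mail
    server that handles it.
    """
    if not address or address[0] not in "@#" or ":" not in address:
        raise ValueError(f"not a Matrix user ID or room alias: {address!r}")
    localpart, server = address[1:].split(":", 1)
    return address[0], localpart, server
```

this is what makes federation work: my homeserver only needs the server part of your address to know where to deliver a message, with no central directory involved.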
while users are currently tied to their home server, channels can be mirrored and synchronised across multiple servers, making the overall system much more resilient. can’t connect to your favourite channel on server x? no problem: just connect via its alias on server y and when x comes back online it will be resynchronised. the technology used is much more modern and secure than the aging irc protocol, and there’s no vendor lock-in like there is with closed platforms like slack and discord. on top of that, matrix channels can easily be “bridged” to channels/rooms on other platforms, including, yes, slack, so that you can join on matrix and transparently talk to people connected to the bridged room, or vice versa. so, to summarise: the current carpentries slack channels could be bridged to matrix at no cost and with no disruption to existing users the history of those channels from that point on would be retained on matrix.org and accessible even when it’s no longer available on slack if at some point in the future the carpentries chose to invest in its own matrix server, it could adopt and become the main matrix home of these channels without disruption to users of either matrix or (if it’s still in use at that point) slack matrix is an open protocol, with a reference server implementation and wide range of clients all available as free software, which aligns with the values of the carpentries community on top of this: i’m fed up of having so many different slack teams to switch between to see the channels in all of them, and prefer having all the channels i regularly visit in a single unified interface; i wanted to see how easy this would be and whether others would also be interested. given all this, i thought i’d go ahead and give it a try to see if it made things more manageable for me and to see what the reaction would be from the community. how can i get started? !!!
reminder please remember that, like any other carpentries space, the code of conduct applies in all of these channels. first, sign up for a matrix account. the quickest way to do this is on the matrix “try now” page, which will take you to the riot web client which for many is synonymous with matrix. other clients are also available for the adventurous. second, join one of the channels. the links below will take you to a page that will let you connect via your preferred client. you’ll need to log in as they are set not to allow guest access, but, unlike slack, you won’t need an invitation to be able to join. #general — the main open channel to discuss all things carpentries #random — anything that would be considered offtopic elsewhere #welcome — join in and introduce yourself! that’s all there is to getting started with matrix. to find all the bridged channels there’s a matrix “community” that i’ve added them all to: carpentries matrix community. there’s a lot more, including how to bridge your favourite channels from slack to matrix, but this is all i’ve got time and space for here! if you want to know more, leave a comment below, or send me a message on slack (jezcope) or maybe matrix (@petrichor:matrix.org)! i’ve also made a separate channel for matrix-slack discussions: #matrix on slack and carpentries matrix discussion on matrix mozfest first reflections discussions of neurodiversity at #mozfest photo by jennifer riggins the other weekend i had my first experience of mozilla festival, aka #mozfest. it was pretty awesome. i met quite a few people in real life that i’ve previously only known (/stalked) on twitter, and caught up with others that i haven’t seen for a while. i had the honour of co-facilitating a workshop session on imposter syndrome and how to deal with it with the wonderful yo yehudi and emmy tsang. we all learned a lot and hope our participants did too; we’ll be putting together a summary blog post as soon as we can get our act together! 
i also attended a great session, led by kiran oliver (psst, they’re looking for a new challenge), on how to encourage and support a neurodiverse workforce. i was only there for the one day, and i really wish that i’d taken the plunge and committed to the whole weekend. there’s always next year though! to be honest, i’m just disappointed that i never had the courage to go sooner. music for working today the office conversation turned to blocking out background noise. (no, the irony is not lost on me.) like many people i work in a large, open-plan office, and i’m not alone amongst my colleagues in sometimes needing to find a way to boost concentration by blocking out distractions. not everyone is like this, but i find music does the trick for me. i also find that different types of music are better for different types of work, and i use this to try and manage my energy better. there are more distractions than auditory noise, and at times i really struggle with visual noise. rather than have this post turn into a rant about the evils of open-plan offices, i’ll just mention that the scientific evidence doesn’t paint them in a good light, or at least suggests that the benefits are more limited in scope than is commonly thought, and move on to what i actually wanted to share: good music for working to. there are a number of genres that i find useful for working. generally, these have in common a consistent tempo, a lack of lyrics, and enough variation to prevent boredom without distracting. familiarity helps my concentration too so i’ll often listen to a restricted set of albums for a while, gradually moving on by dropping one out and bringing in another. in my case this includes: traditional dance music, generally from northern and western european traditions for me. this music has to be rhythmically consistent to allow social dancing, and while the melodies are typically simple repeated phrases, skilled musicians improvise around that to make something beautiful.
i tend to go through phases of listening to particular traditions; i’m currently listening to a lot of french, belgian and scandinavian. computer game soundtracks, which are specifically designed to enhance gameplay without distracting, making them perfect for other activities requiring a similar level of concentration. chiptunes and other music incorporating it; partly overlapping with the previous category, chiptunes is music made by hacking the audio chips from (usually) old computers and games machines to become an instrument for new music. because of the nature of the instrument, this will have millisecond-perfect rhythm and again makes for undistracting noise blocking with an extra helping of nostalgia! purists would disagree with me, but i like artists that combine chiptunes with other instruments and effects to make something more complete-sounding. retrowave/synthwave/outrun, synth-driven music that’s instantly familiar as the soundtrack to many ’80s sci-fi and thriller movies. atmospheric, almost dreamy, but rhythmic with a driving beat, it’s another genre that fits into the “pleasing but not too surprising” category for me. so where to find this stuff? one of the best resources i’ve found is music for programming, which provides carefully curated playlists of mostly electronic music designed to energise without distracting. they’re so well done that the tracks move seamlessly, one to the next, without ever getting boring. spotify is an obvious option, and i do use it quite a lot. however, i’ve started trying to find ways to support artists more directly, and bandcamp seems to be a good way of doing that. it’s really easy to browse by genre, or discover artists similar to what you’re currently hearing. you can listen for free as long as you don’t mind occasional nags to buy the music you’re hearing, but you can also buy tracks or albums.
music you’ve paid for is downloadable in several open, drm-free formats for you to keep, and you know that a decent chunk of that cash is going directly to that artist. i also love noise generators; not exactly music, but a variety of pleasant background noises, some of which nicely obscure typical office noise. i particularly like mynoise.net, which has a cornucopia of different natural and synthetic noises. each generator comes with a range of sliders allowing you to tweak the composition and frequency range, and will even animate them randomly for you to create a gently shifting soundscape. a much simpler, but still great, option is noisli with its nice clean interface. both offer apps for ios and android. for bonus points, you can always try combining one or more of the above. adding in a noise generator allows me to listen to quieter music while still getting good environmental isolation when i need concentration. another favourite combo is to open both the cafe and rainfall generators from mynoise, made easier by the ability to pop out a mini-player then open up a second generator. i must be missing stuff though. what other musical genres should i try? what background sounds are nice to work to? well, you know. the other day. whatever. see e.g.: lee, so young, and jay l. brand. ‘effects of control over office workspace on perceptions of the work environment and work outcomes’. journal of environmental psychology. https://doi.org/ . /j.jenvp. . . . see also: open plan offices can actually work under certain conditions, the conversation. working at the british library: months in it barely seems like it, but i’ve been at the british library now for nearly months. it always takes a long time to adjust and from experience i know it’ll be another year before i feel fully settled, but my team, department and other colleagues have really made me feel welcome and like i belong.
one thing that hasn’t got old yet is the occasional thrill of remembering that i work at my national library now. every now and then i’ll catch a glimpse of the collections at boston spa or step into one of the reading rooms and think “wow, i actually work here!” i also like having a national and international role to play, which means i get to travel a bit more than i used to. budgets are still tight so there are limits, and i still prefer to be home more often than not, but there is more scope in this job than i’ve had previously for travelling to conferences, giving talks that change the way people think, and learning in different contexts. i’m learning a lot too, especially how to work with and manage people split across multiple sites, and the care and feeding of budgets. as well as missing my old team at sheffield, i do also miss some of the direct contact i had with researchers in he. i especially miss the teaching work, but also the higher-level influencing of more senior academics to change practices on a wider scale. still, i get to use those influencing skills in different ways now, and i’m still involved with the carpentries, which should let me keep my hand in with teaching. i still deal with my general tendency to try and do all the things, and as before i’m slowly learning to recognise it, tame it and very occasionally turn it to my advantage. that also leads to feelings of imposterism that are only magnified by the knowledge that i now work at a national institution! it’s a constant struggle some days to believe that i’ve actually earned my place here through hard work. even if i don’t always feel that i have, my colleagues here certainly do, so i should have more faith in their opinion of me. finally, i couldn’t write this type of thing without mentioning the commute. i’ve gone from minutes each way on a good day (up to twice that if the trains were disrupted) to minutes each way along fairly open roads.
i have less time to read, but much more time at home. on top of that, the library has implemented flexitime across all pay grades, with even senior managers strongly encouraged to make full use. not only is this an important enabler of equality across the organisation, it relieves for me personally the pressure to work over my contracted hours and the guilt i’ve always felt at leaving work even minutes early. if i work late, it’s now a choice i’m making based on business needs instead of guilt and in full knowledge that i’ll get that time back later. so that’s where i am right now. i’m really enjoying the work and the culture, and i look forward to what the next months will bring! rda plenary reflection photo by me i sit here writing this in the departure lounge at philadelphia international airport, waiting for my aer lingus flight back after a week at the th research data alliance (rda) plenary (although i’m actually publishing this a week or so later at home). i’m pretty exhausted, partly because of the jet lag, and partly because it’s been a very full week with so much to take in. it’s my first time at an rda plenary, and it was quite a new experience for me! first off, it’s my first time outside europe, and thus my first time crossing quite so many timezones. i’ve been waking at am and ready to drop by pm, but i’ve struggled on through! secondly, it’s the biggest conference i’ve been to for a long time, both in number of attendees and number of parallel sessions. there’s been a lot of sustained input so i’ve been very glad to have a room in the conference hotel and be able to escape for a few minutes when i needed to recharge. thirdly, it’s not really like any other conference i’ve been to: rather than having large numbers of presentations submitted by attendees, each session comprises lots of parallel meetings of rda interest groups and working groups. 
it’s more community-oriented: an opportunity for groups to get together face to face and make plans or show off results. i found it pretty intense and struggled to take it all in, but incredibly valuable nonetheless. lots of information to process (i took a lot of notes) and a few contacts to follow up on too, so overall i loved it! using pipfile in binder photo by sear greyson on unsplash i recently attended a workshop, organised by the excellent team of the turing way project, on a tool called binderhub. binderhub, along with the public hosting platform mybinder, allows you to publish computational notebooks online as “binders” such that they’re not static but fully interactive. it’s able to do this by using a tool called repo2docker to capture the full computational environment and dependencies required to run the notebook. !!! aside “what is the turing way?” the turing way is, in its own words, “a lightly opinionated guide to reproducible data science.” the team is building an open textbook and running a number of workshops for scientists and research software engineers, and you should check out the project on github. you could even contribute! the binder process goes roughly like this:

1. do some work in a jupyter notebook or similar
2. put it into a public git repository
3. add some extra metadata describing the packages and versions your code relies on
4. go to mybinder.org and tell it where to find your repository
5. open the url it generates for you
6. profit!

other than the build step at mybinder.org, which can take some time, this is a remarkably quick process. it supports a number of different languages too, including built-in support for r, python and julia and the ability to configure pretty much any other language that will run on linux. however, the python support currently requires you to have either a requirements.txt or conda-style environment.yml file to specify dependencies, and i commonly use a pipfile for this instead.
pipfile allows you to specify a loose range of compatible versions for maximal convenience, but then locks in specific versions for maximal reproducibility. you can upgrade packages any time you want, but you’re fully in control of when that happens, and the locked versions are checked into version control so that everyone working on a project gets consistency. since pipfile is emerging as something of a standard, i thought i’d see if i could use it in a binder, and it turns out to be remarkably simple. the reference implementation of pipfile is a tool called pipenv by the prolific kenneth reitz. all you need to use this in your binder is two files of one line each. requirements.txt tells repo2docker to build a python-based binder, and contains a single line to install the pipenv package:

pipenv

then postBuild is used by repo2docker to install all other dependencies using pipenv:

pipenv install --system

the --system flag tells pipenv to install packages globally (its default behaviour is to create a python virtualenv). with these two files, the binder builds and runs as expected. you can see a complete example that i put together during the workshop here on gitlab. what do you think i should write about? i’ve found it increasingly difficult to make time to blog, and it’s not so much not having the time — i’m pretty privileged in that regard — but finding the motivation. thinking about what used to motivate me, one of the big things was writing things that other people wanted to read. rather than try to guess, i thought i’d ask! those who know what i’m about, what would you read about, if it was written by me? i’m trying to break through the blog-writer’s block and would love to know what other people would like to see my ill-considered opinions on.— jez cope (@jezcope) march , i’m still looking for ideas, so please tweet me or leave me a comment below. below are a few thoughts that i’m planning to do something with.
something taking one of the more techy aspects of open research, breaking it down and explaining the benefits for non-techy folks?— dr beth 🏳️‍🌈 🐺 (@phdgeek) march , skills (both techy and non techy) that people need to most effectively support rdm— kate o’neill (@katefoneill) march , sometimes i forget that my background makes me well-qualified to take some of these technical aspects of the job and break them down for different audiences. there might be a whole series in this… carrying on our conversation last week i’d love to hear more about how you’ve found moving from an he lib to a national library and how you see the bl’s role in rdm. appreciate this might be a bit niche/me looking for more interesting things to cite :)— rosie higman (@rosiehlib) march , this is interesting, and something i’d like to reflect on; moving from one job to another always has lessons and it’s easy to miss them if you’re not paying attention. another one for the pile. life without admin rights to your computer— mike croucher (@walkingrandomly) march , this is so frustrating as an end user, but at the same time i get that endpoint security is difficult and there are massive risks associated with letting end users have admin rights. this is particularly important at the bl: as custodians of a nation’s cultural heritage, the risk for us is bigger than for many, and for this reason we are now cyber essentials plus certified. at some point i’d like to do some research and have a conversation with someone who knows a lot more about infosec to work out what the proper approach to this is, maybe involving vms and a demilitarized zone on the network. i’m always looking for more inspiration, so please leave a comment if you’ve got anything you’d like to read my thoughts on. if you’re not familiar with my writing, please take a minute or two to explore the blog; the tags page is probably a good place to get an overview.
ultimate hacking keyboard: first thoughts following on from the excitement of having built a functioning keyboard myself, i got a parcel on monday. inside was something that i’ve been waiting for since september: an ultimate hacking keyboard! where the custom-built laplace is small and quiet for travelling, the uhk is to be my main workhorse in the study at home. here are my first impressions: key switches i went with kailh blue switches from the available options. in stark contrast to the quiet blacks on the laplace, blues are noisy! they have an extra piece of plastic inside the switch that causes an audible and tactile click when the switch activates. this makes them very satisfying to type on and should help as i train my fingers not to bottom out while typing, but does make them unsuitable for use in a shared office! here are some animations showing how the main types of key switch vary. layout this keyboard has what’s known as a % layout: no number pad, arrows or function keys. as with the more spartan laplace, these “missing” keys are made up for with programmable layers. for example, the arrow keys are on the mod layer on the i/j/k/l keys, so i can access them without moving from the home row. i actually find this preferable to having to move my hand to the right to reach them, and i really never used the number pad in any case. split this is a split keyboard, which means that the left and right halves can be separated to place the hands further apart which eases strain across the shoulders. the uhk has a neat coiled cable joining the two which doesn’t get in the way. a cool design feature is that the two halves can be slotted back together and function perfectly well as a non-split keyboard too, held together by magnets. there are even electrical contacts so that when the two are joined you don’t need the linking cable. 
programming the board is fully programmable, and this is achieved via a custom (open source) gui tool which talks to the (open source) firmware on the board. you can have multiple keymaps, each of which has a separate base, mod, fn and mouse layer, and there’s an led display that shows a short mnemonic for the currently active map. i already have a customised dvorak layout for day-to-day use, plus a standard qwerty for not-me to use and an alternative qwerty which will be slowly tweaked for games that don’t work well with dvorak. mouse keys one cool feature that the designers have included in the firmware is the ability to emulate a mouse. there’s a separate layer that allows me to move the cursor, scroll and click without moving my hands from the keyboard. palm rests not much to say about the palm rests, other than they are solid wood, and chunky, and really add a little something. i have to say, i really like it so far! overall it feels really well designed, with every little detail carefully thought out, excellent build quality and a really solid feel. custom-built keyboard i’m typing this post on a keyboard i made myself, and i’m rather excited about it! why make my own keyboard?

- i wanted to learn a little bit about practical electronics, and i like to learn by doing
- i wanted to have the feeling of making something useful with my own hands
- i actually need a small keyboard with good-quality switches now that i travel a fair bit for work, and this lets me completely customise it to my needs
- just because!

while it is possible to make a keyboard completely from scratch, it makes much more sense to put together some premade parts.
the parts you need are:

- pcb (printed circuit board): the backbone of the keyboard, to which all the other electrical components attach; this defines the possible physical locations for each key
- switches: one for each key, to complete a circuit whenever you press it
- keycaps: switches are pretty ugly and pretty uncomfortable to press, so each one gets a cap; these are what you probably think of as the “keys” on your keyboard, and come in an almost limitless variety of designs (within the obvious size limitation) and are the easiest bit of personalisation
- controller: the clever bit, which detects open and closed switches on the pcb and tells your computer what keys you pressed via a usb cable
- firmware: the program that runs on the controller; it starts off as source code like any other program, and altering this can make the keyboard behave in loads of different ways, from different layouts to multiple layers accessed by holding a particular key, to macros and even emulating a mouse!

in my case, i’ve gone for the following:

- pcb: laplace from keeb.io, a very compact -key (“ %”) board, with no number pad, function keys or number row, but a lot of flexibility for key placement on the bottom row. one of my key design goals was small size so i can just pop it in my bag and have it on my lap on the train.
- controller: elite-c, designed specifically for keyboard builds to be physically compatible with the cheaper pro micro, with a more robust usb port (the pro micro’s has a tendency to snap off), and made easier to program with a built-in reset button and better bootloader.
- switches: gateron black: gateron is one of a number of manufacturers of mechanical switches compatible with the popular cherry range. the black switch is linear (no click or bump at the activation point) and slightly heavier-sprung than the more common red. cherry also make a black switch, but the gateron version is slightly lighter and, having tested a few, i found them smoother too.
my key goal here was to reduce noise, as the stronger spring will help me type accurately without hitting the bottom of the keystroke with an audible sound.

- keycaps: blank grey pbt in dsa profile: this keyboard layout has a lot of non-standard sized keys, so blank keycaps meant that i wouldn’t be putting lots of keys out of their usual position; they’re also relatively cheap, fairly classy imho and a good placeholder until i end up getting some really cool caps on a group buy or something; oh, and it minimises the chance of someone else trying the keyboard and getting freaked out by the layout…
- firmware: qmk (quantum mechanical keyboard), with a work-in-progress layout based on dvorak. qmk has a lot of features and allows you to fully program each and every key, with multiple layers accessed through several different routes. because there are so few keys on this board, i’ll need to make good use of layers to make all the keys on a usual keyboard available.

i’m grateful to the folks of the leeds hack space, especially nav & mark, who patiently coached me in various soldering techniques and good practice, but also everyone else who were so friendly and welcoming and interested in my project. i’m really pleased with the result, which is small, light and fully customisable. playing with qmk firmware features will keep me occupied for quite a while! this isn’t the end though, as i’ll need a case to keep the dust out. i’m hoping to be able to 3d print this or mill it from wood with a cnc mill, for which i’ll need to head back to the hack space! less, but better “weniger, aber besser” — dieter rams i can barely believe it’s a full year since i published my intentions for . a lot has happened since then. principally: in november i started a new job as data services lead at the british library.
one thing that hasn’t changed is my tendency to try to do too much, so this year i’m going to try and focus on a single intention, a translation of designer dieter rams’ famous quote above: less, but better. this chimes with a couple of other things i was toying with over the christmas break, as they’re essentially other ways of saying the same thing: take it steady; one thing at a time. i’m also going to keep in mind those touchstones from last year: what difference is this making? am i looking after myself? do i have evidence for this? i mainly forget to think about them, so i’ll be sticking up post-its everywhere to help me remember! how to extend python with rust: part python is great, but i find it useful to have an alternative language under my belt for occasions when no amount of pythonic cleverness will make some bit of code run fast enough. one of my main reasons for wanting to learn rust was to have something better than c for that. not only does rust have all sorts of advantages that make it a good choice for code that needs to run fast and correctly, it’s also got a couple of rather nice crates (libraries) that make interfacing with python a lot nicer. here’s a little tutorial to show you how easy it is to call a simple rust function from python. if you want to try it yourself, you’ll find the code on github. !!! prerequisites i’m assuming for this tutorial that you’re already familiar with writing python scripts and importing & using packages, and that you’re comfortable using the command line. you’ll also need to have installed rust. the rust bit the quickest way to get compiled code into python is to use the built-in ctypes package. this is python’s “foreign function interface” or ffi: a means of calling functions outside the language you’re using to make the call. ctypes allows us to call arbitrary functions in a shared library, as long as those functions conform to certain standard c language calling conventions.
thankfully, rust tries hard to make it easy for us to build such a shared library. the first thing to do is to create a new project with cargo, the rust build tool:

$ cargo new rustfrompy
     created library `rustfrompy` project
$ tree .
.
├── Cargo.toml
└── src
    └── lib.rs

1 directory, 2 files

!!! aside i use the fairly common convention that text set in fixed-width font is either example code or commands to type in. for the latter, a $ precedes the command that you type (omit the $), and lines that don’t start with a $ are output from the previous command. i assume a basic familiarity with unix-style command line, but i should probably put in some links to resources if you need to learn more! we need to edit the cargo.toml file and add a [lib] section:

[package]
name = "rustfrompy"
version = "0.1.0"
authors = ["jez cope <j.cope@erambler.co.uk>"]

[dependencies]

[lib]
name = "rustfrompy"
crate-type = ["cdylib"]

this tells cargo that we want to make a c-compatible dynamic library (crate-type = ["cdylib"]) and what to call it, plus some standard metadata. we can then put our code in src/lib.rs. we’ll just use a simple toy function that adds two numbers together:

#[no_mangle]
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

notice the pub keyword, which instructs the compiler to make this function accessible to other modules, and the #[no_mangle] annotation, which tells it to use the standard c naming conventions for functions. if we don’t do this, then rust will generate a new name for the function for its own nefarious purposes, and as a side effect we won’t know what to call it when we want to use it from python. being good developers, let’s also add a test:

#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn test_add() {
        assert_eq!(4, add(2, 2));
    }
}

we can now run cargo test which will compile that code and run the test:

$ cargo test
   compiling rustfrompy v0.1.0 (file:///home/jez/personal/projects/rustfrompy)
    finished dev [unoptimized + debuginfo] target(s)
     running target/debug/deps/rustfrompy

running 1 test
test test::test_add ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

everything worked! now just to build that shared library and we can try calling it from python:

$ cargo build
   compiling rustfrompy v0.1.0 (file:///home/jez/personal/projects/rustfrompy)
    finished dev [unoptimized + debuginfo] target(s)

notice that the build is unoptimized and includes debugging information: this is useful in development, but once we’re ready to use our code it will run much faster if we compile it with optimisations. cargo makes this easy:

$ cargo build --release
   compiling rustfrompy v0.1.0 (file:///home/jez/personal/projects/rustfrompy)
    finished release [optimized] target(s)

the python bit after all that, the python bit is pretty short. first we import the ctypes package (which is included in all recent python versions):

from ctypes import cdll

cargo has tidied our shared library away into a folder, so we need to tell python where to load it from. on linux, it will be called lib<something>.so where the “something” is the crate name from cargo.toml, “rustfrompy”:

lib = cdll.LoadLibrary('target/release/librustfrompy.so')

finally we can call the function anywhere we want. here it is in a pytest-style test:

def test_rust_add():
    assert lib.add(2, 2) == 4

if you have pytest installed (and you should!) you can run the whole test like this:

$ pytest --verbose test.py
====================================== test session starts ======================================
platform linux -- python, pytest -- /home/jez/.virtualenvs/datasci/bin/python
cachedir: .cache
rootdir: /home/jez/personal/projects/rustfrompy, inifile:
collected 1 item

test.py::test_rust_add passed

it worked! i’ve put both the rust and python code on github if you want to try it for yourself. shortcomings ok, so that was a pretty simple example, and i glossed over a lot of things.
for example, what would happen if we did lib.add(0.5, 2)? this causes python to throw an error because our rust function only accepts integers (32-bit signed integers, i32, to be precise), and we gave it a floating point number. ctypes can’t guess what type(s) a given function will work with, but it can at least tell us when we get it wrong. to fix this properly, we need to do some extra work, telling the ctypes library what the argument and return types for each function are. for a more complex library, there will probably be more housekeeping to do, such as translating return codes from functions into more pythonic-style errors. for a small example like this there isn’t much of a problem, but the bigger your compiled library the more extra boilerplate is required on the python side just to use all the functions. when you’re working with an existing library you don’t have much choice about this, but if you’re building it from scratch specifically to interface with python, there’s a better way using the python c api. you can call this directly in rust, but there are a couple of rust crates that make life much easier, and i’ll be taking a look at those in a future blog post. (the shared library is .so on linux, .dylib on mac and .dll on windows.) new year's irresolution photo by andrew hughes on unsplash i’ve chosen not to make any specific resolutions this year; i’ve found that they just don’t work for me. like many people, all i get is a sense of guilt when i inevitably fail to live up to the expectations i set myself at the start of the year. however, i have set a couple of what i’m referring to as “themes” for the year: touchstones that i’ll aim to refer to when setting priorities or just feeling a bit overwhelmed or lacking in direction.
they are: contribution, self-care, measurement. i may do some blog posts expanding on these, but in the meantime, i’ve put together a handful of questions to help me think about priorities and get perspective when i’m doing (or avoiding doing) something. what difference is this making? i feel more motivated when i can figure out how i’m contributing to something bigger than myself. in society? in my organisation? to my friends & family? am i looking after myself? i focus a lot on the expectations others have of me (or at least those i think they have), but i can’t do anything well unless i’m generally happy and healthy. is this making me happier and healthier? is this building my capacity to look after myself, my family & friends and do my job? is this worth the amount of time and energy i’m putting in? do i have evidence for this? i don’t have to base decisions purely on feelings/opinions: i have the skills to obtain, analyse and interpret data. is this fact or opinion? what are the facts? am i overthinking this? can i put a confidence interval on this? build documents from code and data with saga !!! tldr “tl;dr” i’ve made saga, a thing for compiling documents by combining code and data with templates. what is it? saga is a very simple command-line tool that reads in one or more data files, runs one or more scripts, then passes the results into a template to produce a final output document. it enables you to maintain a clean separation between data, logic and presentation and produce data-based documents that can easily be updated. that allows the flow of data through the document to be easily understood, a cornerstone of reproducible analysis. you run it like this:

saga build -d data.yaml -d other_data.yaml \
    -s analysis.py -t report.md.tmpl \
    -o report.md

any scripts specified with -s will have access to the data in local variables, and any changes to local variables in a script will be retained when everything is passed to the template for rendering.
for debugging, you can also do:

saga dump -d data.yaml -d other_data.yaml -s analysis.py

which will print out the full environment that would be passed to your template with saga build. features right now this is a really early version. it does the job but i have lots of ideas for features to add if i ever have time. at present it does the following:

- reads data from one or more yaml files
- transforms data with one or more python scripts
- renders a template in mako format
- works with any plain-text output format, including markdown, latex and html

use cases:

- write reproducible reports & papers based on machine-readable data
- separate presentation from content in any document, e.g. your cv (example coming soon)
- yours here?

get it! i haven’t released this on pypi yet, but all the code is available on github to try out. if you have pipenv installed (and if you use python you should!), you can try it out in an isolated virtual environment by doing:

git clone https://github.com/jezcope/sagadoc.git
cd sagadoc
pipenv install
pipenv run saga

or you can set up for development and run some tests:

pipenv install --dev
pipenv run pytest

why? like a lot of people, i have to produce reports for work, often containing statistics computed from data. although these generally aren’t academic research papers, i see no reason not to aim for a similar level of reproducibility: after all, if i’m telling other people to do it, i’d better take my own advice! a couple of times now i’ve done this by writing a template that holds the text of the report and placeholders for values, along with a python script that reads in the data, calculates the statistics i want and completes the template.
this is valuable for two main reasons:

1. if anyone wants to know how i processed the data and calculated those statistics, it’s all there: no need to try and remember and reproduce a series of button clicks in excel;
2. if the data or calculations change, i just need to update the relevant part and run it again, and all the relevant parts of the document will be updated. this is particularly important if changing a single data value requires recalculation of dozens of tables, charts, etc.

it also gives me the potential to factor out and reuse bits of code in the future, add tests and version control everything. now that i’ve done this more than once (and it seems likely i’ll do it again) it makes sense to package that script up in a more portable form so i don’t have to write it over and over again (or, shock horror, copy & paste it!). it saves time, and gives others the possibility to make use of it. prior art i’m not the first person to think of this, but i couldn’t find anything that did exactly what i needed. several tools will let you interweave code and prose, including the results of evaluating each code snippet in the document: chief among these are jupyter and rmarkdown. there are also tools that let you write code in the order that makes most sense to read and then rearrange it into the right order to execute, so-called literate programming. the original tool for this is the venerable noweb. sadly there are very few tools that combine both of these and allow you to insert the results of various calculations at arbitrary points in a document, independent of the order of either presenting or executing the code. the only two that i’m aware of are dexy and org-mode. unfortunately, dexy currently only works on legacy python (python 2) and org-mode requires emacs (which is fine but not exactly portable). rmarkdown comes close and supports a range of languages, but the full feature set is only available with r.
actually, my ideal solution is org-mode without the emacs dependency, because that’s the most flexible approach; maybe one day i’ll have both the time and skill to implement that. it’s also possible i might be able to figure out dexy’s internals to add what i want to it, but until then saga does the job! future work there are lots of features that i’d still like to add when i have time:

- some actual documentation! and examples!
- more data formats (e.g. csv, json, toml)
- more languages (e.g. r, julia)
- fetching remote data over http
- caching of intermediate results to speed up rebuilds

for now, though, i’d love for you to try it out and let me know what you think! as ever, comment here, tweet me or start an issue on github. why try rust for scientific computing? when you’re writing analysis code, python (or r, or javascript, or …) is usually the right choice. these high-level languages are set up to make you as productive as possible, and common tasks like array manipulation have been well optimised. however, sometimes you just can’t get enough speed and need to turn to a lower-level compiled language. often that will be c, c++ or fortran, but i thought i’d do a short post on why i think you should consider rust. one of my goals for 2017’s advent of code was to learn a modern, memory-safe, statically-typed language. i now know that there are quite a lot of options in this space, but two seem to stand out: go & rust. i gave both of them a try, and although i’ll probably go back to give go a more thorough test at some point, i found i got quite hooked on rust. both languages, though young, are definitely production-ready. servo, the core of the new firefox browser, is entirely written in rust. in fact, mozilla had been trying to rewrite the rendering core in c++ for nearly a decade, and switching to rust let them get it done in just a couple of years. !!!
tldr “tl;dr” - it’s fast: competitive with idiomatic c/c++, and no garbage-collection overhead - it’s harder to write buggy code, and compiler errors are actually helpful - it’s c-compatible: you can call into rust code anywhere you’d call into c, call c/c++ from rust, and incrementally replace c/c++ code with rust - it has sensible modern syntax that makes your code clearer and more concise - support for scientific computing is getting better all the time (matrix algebra libraries, built-in simd, safe concurrency) - it has a really friendly and active community - it’s production-ready: servo, the new rendering core in firefox, is built entirely in rust performance to start with, as a compiled language rust executes much faster than a (pseudo-)interpreted language like python or r; the price you pay for this is time spent compiling during development. however, having a compile step also allows the language to enforce certain guarantees, such as type-correctness and memory safety, which between them prevent whole classes of bugs from even being possible. unlike go (which, like many higher-level languages, uses a garbage collector), rust handles memory safety at compile time through the concepts of ownership and borrowing. these can take some getting used to and were a big source of frustration when i was first figuring out the language, but ultimately contribute to rust’s reliably-fast performance. performance can be unpredictable in a garbage-collected language because you can’t be sure when the gc is going to run, and you need to understand it really well to stand a chance of optimising it if it becomes a problem. on the other hand, code that has the potential to be unsafe will result in compilation errors in rust. there are a number of benchmarks (example) that show rust’s performance on a par with idiomatic c & c++ code, something that very few languages can boast.
helpful error messages because beginner rust programmers often get compile errors, it’s really important that those errors are easy to interpret and fix, and rust is great at this. not only does it tell you what went wrong, but wherever possible it prints out your code annotated with arrows to show exactly where the error is, and makes specific suggestions for how to fix the error, which usually turn out to be correct. it also has a nice suite of warnings (things that don’t cause compilation to fail but may indicate bugs) that are just as informative, and this can be extended even further by using the clippy linting tool to further analyse your code.

```
warning: unused variable: `y`
 --> hello.rs
  |
  |     let y = x;
  |         ^
  |
  = note: #[warn(unused_variables)] on by default
  = note: to avoid this warning, consider using `_y` instead
```

easy to integrate with other languages if you’re like me, you’ll probably only use a low-level language for performance-critical code that you can call from a high-level language, and this is an area where rust shines. most programmers will turn to c, c++ or fortran for this because they have a well established abi (application binary interface) which can be understood by languages like python and r. in rust, it’s trivial to make a c-compatible shared library, and the standard library includes extra features for working with c types. that also means that existing c code can be incrementally ported to rust: see remacs for an example. on top of this, there are projects like rust-cpython and pyo3, which provide macros and structures that wrap the python c api to let you build python modules in rust with minimal glue code; rustr does a similar job for r. nice language features rust has some really nice features, which let you write efficient, concise and correct code.
several of these feel particularly comfortable because they remind me of similar things available in haskell, including:

- enums, a super-powered combination of c enums and unions (similar to haskell’s algebraic data types) that enable some really nice code with no runtime cost
- generics and traits that let you get more done with less code
- pattern matching, a kind of case statement that lets you extract parts of structs, tuples & enums and do all sorts of other clever things
- lazy computation based on an iterator pattern, for efficient processing of lists of things: you can do for item in list { ... } instead of the c-style use of an index, or you can use higher-order functions like map and filter
- functions/closures as first-class citizens

scientific computing although it’s a general-purpose language and not designed specifically for scientific computing, rust’s support is improving all the time. there are some interesting matrix algebra libraries available, and built-in simd is incoming. the memory safety features also work to ensure thread safety, so it’s harder to write concurrency bugs. you should be able to use your favourite mpi implementation too, and there’s at least one attempt to portably wrap mpi in a more rust-like way. active development and friendly community one of the things you notice straight away is how active and friendly the rust community is. there are several irc channels on irc.mozilla.org including #rust-beginners, which is a great place to get help. the compiler is under constant but carefully-managed development, so that new features are landing all the time but without breaking existing code. and the fabulous cargo build tool and crates.io are enabling the rapid growth of a healthy ecosystem of open source libraries that you can use to write less code yourself. summary so, next time you need a compiled language to speed up hotspots in your code, try rust. i promise you won’t regret it!
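as a postscript to the c-compatibility point above: the consumer side of a c abi can be seen from python using the standard library’s ctypes module. a c-compatible shared library built from rust (crate-type "cdylib") is loaded in exactly the same way; this sketch just uses the system maths library as a stand-in, since it is available almost everywhere.

```python
# calling a c-abi shared library from python with ctypes. a rust cdylib
# exposing extern "C" functions would be loaded the same way; libm is
# used here only as a universally-available stand-in.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.argtypes = [ctypes.c_double]  # declare the c signature
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))
```

declaring argtypes/restype matters: without them ctypes assumes int arguments and return values, which silently corrupts floating-point calls.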
julia actually allows you to call c and fortran functions as a first-class language feature. actually, since c++11 there’s for (auto item : list) { ... } but still… reflections on #aoc2017 trees reflected in a lake (joshua reddekopp on unsplash) it seems like ages ago, but way back in november 2017 i committed to completing advent of code. i managed it all, and it was fun! all of my code is available on github if you’re interested in seeing what i did, and i managed to get out a blog post for every one with a bit more commentary, which you can see in the series list above. how did i approach it? i’ve not really done any serious programming challenges before. i don’t get to write a lot of code at the moment, so all i wanted from aoc was an excuse to do some proper problem-solving. i never really intended to take a polyglot approach, though i did think that i might use mainly python with a bit of haskell. in the end, though, i used python, haskell and rust several times each, plus go, c++, ruby, julia and coconut. for the most part, my priorities were getting the right answer, followed by writing readable code. i didn’t specifically focus on performance but did try to avoid falling into traps that i knew about. what did i learn? i found python the easiest to get on with: it’s the language i know best and although i can’t always remember exact method names and parameters, i know what’s available and where to look to remind myself, as well as most of the common idioms and some performance traps to avoid. python was therefore the language that let me focus most on solving the problem itself. c++ and ruby were more challenging, and it was harder to write good idiomatic code, but i can still remember quite a lot. haskell i haven’t used since university, and just like back then i really enjoyed working out how to solve problems in a functional style while still being readable and efficient (not always something i achieved…).
i learned a lot about core haskell concepts like monads & functors, and i’m really amazed by the way the haskell community and ecosystem have grown up in the last decade. i also wanted to learn at least one modern, memory-safe compiled language, so i tried both go and rust. both seem like useful languages, but rust really intrigued me with its conceptual similarities to both haskell and c++ and its promise of memory safety without a garbage collector. i struggled a lot initially with the “borrow checker” (the component that enforces memory safety at compile time) but eventually started thinking in terms of ownership and lifetimes, after which things became easier. the rust community seems really vibrant and friendly too. what next? i really want to keep this up, so i’m going to look out some more programming challenges (project euler looks interesting). it turns out there’s a regular code dojo meetup in leeds, so hopefully i’ll try that out too. i’d like to do more realistic data-science work, so i’ll be taking a closer look at kaggle too, and figuring out how to do a bit more analysis at work. i’m also feeling motivated to find an open source project to contribute to and/or release a project of my own, so we’ll see if that goes anywhere! i’ve always found the advice to “scratch your own itch” difficult to follow because everything i think of myself has already been done better. most of the projects i use enough to want to contribute to tend to be pretty well developed with big communities, and any bugs that might be accessible to me will be picked off and fixed before i have a chance to get started. maybe it’s time to get over myself and just reimplement something that already exists, just for the fun of it! the halting problem — python — #adventofcode day 25 today’s challenge takes us back to a bit of computing history: a good old-fashioned turing machine. → full code on github !!!
commentary today’s challenge was a nice bit of nostalgia, taking me back to my university days learning about the theory of computing. turing machines are a classic bit of computing theory, and are provably able to compute any value that is possible to compute: a value is computable if and only if a turing machine can be written that computes it (though in practice anything non-trivial is mind-bendingly hard to write as a tm). a bit of a library-fest today, compared to other days!

```python
from collections import deque, namedtuple
from collections.abc import Iterator
from tqdm import tqdm
import re
import fileinput as fi
```

these regular expressions are used to parse the input that defines the transition table for the machine.

```python
re_istate = re.compile(r'Begin in state (?P<state>\w+)\.')
re_runtime = re.compile(
    r'Perform a diagnostic checksum after (?P<steps>\d+) steps.')
re_statetrans = re.compile(
    r'In state (?P<state>\w+):\n'
    r'  If the current value is (?P<read0>\d+):\n'
    r'    - Write the value (?P<write0>\d+)\.\n'
    r'    - Move one slot to the (?P<move0>left|right).\n'
    r'    - Continue with state (?P<next0>\w+).\n'
    r'  If the current value is (?P<read1>\d+):\n'
    r'    - Write the value (?P<write1>\d+)\.\n'
    r'    - Move one slot to the (?P<move1>left|right).\n'
    r'    - Continue with state (?P<next1>\w+).')

move = {'left': -1, 'right': 1}
```

a namedtuple to provide some sugar when using a transition rule.

```python
rule = namedtuple('rule', 'write move next_state')
```

the turingmachine class does all the work.

```python
class turingmachine:
    def __init__(self, program=None):
        self.tape = deque()
        self.transition_table = {}
        self.state = None
        self.runtime = self.steps = self.pos = self.offset = 0
        if program is not None:
            self.load(program)

    def __str__(self):
        return f'current: {self.state}; steps: {self.steps} of {self.runtime}'
```

some jiggery-pokery to allow us to use self[pos] to reference an infinite tape.

```python
    def __getitem__(self, i):
        i += self.offset
        if i < 0 or i >= len(self.tape):
            return 0
        else:
            return self.tape[i]

    def __setitem__(self, i, x):
        i += self.offset
        if i >= 0 and i < len(self.tape):
            self.tape[i] = x
        elif i == -1:
            self.tape.appendleft(x)
            self.offset += 1
        elif i == len(self.tape):
            self.tape.append(x)
        else:
            raise IndexError('tried to set position off end of tape')
```

parse the program and set up the transition table.

```python
    def load(self, program):
        if isinstance(program, Iterator):
            program = ''.join(program)
        match = re_istate.search(program)
        self.state = match['state']
        match = re_runtime.search(program)
        self.runtime = int(match['steps'])
        for match in re_statetrans.finditer(program):
            self.transition_table[match['state']] = {
                int(match['read0']): rule(write=int(match['write0']),
                                          move=move[match['move0']],
                                          next_state=match['next0']),
                int(match['read1']): rule(write=int(match['write1']),
                                          move=move[match['move1']],
                                          next_state=match['next1']),
            }
```

run the program for the required number of steps (given by self.runtime). tqdm isn’t in the standard library but it should be: it shows a lovely text-mode progress bar as we go.

```python
    def run(self):
        for _ in tqdm(range(self.runtime),
                      desc='running', unit='steps', unit_scale=True):
            read = self[self.pos]
            rule = self.transition_table[self.state][read]
            self[self.pos] = rule.write
            self.pos += rule.move
            self.state = rule.next_state
```

calculate the “diagnostic checksum” required for the answer.

```python
    @property
    def checksum(self):
        return sum(self.tape)
```

aaand go!

```python
machine = turingmachine(fi.input())
machine.run()
print('checksum:', machine.checksum)
```

electromagnetic moat — rust — #adventofcode day 24 today’s challenge, the penultimate, requires us to build a bridge capable of reaching across to the cpu, our final destination. → full code on github !!!
commentary we have a finite number of components that fit together in a restricted way from which to build a bridge, and we have to work out both the strongest and the longest bridge we can build. the most obvious way to do this is to recursively build every possible bridge and select the best, but that’s an O(n!) algorithm that could blow up quickly, so might as well go with a nice fast language! might have to try this in haskell too, because it’s the type of algorithm that lends itself naturally to a pure functional approach. i feel like i've applied some of the things i learned in previous challenges i used rust for: i spent less time mucking about with ownership and made better use of various language features, including structs and iterators. i'm rather pleased with how my learning of this language is progressing. i'm definitely overusing `Option::unwrap` at the moment though: this is a lazy way to deal with `Option` results and will panic if the result is not what's expected. i'm not sure whether i need to be cloning the components `Vec` either, or whether i could just be passing iterators around. first, we import some bits of standard library and define some data types. the BridgeResult struct lets us use the same algorithm for both parts of the challenge and simply change the value used to calculate the maximum.

```rust
use std::io;
use std::fmt;
use std::io::BufRead;

#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
struct Component(u8, u8);

#[derive(Debug, Copy, Clone, Default)]
struct BridgeResult {
    strength: u16,
    length: u16,
}

impl Component {
    fn from_str(s: &str) -> Component {
        let parts: Vec<&str> = s.split('/').collect();
        assert!(parts.len() == 2);
        Component(parts[0].parse().unwrap(), parts[1].parse().unwrap())
    }

    fn fits(self, port: u8) -> bool {
        self.0 == port || self.1 == port
    }

    fn other_end(self, port: u8) -> u8 {
        if self.0 == port {
            return self.1;
        } else if self.1 == port {
            return self.0;
        } else {
            panic!("{:?} doesn't fit port {}", self, port);
        }
    }

    fn strength(self) -> u16 {
        self.0 as u16 + self.1 as u16
    }
}

impl fmt::Display for BridgeResult {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "(s: {}, l: {})", self.strength, self.length)
    }
}
```

best_bridge calculates the length and strength of the “best” bridge that can be built from the remaining components and fits the required port. whether this is based on strength or length is given by the key parameter, which is passed to iter.max_by_key.

```rust
fn best_bridge<F>(port: u8, key: &F, components: &Vec<Component>)
                  -> Option<BridgeResult>
    where F: Fn(&BridgeResult) -> u16
{
    if components.len() == 0 {
        return None;
    }
    components.iter()
        .filter(|c| c.fits(port))
        .map(|c| {
            let b = best_bridge(c.other_end(port), key,
                                &components.clone().into_iter()
                                    .filter(|x| x != c).collect())
                .unwrap_or_default();
            BridgeResult { strength: c.strength() + b.strength,
                           length: 1 + b.length }
        })
        .max_by_key(key)
}
```

now all that remains is to read the input and calculate the result. i was rather pleasantly surprised to find that in spite of my pessimistic predictions about efficiency, when compiled with optimisations turned on this terminates in about a second on my laptop.

```rust
fn main() {
    let stdin = io::stdin();
    let components: Vec<_> = stdin.lock()
        .lines()
        .map(|l| Component::from_str(&l.unwrap()))
        .collect();

    match best_bridge(0, &|b: &BridgeResult| b.strength, &components) {
        Some(b) => println!("strongest bridge is {}", b),
        None => println!("no strongest bridge found"),
    };
    match best_bridge(0, &|b: &BridgeResult| b.length, &components) {
        Some(b) => println!("longest bridge is {}", b),
        None => println!("no longest bridge found"),
    };
}
```

coprocessor conflagration — haskell — #adventofcode day 23 today’s challenge requires us to understand why a coprocessor is working so hard to perform an apparently simple calculation. → full code on github !!!
commentary today’s problem is based on an assembly-like language very similar to day 18, so i went back and adapted my code from that, which works well for the first part. i’ve also incorporated some advice from /r/haskell, and cleaned up all warnings shown by the -wall compiler flag and the hlint tool. part 2 requires the algorithm to run with much larger inputs, and since some analysis shows that it's a polynomial-time algorithm with a huge constant factor, it gets intractable pretty fast. there are several approaches to this. first up, if you have a fast enough processor and an efficient enough implementation i suspect that the simulation would probably terminate eventually, but that would likely still take hours: not good enough. i also thought about doing some peephole optimisations on the instructions, but the last time i did compiler optimisation was during my degree, so i wasn't really sure where to start. what i ended up doing was actually analysing the input code by hand to figure out what it was doing, and then just doing that calculation in a sensible way. i'd like to say i managed this on my own (and i like to think i would have) but i did get some tips on [/r/adventofcode](https://reddit.com/r/adventofcode).
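the optimisation — replacing the simulated instruction-by-instruction loop with a direct count of the composite numbers in an arithmetic progression — can be sketched in a few lines of python. the bounds used here are arbitrary example values, not the puzzle input:

```python
# count the composite numbers in a, a+k, ..., b by trial division --
# the "sensible way" to do the calculation that the assembly performs
# one subtraction at a time. the bounds in the call are made-up examples.
from math import isqrt

def is_composite(n):
    return any(n % d == 0 for d in range(2, isqrt(n) + 1))

def optimised_calc(a, b, k):
    return sum(1 for n in range(a, b + 1, k) if is_composite(n))

print(optimised_calc(100, 1000, 17))
```

trial division up to the square root is plenty here: the progression only contains on the order of a thousand values, so the whole count takes microseconds where the simulated version takes hours.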
the majority of this code is simply a cleaned-up version of day 18, with some tweaks to accommodate the different instruction set:

```haskell
module Main where

import qualified Data.Vector as V
import qualified Data.Map.Strict as M
import Control.Monad.State.Strict
import Text.ParserCombinators.Parsec hiding (State)

type Register = Char
type Value = Int
type Argument = Either Value Register

data Instruction = Set Register Argument
                 | Sub Register Argument
                 | Mul Register Argument
                 | Jnz Argument Argument
                 deriving Show

type Program = V.Vector Instruction

data Result = Cont | Halt deriving (Eq, Show)

type Registers = M.Map Char Int

data Machine = Machine { dRegisters :: Registers
                       , dPtr :: !Int
                       , dMulCount :: !Int
                       , dProgram :: Program }

instance Show Machine where
  show d = show (dRegisters d) ++ " @" ++ show (dPtr d)
           ++ " ×" ++ show (dMulCount d)

defaultMachine :: Machine
defaultMachine = Machine M.empty 0 0 V.empty

type MachineState = State Machine

program :: GenParser Char st Program
program = do
    instructions <- endBy instruction eol
    return $ V.fromList instructions
  where
    instruction = try (regOp "set" Set)
                  <|> regOp "sub" Sub
                  <|> regOp "mul" Mul
                  <|> jump "jnz" Jnz
    regOp n c = do
      string n >> spaces
      val <- oneOf "abcdefgh"
      secondArg c val
    jump n c = do
      string n >> spaces
      val <- regOrVal
      secondArg c val
    secondArg c val = do
      spaces
      val' <- regOrVal
      return $ c val val'
    regOrVal = register <|> value
    register = do
      name <- lower
      return $ Right name
    value = do
      val <- many1 $ oneOf "-0123456789"
      return $ Left $ read val
    eol = char '\n'

parseProgram :: String -> Either ParseError Program
parseProgram = parse program ""

getReg :: Char -> MachineState Int
getReg r = do
  st <- get
  return $ M.findWithDefault 0 r (dRegisters st)

putReg :: Char -> Int -> MachineState ()
putReg r v = do
  st <- get
  let current = dRegisters st
      new = M.insert r v current
  put $ st { dRegisters = new }

modReg :: (Int -> Int -> Int) -> Char -> Argument -> MachineState ()
modReg op r v = do
  u <- getReg r
  v' <- getRegOrVal v
  putReg r (u `op` v')
  incPtr

getRegOrVal :: Argument -> MachineState Int
getRegOrVal = either return getReg

addPtr :: Int -> MachineState ()
addPtr n = do
  st <- get
  put $ st { dPtr = n + dPtr st }

incPtr :: MachineState ()
incPtr = addPtr 1

execInst :: Instruction -> MachineState ()
execInst (Set reg val) = do
  newVal <- getRegOrVal val
  putReg reg newVal
  incPtr
execInst (Mul reg val) = do
  result <- modReg (*) reg val
  st <- get
  put $ st { dMulCount = 1 + dMulCount st }
  return result
execInst (Sub reg val) = modReg (-) reg val
execInst (Jnz val1 val2) = do
  test <- getRegOrVal val1
  jump <- if test /= 0 then getRegOrVal val2 else return 1
  addPtr jump

execNext :: MachineState Result
execNext = do
  st <- get
  let prog = dProgram st
      p = dPtr st
  if p >= length prog
    then return Halt
    else do
      execInst (prog V.! p)
      return Cont

runUntilTerm :: MachineState ()
runUntilTerm = do
  result <- execNext
  unless (result == Halt) runUntilTerm
```

this implements the actual calculation: counting the non-primes between two bounds taken from my puzzle input, in steps of 17:

```haskell
optimisedCalc :: Int -> Int -> Int -> Int
optimisedCalc a b k = sum $ map (const 1) $ filter notPrime [a,a+k..b]
  where notPrime n = 0 `elem` map (mod n)
                         [2..(floor $ sqrt (fromIntegral n :: Double))]

main :: IO ()
main = do
  input <- getContents
  case parseProgram input of
    Right prog -> do
      let c = defaultMachine { dProgram = prog }
          (_, c') = runState runUntilTerm c
      putStrLn $ show (dMulCount c') ++ " multiplications made"
      -- the bounds below are placeholders: the real values come from
      -- constants in the puzzle input
      putStrLn $ "calculation result: " ++ show (optimisedCalc 105700 122700 17)
    Left e -> print e
```

sporifica virus — rust — #adventofcode day 22 today’s challenge has us helping to clean up (or spread, i can’t really tell) an infection of the “sporifica” virus. → full code on github !!! commentary i thought i’d have another play with rust, as its haskell-like features resonate with me at the moment.
i struggled quite a lot with the rust concepts of ownership and borrowing, and this is a cleaned-up version of the code based on some good advice from the folks on /r/rust.

```rust
use std::io;
use std::env;
use std::io::BufRead;
use std::collections::HashMap;

#[derive(PartialEq, Clone, Copy, Debug)]
enum Direction { Up, Right, Down, Left }

#[derive(PartialEq, Clone, Copy, Debug)]
enum Infection { Clean, Weakened, Infected, Flagged }

use self::Direction::*;
use self::Infection::*;

type Grid = HashMap<(isize, isize), Infection>;

fn turn_left(d: Direction) -> Direction {
    match d { Up => Left, Right => Up, Down => Right, Left => Down }
}

fn turn_right(d: Direction) -> Direction {
    match d { Up => Right, Right => Down, Down => Left, Left => Up }
}

fn turn_around(d: Direction) -> Direction {
    match d { Up => Down, Right => Left, Down => Up, Left => Right }
}

fn make_move(d: Direction, x: isize, y: isize) -> (isize, isize) {
    match d {
        Up => (x - 1, y),
        Right => (x, y + 1),
        Down => (x + 1, y),
        Left => (x, y - 1),
    }
}

fn basic_step(grid: &mut Grid, x: &mut isize, y: &mut isize,
              d: &mut Direction) -> usize {
    let mut infect = 0;
    let current = match grid.get(&(*x, *y)) {
        Some(v) => *v,
        None => Clean,
    };
    if current == Infected {
        *d = turn_right(*d);
    } else {
        *d = turn_left(*d);
        infect = 1;
    };
    grid.insert((*x, *y), match current {
        Clean => Infected,
        Infected => Clean,
        x => panic!("unexpected infection state {:?}", x),
    });
    let new_pos = make_move(*d, *x, *y);
    *x = new_pos.0;
    *y = new_pos.1;
    infect
}

fn nasty_step(grid: &mut Grid, x: &mut isize, y: &mut isize,
              d: &mut Direction) -> usize {
    let mut infect = 0;
    let new_state: Infection;
    let current = match grid.get(&(*x, *y)) {
        Some(v) => *v,
        None => Infection::Clean,
    };
    match current {
        Clean => { *d = turn_left(*d); new_state = Weakened; },
        Weakened => { new_state = Infected; infect = 1; },
        Infected => { *d = turn_right(*d); new_state = Flagged; },
        Flagged => { *d = turn_around(*d); new_state = Clean; },
    };
    grid.insert((*x, *y), new_state);
    let new_pos = make_move(*d, *x, *y);
    *x = new_pos.0;
    *y = new_pos.1;
    infect
}

fn virus_infect<F>(mut grid: Grid, mut step: F,
                   mut x: isize, mut y: isize, mut d: Direction,
                   n: usize) -> usize
    where F: FnMut(&mut Grid, &mut isize, &mut isize, &mut Direction) -> usize,
{
    (0..n).map(|_| step(&mut grid, &mut x, &mut y, &mut d))
          .sum()
}

fn main() {
    let args: Vec<String> = env::args().collect();
    let n_basic: usize = args[1].parse().unwrap();
    let n_nasty: usize = args[2].parse().unwrap();

    let stdin = io::stdin();
    let lines: Vec<String> = stdin.lock()
        .lines()
        .map(|x| x.unwrap())
        .collect();

    let mut grid: Grid = HashMap::new();
    let x0 = (lines.len() / 2) as isize;
    let y0 = (lines[0].len() / 2) as isize;
    for (i, line) in lines.iter().enumerate() {
        for (j, c) in line.chars().enumerate() {
            grid.insert((i as isize, j as isize),
                        match c { '#' => Infected, _ => Clean });
        }
    }

    let basic_steps = virus_infect(grid.clone(), basic_step, x0, y0, Up, n_basic);
    println!("basic: infected {} times", basic_steps);
    let nasty_steps = virus_infect(grid, nasty_step, x0, y0, Up, n_nasty);
    println!("nasty: infected {} times", nasty_steps);
}
```

fractal art — python — #adventofcode day 21 today’s challenge asks us to assist an artist building fractal patterns from a rulebook. → full code on github !!! commentary another fairly straightforward algorithm: the really tricky part was breaking the pattern up into chunks and rejoining it again.
i could probably have done that more efficiently, and would have needed to if i had to go for a few more iterations and the grid grows with every iteration and gets big fast. still behind on the blog posts… import fileinput as fi from math import sqrt from functools import reduce, partial import operator initial_pattern = (( , , ), ( , , ), ( , , )) decode = [&# ;.&# ;, &# ;#&# ;] encode = {&# ;.&# ;: , &# ;#&# ;: } concat = partial(reduce, operator.concat) def rotate(p): size = len(p) return tuple(tuple(p[i][j] for i in range(size)) for j in range(size - , - , - )) def flip(p): return tuple(p[i] for i in range(len(p) - , - , - )) def permutations(p): yield p yield flip(p) for _ in range( ): p = rotate(p) yield p yield flip(p) def print_pattern(p): print(&# ;-&# ; * len(p)) for row in p: print(&# ; &# ;.join(decode[x] for x in row)) print(&# ;-&# ; * len(p)) def build_pattern(s): return tuple(tuple(encode[c] for c in row) for row in s.split(&# ;/&# ;)) def build_pattern_book(lines): book = {} for line in lines: source, target = line.strip().split(&# ; => &# ;) for rotation in permutations(build_pattern(source)): book[rotation] = build_pattern(target) return book def subdivide(pattern): size = if len(pattern) % == else n = len(pattern) // size return (tuple(tuple(pattern[i][j] for j in range(y * size, (y + ) * size)) for i in range(x * size, (x + ) * size)) for x in range(n) for y in range(n)) def rejoin(parts): n = int(sqrt(len(parts))) size = len(parts[ ]) return tuple(concat(parts[i + k][j] for i in range(n)) for k in range( , len(parts), n) for j in range(size)) def enhance_once(p, book): return rejoin(tuple(book[part] for part in subdivide(p))) def enhance(p, book, n, progress=none): for _ in range(n): p = enhance_once(p, book) return p book = build_pattern_book(fi.input()) intermediate_pattern = enhance(initial_pattern, book, ) print(&# ;after iterations:&# ;, sum(sum(row) for row in intermediate_pattern)) final_pattern = enhance(intermediate_pattern, book, ) 
print(&# ;after iterations:&# ;, sum(sum(row) for row in final_pattern)) particle swarm — python — #adventofcode day today’s challenge finds us simulating the movements of particles in space. → full code on github !!! commentary back to python for this one, another relatively straightforward simulation, although it’s easier to calculate the answer to part than to simulate. import fileinput as fi import numpy as np import re first we parse the input into d arrays: using numpy enables us to do efficient arithmetic across the whole set of particles in one go. particle_re = re.compile(r&# ;p=<(-?\d+),(-?\d+),(-?\d+)>, &# ; r&# ;v=<(-?\d+),(-?\d+),(-?\d+)>, &# ; r&# ;a=<(-?\d+),(-?\d+),(-?\d+)>&# ;) def parse_input(lines): x = [] v = [] a = [] for l in lines: m = particle_re.match(l) x.append([int(x) for x in m.group( , , )]) v.append([int(x) for x in m.group( , , )]) a.append([int(x) for x in m.group( , , )]) return (np.arange(len(x)), np.array(x), np.array(v), np.array(a)) i, x, v, a = parse_input(fi.input()) now we can calculate which particle will be closest to the origin in the long-term: this is simply the particle with the smallest acceleration. it turns out that several have the same acceleration, so of these, the one we want is the one with the lowest starting velocity. this is only complicated slightly by the need to get the number of the particle rather than its other information, hence the need to use numpy.argmin. a_abs = np.sum(np.abs(a), axis= ) a_min = np.min(a_abs) a_i = np.squeeze(np.argwhere(a_abs == a_min)) closest = i[a_i[np.argmin(np.sum(np.abs(v[a_i]), axis= ))]] print(&# ;closest: &# ;, closest) now we define functions to simulate collisions between particles. we have to use the return_index and return_counts options to numpy.unique to be able to get rid of all the duplicate positions (the standard usage is to keep one of each duplicate). 
```python
def resolve_collisions(x, v, a):
    (_, i, c) = np.unique(x, return_index=True, return_counts=True, axis=0)
    i = i[c == 1]
    return x[i], v[i], a[i]
```

the termination criterion for this loop is an interesting aspect: the most robust to my mind seems to be that eventually the particles will end up sorted, in terms of distance from the origin, in order of their initial acceleration, so you could check for this, but that's pretty computationally expensive. in the end, all that was needed was a bit of trial and error: terminating arbitrarily after 1,000 iterations seems to work! in fact, all the collisions are over after a relatively small number of iterations for my input, but there was always the possibility that two particles with very slightly different accelerations would eventually intersect much later.

```python
def simulate_collisions(x, v, a, iterations=1000):
    for _ in range(iterations):
        v += a
        x += v
        x, v, a = resolve_collisions(x, v, a)
    return len(x)


print('remaining particles:', simulate_collisions(x, v, a))
```

a series of tubes — rust — #adventofcode day 19

today's challenge asks us to help a network packet find its way.

→ full code on github

!!! commentary today's challenge was fairly straightforward, following an ascii-art path, so i thought i'd give rust another try. i'm a bit behind on the blog posts, so i'm presenting the code below without any further commentary. i'm not really convinced this is good idiomatic rust, and it was interesting turning a set of strings into a 2d array of characters, because there are both u8 (byte) and char types to deal with.
```rust
use std::io;
use std::io::BufRead;

const ALPHA: &'static str = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";

fn change_direction(dia: &Vec<Vec<u8>>, x: usize, y: usize, dx: &mut i64, dy: &mut i64) {
    assert_eq!(dia[x][y], b'+');
    if dx.abs() == 1 {
        *dx = 0;
        if y + 1 < dia[x].len()
            && (dia[x][y + 1] == b'-' || ALPHA.contains(dia[x][y + 1] as char)) {
            *dy = 1;
        } else if dia[x][y - 1] == b'-' || ALPHA.contains(dia[x][y - 1] as char) {
            *dy = -1;
        } else {
            panic!("huh? {} {}", dia[x][y + 1] as char, dia[x][y - 1] as char);
        }
    } else {
        *dy = 0;
        if x + 1 < dia.len()
            && (dia[x + 1][y] == b'|' || ALPHA.contains(dia[x + 1][y] as char)) {
            *dx = 1;
        } else if dia[x - 1][y] == b'|' || ALPHA.contains(dia[x - 1][y] as char) {
            *dx = -1;
        } else {
            panic!("huh?");
        }
    }
}

fn follow_route(dia: Vec<Vec<u8>>) -> (String, i64) {
    let mut x: i64 = 0;
    let mut y: i64;
    let mut dx: i64 = 1;
    let mut dy: i64 = 0;
    let mut result = String::new();
    let mut steps = 1;

    match dia[0].iter().position(|x| *x == b'|') {
        Some(i) => y = i as i64,
        None => panic!("could not find '|' in first row"),
    }

    loop {
        x += dx;
        y += dy;
        match dia[x as usize][y as usize] {
            b'A'...b'Z' => result.push(dia[x as usize][y as usize] as char),
            b'+' => change_direction(&dia, x as usize, y as usize, &mut dx, &mut dy),
            b' ' => return (result, steps),
            _ => (),
        }
        steps += 1;
    }
}

fn main() {
    let stdin = io::stdin();
    let lines: Vec<Vec<u8>> = stdin.lock().lines()
        .map(|l| l.unwrap().into_bytes())
        .collect();
    let result = follow_route(lines);
    println!("route: {}", result.0);
    println!("steps: {}", result.1);
}
```

duet — haskell — #adventofcode day 18

today's challenge introduces a type of simplified assembly language that includes instructions for message-passing. first we have to simulate a single program (after humorously misinterpreting the snd and rcv instructions as "sound" and "recover"), but then we have to simulate two concurrent processes and the message passing between them.
→ full code on github

!!! commentary well, i really learned a lot from this one! i wanted to get to grips with more complex stuff in haskell, and this challenge seemed like an excellent opportunity to figure out a) parsing with the parsec library and b) using the state monad to keep the state of the simulator. as it turned out, that wasn't all i'd learned: i also ran into an interesting situation whereby lazy evaluation was creating an infinite loop where there shouldn't be one, so i also had to learn how to selectively force strict evaluation of values. i'm pretty sure this isn't the best haskell in the world, but i'm proud of it.

first we have to import a bunch of stuff to use later, but also notice the pragma on the first line, which instructs the compiler to enable the BangPatterns language extension; this will be important later.

```haskell
{-# LANGUAGE BangPatterns #-}
module Main where

import qualified Data.Vector as V
import qualified Data.Map.Strict as M
import Data.List
import Data.Either
import Data.Maybe
import Control.Monad.State.Strict
import Control.Monad.Loops
import Text.ParserCombinators.Parsec hiding (State)
```

first up we define the types that will represent the program code itself.

```haskell
data DuetVal = Reg Char | Val Int deriving Show
type DuetQueue = [Int]
data DuetInstruction = Snd DuetVal
                     | Rcv DuetVal
                     | Jgz DuetVal DuetVal
                     | Set DuetVal DuetVal
                     | Add DuetVal DuetVal
                     | Mul DuetVal DuetVal
                     | Mod DuetVal DuetVal
                     deriving Show
type DuetProgram = V.Vector DuetInstruction
```

next we define the types to hold the machine state, which includes: registers, instruction pointer, send & receive buffers and the program code, plus a counter of the number of sends made (to provide the solution).
```haskell
type DuetRegisters = M.Map Char Int
data Duet = Duet
  { dRegisters :: DuetRegisters
  , dPtr :: Int
  , dSendCount :: Int
  , dRcvBuf :: DuetQueue
  , dSndBuf :: DuetQueue
  , dProgram :: DuetProgram
  }

instance Show Duet where
  show d = show (dRegisters d)
    ++ " @" ++ show (dPtr d)
    ++ " S" ++ show (dSndBuf d)
    ++ " R" ++ show (dRcvBuf d)

defaultDuet = Duet M.empty 0 0 [] [] V.empty
type DuetState = State Duet
```

program is a parser built on the cool parsec library to turn the program text into a haskell format that we can work with: a vector of instructions. yes, using a full-blown parser is overkill here (it would be much simpler just to split each line on whitespace), but i wanted to see how parsec works. i'm using vector here because we need random access to the instruction list, which is much more efficient with vector: O(1) compared with the O(n) of the built-in haskell list ([]) type. parseProgram applies the parser to a string and returns the result.

```haskell
program :: GenParser Char st DuetProgram
program = do
  instructions <- endBy instruction eol
  return $ V.fromList instructions
  where
    instruction = try (oneArg "snd" Snd) <|> oneArg "rcv" Rcv
              <|> twoArg "set" Set <|> twoArg "add" Add
              <|> try (twoArg "mul" Mul) <|> twoArg "mod" Mod
              <|> twoArg "jgz" Jgz
    oneArg n c = do
      string n >> spaces
      val <- regOrVal
      return $ c val
    twoArg n c = do
      string n >> spaces
      val1 <- regOrVal
      spaces
      val2 <- regOrVal
      return $ c val1 val2
    regOrVal = register <|> value
    register = do
      name <- lower
      return $ Reg name
    value = do
      val <- many $ oneOf "-0123456789"
      return $ Val $ read val
    eol = char '\n'

parseProgram :: String -> Either ParseError DuetProgram
parseProgram = parse program ""
```

next up we have some utility functions that sit in the DuetState monad we defined above and perform common manipulations on the state: getting/setting/updating registers, updating the instruction pointer and sending/receiving messages via the relevant queues.
```haskell
getReg :: Char -> DuetState Int
getReg r = do
  st <- get
  return $ M.findWithDefault 0 r (dRegisters st)

putReg :: Char -> Int -> DuetState ()
putReg r v = do
  st <- get
  let current = dRegisters st
      new = M.insert r v current
  put $ st { dRegisters = new }

modReg :: (Int -> Int -> Int) -> Char -> DuetVal -> DuetState Bool
modReg op r v = do
  u <- getReg r
  v' <- getRegOrVal v
  putReg r (u `op` v')
  incPtr
  return False

getRegOrVal :: DuetVal -> DuetState Int
getRegOrVal (Reg r) = getReg r
getRegOrVal (Val v) = return v

addPtr :: Int -> DuetState ()
addPtr n = do
  st <- get
  put $ st { dPtr = n + dPtr st }

incPtr = addPtr 1

send :: Int -> DuetState ()
send v = do
  st <- get
  put $ st { dSndBuf = dSndBuf st ++ [v], dSendCount = dSendCount st + 1 }

recv :: DuetState (Maybe Int)
recv = do
  st <- get
  case dRcvBuf st of
    (x:xs) -> do
      put $ st { dRcvBuf = xs }
      return $ Just x
    [] -> return Nothing
```

execInst implements the logic for each instruction. it returns False as long as the program can continue, but True if the program tries to receive from an empty buffer.

```haskell
execInst :: DuetInstruction -> DuetState Bool
execInst (Set (Reg reg) val) = do
  newVal <- getRegOrVal val
  putReg reg newVal
  incPtr
  return False
execInst (Mul (Reg reg) val) = modReg (*) reg val
execInst (Add (Reg reg) val) = modReg (+) reg val
execInst (Mod (Reg reg) val) = modReg mod reg val
execInst (Jgz val1 val2) = do
  test <- getRegOrVal val1
  jump <- if test > 0 then getRegOrVal val2 else return 1
  addPtr jump
  return False
execInst (Snd val) = do
  v <- getRegOrVal val
  send v
  incPtr
  return False
execInst (Rcv (Reg r)) = do
  v <- recv
  handle v
  where
    handle :: Maybe Int -> DuetState Bool
    handle (Just x) = putReg r x >> incPtr >> return False
    handle Nothing = return True
execInst x = error $ "execInst not implemented yet for " ++ show x
```

execNext looks up the next instruction and executes it. runUntilWait runs the program until execNext returns True to signal that the wait state has been reached.
```haskell
execNext :: DuetState Bool
execNext = do
  st <- get
  let prog = dProgram st
      p = dPtr st
  if p >= length prog
    then return True
    else execInst (prog V.! p)

runUntilWait :: DuetState ()
runUntilWait = do
  waiting <- execNext
  unless waiting runUntilWait
```

runTwoPrograms handles the concurrent running of two programs, by running first one and then the other to a wait state, then swapping each program's send buffer to the other's receive buffer before repeating. if you look carefully, you'll see a "bang" (!) before the two arguments of the function: runTwoPrograms !d1 !d2. haskell is a lazy language and usually doesn't evaluate a computation until you ask for a result, instead carrying around a "thunk", or plan for how to carry out the computation. sometimes that can be a problem, because the amount of memory your program is using can explode unnecessarily as a long computation turns into a large thunk which isn't evaluated until the very end. that's not the problem here though. what happens here without the bangs is another side-effect of laziness. the exit condition of this recursive function is that a deadlock has been reached: both programs are waiting to receive, but neither has sent anything, so neither can ever continue. the check for this is (null $ dSndBuf d1') && (null $ dSndBuf d2'). as long as the first program has something in its send buffer, the test fails without ever evaluating the second part, which means the result d2' of running the second program is never needed. the function immediately goes to the recursive case and tries to continue the first program again, which immediately returns because it's still waiting to receive. the same thing happens again, and the result is that instead of running the second program to obtain something for the first to receive, we get into an infinite loop trying and failing to continue the first program.
the bang forces both d1 and d2 to be evaluated at the point we recurse, which forces the rest of the computation: running the second program and swapping the send/receive buffers. with that, the evaluation proceeds correctly and we terminate with a result instead of getting into an infinite loop!

```haskell
runTwoPrograms :: Duet -> Duet -> (Int, Int)
runTwoPrograms !d1 !d2
  | (null $ dSndBuf d1') && (null $ dSndBuf d2') = (dSendCount d1', dSendCount d2')
  | otherwise = runTwoPrograms d1'' d2''
  where
    (_, d1') = runState runUntilWait d1
    (_, d2') = runState runUntilWait d2
    d1'' = d1' { dSndBuf = [], dRcvBuf = dSndBuf d2' }
    d2'' = d2' { dSndBuf = [], dRcvBuf = dSndBuf d1' }
```

all that remains to be done now is to run the programs and see how many messages were sent before the deadlock.

```haskell
main = do
  prog <- fmap (fromRight V.empty . parseProgram) getContents
  let d1 = defaultDuet { dProgram = prog, dRegisters = M.fromList [('p', 0)] }
      d2 = defaultDuet { dProgram = prog, dRegisters = M.fromList [('p', 1)] }
      (send1, send2) = runTwoPrograms d1 d2
  putStrLn $ "program 0 sent " ++ show send1 ++ " messages"
  putStrLn $ "program 1 sent " ++ show send2 ++ " messages"
```

spinlock — rust/python — #adventofcode day 17

in today's challenge we deal with a monstrous whirlwind of a program, eating up cpu and memory in equal measure.

→ full code on github (and python driver script)

!!! commentary one of the things i wanted from aoc was an opportunity to try out some popular languages that i don't currently know, including the memory-safe, strongly-typed compiled languages go and rust. realistically though, i'm likely to continue doing most of my programming in python, and use one of these other languages when it has better tools or i need the extra speed. in which case, what i really want to know is how i can call functions written in go or rust from python.
i thought i'd try rust first, as it seems to be designed to be c-compatible and that makes it easy to call from python using [`ctypes`](https://docs.python.org/3/library/ctypes.html). part 1 was another straightforward simulation: translate what the "spinlock" monster is doing into code and run it. it was pretty obvious from the story of this challenge and experience of the last few days that this was going to be another one where the simulation is too computationally expensive for part two, which turns out to be correct.

so, first thing to do is to implement the meat of the solution in rust. spinlock solves the first part of the problem by doing exactly what the monster does. since we only have to go up to 2017 iterations, this is very tractable. the last number we insert is 2017, so we just return the number immediately after that.

```rust
#[no_mangle]
pub extern fn spinlock(n: usize, skip: usize) -> i32 {
    let mut buffer: Vec<i32> = Vec::with_capacity(n + 1);
    buffer.push(0);
    buffer.push(1);
    let mut pos = 1;
    for i in 2..n + 1 {
        pos = (pos + skip + 1) % buffer.len();
        buffer.insert(pos, i as i32);
    }
    pos = (pos + 1) % buffer.len();
    return buffer[pos];
}
```

for the second part, we have to do 50 million iterations instead, which is a lot. given that every time you insert an item in the list it has to move up all the elements after that position, i'm pretty sure the algorithm is O(n²), so it's going to take a lot longer than the ~25,000× you'd expect from the iteration count alone. thankfully, we don't need to build the whole list: we just keep track of where 0 is and what number is immediately after it. there may be a closed-form solution to simply calculate the result, but i couldn't think of it and this is good enough.

```rust
#[no_mangle]
pub extern fn spinlock2(n: usize, skip: usize) -> i32 {
    let mut pos = 0;
    let mut pos_0 = 0;
    let mut after_0 = 0;
    for i in 1..n + 1 {
        pos = (pos + skip + 1) % i;
        if pos == pos_0 + 1 {
            after_0 = i;
        }
        if pos <= pos_0 {
            pos_0 += 1;
        }
    }
    return after_0 as i32;
}
```

now it's time to call this code from python.
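before that, the part-two trick is easy to sanity-check in pure python (a hypothetical equivalent for testing, not the rust actually used): since new values are always inserted after the current position, 0 never moves from index 0, so we only need the current position and the last value dropped into index 1.

```python
def spinlock2(n, skip):
    # track only the insert position and the last value placed at index 1;
    # the buffer itself is never built
    pos, after_zero = 0, None
    for i in range(1, n + 1):
        pos = (pos + skip) % i + 1   # buffer holds i values before this insert
        if pos == 1:
            after_zero = i
    return after_zero

print(spinlock2(9, 3))  # with the example skip of 3, 9 ends up right after 0
```

this agrees with brute-force simulation on small cases, which makes a handy check before trusting the 50-million-step run.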
notice the #[no_mangle] pragmas and pub extern declarations for each function above, which are required to make sure the functions are exported in a c-compatible way. we can build this into a shared library like this:

```
rustc --crate-type=cdylib -o spinlock.so spinlock.rs
```

the python script is as simple as loading this library, reading the puzzle input from the command line and calling the functions. the ctypes module does a lot of magic so that we don't have to worry about converting from python types to native types and back again.

```python
import ctypes
import sys

lib = ctypes.cdll.LoadLibrary('./spinlock.so')
skip = int(sys.argv[1])

print('part 1:', lib.spinlock(2017, skip))
print('part 2:', lib.spinlock2(50_000_000, skip))
```

this is a toy example as far as calling rust from python is concerned, but it's worth noting that already we can play with the parameters to the two rust functions without having to recompile. for more serious work, i'd probably be looking at something like pyo3 to make a proper python module. looks like there's also a very early rust-numpy integration for integrating numerical stuff. you can also do the same thing from julia, which has a ccall function built in:

```julia
ccall((:spinlock, "./spinlock.so"), Int32, (UInt64, UInt64), 2017, 3)
```

my next thing to try might be haskell → python though…

permutation promenade — julia — #adventofcode day 16

today's challenge rather appeals to me as a folk dancer, because it describes a set of instructions for a dance and asks us to work out the positions of the dancing programs after each run through the dance.

→ full code on github

!!! commentary so, part 1 is pretty straightforward: parse the set of instructions, interpret them and keep track of the dancer positions as you go — one time through the dance. however, part 2 asks for the positions after a billion (yes, that's 1,000,000,000) times through the dance.
in hindsight i should have immediately become suspicious, but i thought i'd at least try the brute-force approach first because it was simpler to code. so i give it a try, and after waiting for a while, having a cup of tea etc., it still hasn't terminated. i try reducing the number of iterations: now it terminates, but slowly enough that a spot of arithmetic suggests running the full version will take years. there must be a better way than that! i'm a little embarrassed that i didn't spot the solution immediately (blaming julia) and tried again in python to see if i could get it to terminate quicker. when that didn't work i had to think again. a little further investigation with a while loop shows that in fact the dance position repeats (in the case of my input) after a fairly small number of times through. after that it becomes much quicker! oh, and it was time for a new language, so i wasted some extra time working out the quirks of [julia][].

first, a function to evaluate a single move — for neatness, this dispatches to a dedicated function depending on the type of move, although this isn't really necessary to solve the challenge. ending a function name with a bang (!) is a julia convention to indicate that it has side-effects.

```julia
function eval_move!(move, dancers)
    move_type = move[1]
    params = move[2:end]

    if move_type == 's'      # spin
        eval_spin!(params, dancers)
    elseif move_type == 'x'  # exchange
        eval_exchange!(params, dancers)
    elseif move_type == 'p'  # partner swap
        eval_partner!(params, dancers)
    end
end
```

these take care of the individual moves. parsing the parameters from a string every single time probably isn't ideal, but as it turns out, that optimisation isn't really necessary. note the + 1 in eval_exchange!, which is necessary because julia is one of those crazy languages where indexes start from 1 instead of 0.
these actions are pretty nice to implement, because julia has circshift as a builtin to rotate a list, and allows you to assign to list slices and swap values in place with a single statement.

```julia
function eval_spin!(params, dancers)
    shift = parse(Int, params)
    dancers[1:end] = circshift(dancers, shift)
end

function eval_exchange!(params, dancers)
    i, j = map(x -> parse(Int, x) + 1, split(params, '/'))
    dancers[i], dancers[j] = dancers[j], dancers[i]
end

function eval_partner!(params, dancers)
    a, b = split(params, '/')
    ia = findfirst([x == a for x in dancers])
    ib = findfirst([x == b for x in dancers])
    dancers[ia], dancers[ib] = b, a
end
```

dance! takes a list of moves and takes the dancers once through the dance.

```julia
function dance!(moves, dancers)
    for m in moves
        eval_move!(m, dancers)
    end
end
```

to solve part 1, we simply need to read the moves in, set up the initial positions of the dancers and run the dance through once. join is necessary to a) turn characters into length-1 strings, and b) convert the list of strings back into a single string to print out.

```julia
moves = split(readchomp(STDIN), ',')
dancers = collect(join(c) for c in 'a':'p')
orig_dancers = copy(dancers)

dance!(moves, dancers)
println(join(dancers))
```

part 2 requires a little more work. we run the dance through again and again until we get back to the initial position, saving the intermediate positions in a list. the list now contains every possible position available from that starting point, so we can find position 1 billion by taking 1,000,000,000 modulo the list length (plus 1 because of 1-based indexing) and using that to index into the list to get the final position.

```julia
dance_cycle = [orig_dancers]
while dancers != orig_dancers
    push!(dance_cycle, copy(dancers))
    dance!(moves, dancers)
end

println(join(dance_cycle[1_000_000_000 % length(dance_cycle) + 1]))
```

this terminates on my laptop in seconds rather than years: brute force 0; careful thought 1!
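the same shortcut works for any deterministic repeated transformation. a tiny python sketch of the idea (a toy rotation standing in for the real dance):

```python
def apply_n(step, state, n):
    # repeatedly applying a deterministic step must eventually cycle, so
    # find the cycle length and index into it with n mod cycle_length
    seen = [state]
    current = step(state)
    while current != state:
        seen.append(current)
        current = step(current)
    return seen[n % len(seen)]

# toy "dance": rotate a 4-letter lineup one place left (cycle length 4)
rotate = lambda s: s[1:] + s[:1]
print(apply_n(rotate, "abcd", 1_000_000_000))  # 1e9 % 4 == 0, so "abcd"
```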
dueling generators — rust — #adventofcode day 15

today's challenge introduces two pseudo-random number generators which are trying to agree on a series of numbers. we play the part of the "judge", counting the number of times their numbers agree in the lowest 16 bits.

→ full code on github

ever since i used go to solve one of the earlier days, i've had a hankering to try the other new kid on the memory-safe compiled language block, rust. i found it a bit intimidating at first because the syntax wasn't as close to the c/c++ i'm familiar with and there are quite a few concepts unique to rust, like the use of traits. but i figured it out, so i can tick another language off my to-try list. i also implemented a version in python for comparison: the python version is more concise and easier to read, but the rust version runs a lot faster.

first we include the std::env "crate", which will let us get access to command-line arguments, and define some useful constants for later.

```rust
use std::env;

const M: i64 = 2147483647;
const MASK: i64 = 0b1111111111111111;
const FACTOR_A: i64 = 16807;
const FACTOR_B: i64 = 48271;
```

gen_next generates the next number for a given generator's sequence. gen_next_picky does the same, but for the "picky" generators, only returning values that meet their criteria.

```rust
fn gen_next(factor: i64, current: i64) -> i64 {
    return (current * factor) % M;
}

fn gen_next_picky(factor: i64, current: i64, mult: i64) -> i64 {
    let mut next = gen_next(factor, current);
    while next % mult != 0 {
        next = gen_next(factor, next);
    }
    return next;
}
```

duel runs a single duel, and returns the number of times the generators agreed in the lowest 16 bits (found by doing a binary & with the mask defined above). rust allows functions to be passed as parameters, so we use this to be able to run both versions of the duel using only this one function.
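for comparison, the core of the python version boils down to something like this (a sketch of the approach, not the exact code i ran); the numbers here are the worked example from the puzzle statement, with starting values 65 and 8921:

```python
M = 2147483647
FACTOR_A, FACTOR_B = 16807, 48271

def gen(factor, value, mult=1):
    # lehmer-style generator from the puzzle; mult > 1 gives the picky variant
    while True:
        value = (value * factor) % M
        if value % mult == 0:
            yield value

def duel(n, gen_a, gen_b):
    # count pairs that agree in their lowest 16 bits
    return sum(1 for _, a, b in zip(range(n), gen_a, gen_b)
               if a & 0xFFFF == b & 0xFFFF)

# the puzzle's example finds one match in the first five pairs
print(duel(5, gen(FACTOR_A, 65), gen(FACTOR_B, 8921)))  # → 1
```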
```rust
fn duel<F, G>(n: i64, next_a: F, mut value_a: i64, next_b: G, mut value_b: i64) -> i64
    where F: Fn(i64) -> i64,
          G: Fn(i64) -> i64,
{
    let mut count = 0;
    for _ in 0..n {
        value_a = next_a(value_a);
        value_b = next_b(value_b);
        if (value_a & MASK) == (value_b & MASK) {
            count += 1;
        }
    }
    return count;
}
```

finally, we read the start values from the command line and run the two duels. the expressions that begin |n| are closures (anonymous functions, often called lambdas in other languages) that we use to specify the generator functions for each duel.

```rust
fn main() {
    let args: Vec<String> = env::args().collect();
    let start_a: i64 = args[1].parse().unwrap();
    let start_b: i64 = args[2].parse().unwrap();

    println!(
        "duel 1: {}",
        duel(
            40_000_000,
            |n| gen_next(FACTOR_A, n),
            start_a,
            |n| gen_next(FACTOR_B, n),
            start_b,
        )
    );

    println!(
        "duel 2: {}",
        duel(
            5_000_000,
            |n| gen_next_picky(FACTOR_A, n, 4),
            start_a,
            |n| gen_next_picky(FACTOR_B, n, 8),
            start_b,
        )
    );
}
```

disk defragmentation — haskell — #adventofcode day 14

today's challenge has us helping a disk defragmentation program by identifying contiguous regions of used sectors on a 2d disk.

→ full code on github

!!! commentary wow, today's challenge had a pretty steep learning curve. day 14 was the first to directly reuse code from a previous day: the "knot hash" from day 10. i solved day 10 in haskell, so i thought it would be easier to stick with haskell for today as well. the first part was straightforward, but the second was pretty mind-bending in a pure functional language! i ended up solving it by implementing a [flood fill algorithm][flood]. it's recursive, which is right in haskell's wheelhouse, but i ended up using `data.sequence` instead of the standard list type as its api for indexing is better. i haven't tried it, but i think it will also be a little faster than a naive list-based version. it took a looong time to figure everything out, but i had a day off work to be able to concentrate on it!
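the flood fill at the heart of part two, stripped to its essentials in python (a sketch of the algorithm, not the haskell below): replace every connected cell matching a target value, starting from a seed location.

```python
def flood_fill(grid, x, y, target, replacement):
    # recursively replace every cell connected to (x, y) whose value is
    # `target`; grid is a mutable list of lists
    if not (0 <= x < len(grid) and 0 <= y < len(grid[0])):
        return grid
    if grid[x][y] != target:
        return grid
    grid[x][y] = replacement
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        flood_fill(grid, x + dx, y + dy, target, replacement)
    return grid

g = [[1, 1, 0],
     [0, 1, 0],
     [0, 0, 1]]
flood_fill(g, 0, 0, 1, 2)  # mark the group containing (0, 0)
print(g)                   # the isolated 1 at (2, 2) is untouched
```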
a lot more imports for this solution, as we're exercising a lot more of the standard library.

```haskell
module Main where

import Prelude hiding (length, filter, take)
import Data.Char (ord)
import Data.Sequence
import Data.Foldable hiding (length)
import Data.Ix (inRange)
import Data.Function ((&))
import Data.Maybe (fromJust, mapMaybe, isJust)
import qualified Data.Set as Set
import Text.Printf (printf)
import System.Environment (getArgs)
```

also we'll extract the key bits from day 10 into a module and import that.

```haskell
import KnotHash
```

now we define a few data types to make the code a bit more readable. Sector represents the state of a particular disk sector: either free, used (but unmarked), or used and marked as belonging to a given integer-labelled group. Grid is a 2d matrix of Sector, as a sequence of sequences.

```haskell
data Sector = Free | Used | Mark Int deriving (Eq)

instance Show Sector where
  show Free = "  ."
  show Used = "  #"
  show (Mark i) = printf "%3d" i

type GridRow = Seq Sector
type Grid = Seq (GridRow)
```

some utility functions to make it easier to view the grids (which can be quite large): used for debugging but not in the finished solution.

```haskell
subgrid :: Int -> Grid -> Grid
subgrid n = fmap (take n) . take n

printRow :: GridRow -> IO ()
printRow row = do
  mapM_ (putStr . show) row
  putStr "\n"

printGrid :: Grid -> IO ()
printGrid = mapM_ printRow
```

makeKey generates the hash key for a given row.

```haskell
makeKey :: String -> Int -> String
makeKey input n = input ++ "-" ++ show n
```

stringToGridRow converts a binary string of '0' and '1' characters to a sequence of Sector values.

```haskell
stringToGridRow :: String -> GridRow
stringToGridRow = fromList . map convert
  where
    convert x
      | x == '1' = Used
      | x == '0' = Free
```

makeRow and makeGrid build up the grid to use based on the provided input string.
```haskell
makeRow :: String -> Int -> GridRow
makeRow input n = stringToGridRow
                $ concatMap (printf "%08b")
                $ dense $ fullKnotHash
                $ map ord $ makeKey input n

makeGrid :: String -> Grid
makeGrid input = fromList $ map (makeRow input) [0..127]
```

utility functions to count the number of used and free sectors, to give the solution to part 1.

```haskell
countEqual :: Sector -> Grid -> Int
countEqual x = sum . fmap (length . filter (==x))

countUsed = countEqual Used
countFree = countEqual Free
```

now the real meat begins! findUnmarked finds the location of the next used sector that we haven't yet marked. it returns a Maybe value, which is Just (x, y) if there is still an unmarked block, or Nothing if there's nothing left to mark.

```haskell
findUnmarked :: Grid -> Maybe (Int, Int)
findUnmarked g
  | y == Nothing = Nothing
  | otherwise = Just (fromJust x, fromJust y)
  where
    hasUnmarked row = isJust $ elemIndexL Used row
    x = findIndexL hasUnmarked g
    y = case x of
          Nothing -> Nothing
          Just x' -> elemIndexL Used $ index g x'
```

floodFill implements a very simple recursive flood fill. it takes a target and replacement value and a starting location, and fills in the replacement value for every connected location that currently has the target value. we use it below to replace a connected used region with a marked region.

```haskell
floodFill :: Sector -> Sector -> (Int, Int) -> Grid -> Grid
floodFill t r (x, y) g
  | inRange (0, length g - 1) x && inRange (0, length g - 1) y && elem == t =
      let newRow = update y r row
          newGrid = update x newRow g
      in newGrid & floodFill t r (x+1, y)
                 & floodFill t r (x-1, y)
                 & floodFill t r (x, y+1)
                 & floodFill t r (x, y-1)
  | otherwise = g
  where
    row = g `index` x
    elem = row `index` y
```

markNextGroup looks for an unmarked group and marks it if found; if no more groups are found it returns Nothing. markAllGroups then repeatedly applies markNextGroup until Nothing is returned.
```haskell
markNextGroup :: Int -> Grid -> Maybe Grid
markNextGroup i g = case findUnmarked g of
                      Nothing -> Nothing
                      Just loc -> Just $ floodFill Used (Mark i) loc g

markAllGroups :: Grid -> Grid
markAllGroups g = markAllGroups' 1 g
  where
    markAllGroups' i g = case markNextGroup i g of
                           Nothing -> g
                           Just g' -> markAllGroups' (i+1) g'
```

onlyMarks filters a grid row and returns a list of (possibly duplicated) group numbers in the row.

```haskell
onlyMarks :: GridRow -> [Int]
onlyMarks = mapMaybe getMark . toList
  where
    getMark Free = Nothing
    getMark Used = Nothing
    getMark (Mark i) = Just i
```

finally, countGroups puts all the group numbers into a set to get rid of duplicates and returns the size of the set, i.e. the total number of separate groups.

```haskell
countGroups :: Grid -> Int
countGroups g = Set.size groupSet
  where
    groupSet = foldl' Set.union Set.empty $ fmap rowToSet g
    rowToSet = Set.fromList . toList . onlyMarks
```

as always, every haskell program needs a main function to drive the i/o and produce the actual result.

```haskell
main = do
  input <- fmap head getArgs
  let grid = makeGrid input
      used = countUsed grid
      marked = countGroups $ markAllGroups grid
  putStrLn $ "used sectors: " ++ show used
  putStrLn $ "groups: " ++ show marked
```

packet scanners — haskell — #adventofcode day 13

today's challenge requires us to sneak past a firewall made up of a series of scanners.

→ full code on github

!!! commentary i wasn't really thinking straight when i solved this challenge. i got a solution without too much trouble, but i ended up simulating the step-by-step movement of the scanners. i finally realised that i could calculate whether or not a given scanner was safe at a given time directly with modular arithmetic, and it bugged me so much that i reimplemented the solution. both are given below, the faster one first.

first we introduce some standard library stuff and define some useful utilities.
```haskell
module Main where

import qualified Data.Text as T
import Data.Maybe (mapMaybe)

strip :: String -> String
strip = T.unpack . T.strip . T.pack

splitOn :: String -> String -> [String]
splitOn sep = map T.unpack . T.splitOn (T.pack sep) . T.pack

parseScanner :: String -> (Int, Int)
parseScanner s = (d, r)
  where [d, r] = map read $ splitOn ": " s
```

`traverseFW` does all the hard work: it checks for each scanner whether or not it's safe as we pass through, and returns a list of the severities of each time we're caught. `mapMaybe` is like the standard `map` in many languages, but operates on a list of Haskell `Maybe` values, like a combined map and filter: if the value is `Just x`, `x` gets included in the returned list; if the value is `Nothing`, it gets thrown away.

```haskell
traverseFW :: Int -> [(Int, Int)] -> [Int]
traverseFW delay = mapMaybe caught
  where caught (d, r) = if (d + delay) `mod` (2*(r-1)) == 0
                        then Just (d * r)
                        else Nothing
```

Then the total severity of our passage through the firewall is simply the sum of each individual severity.

```haskell
severity :: [(Int, Int)] -> Int
severity = sum . traverseFW 0
```

But we don't want to know how badly we got caught; we want to know how long to wait before setting off to get through safely. `findDelay` tries traversing the firewall with increasing delay, and returns the delay for the first pass where we predict not getting caught.

```haskell
findDelay :: [(Int, Int)] -> Int
findDelay scanners = head $ filter (null . flip traverseFW scanners) [0..]
```

And finally, we put it all together and calculate and print the result.

```haskell
main = do
  scanners <- fmap (map parseScanner . lines) getContents
  putStrLn $ "Severity: " ++ (show $ severity scanners)
  putStrLn $ "Delay: " ++ (show $ findDelay scanners)
```

I'm not generally bothered about performance for these challenges, but here I'll note that my second attempt runs in a matter of seconds on my laptop:

```
$ time ./13-packet-scanners-redux < 13-input.txt
Severity: …
Delay: …
./13-packet-scanners-redux < 13-input.txt  …s user  …s system  …% cpu  … total
```

Compare that with the first, simulation-based one, which takes nearly a full minute:

```
$ time ./13-packet-scanners < 13-input.txt
Severity: …
Delay: …
./13-packet-scanners < 13-input.txt  …s user  …s system  …% cpu  … total
```

And for good measure, here's the code. Notice the `tick` and `tickOne` functions, which together simulate moving all the scanners by one step; for this to work we have to track the full current state of each scanner, which is easier to read with a Haskell record-based custom data type. `traverseFW` is more complicated because it has to drive the simulation, but the rest of the code is mostly the same.

```haskell
module Main where

import qualified Data.Text as T
import Control.Monad (forM_)

data Scanner = Scanner { depth :: Int
                       , range :: Int
                       , pos   :: Int
                       , dir   :: Int }

instance Show Scanner where
  show (Scanner d r p dir) =
    show d ++ "/" ++ show r ++ "/" ++ show p ++ "/" ++ show dir

strip :: String -> String
strip = T.unpack . T.strip . T.pack

splitOn :: String -> String -> [String]
splitOn sep str = map T.unpack $ T.splitOn (T.pack sep) $ T.pack str

parseScanner :: String -> Scanner
parseScanner s = Scanner d r 0 1
  where [d, r] = map read $ splitOn ": " s

tickOne :: Scanner -> Scanner
tickOne (Scanner depth range pos dir)
  | pos <= 0         = Scanner depth range (pos+1) 1
  | pos >= range - 1 = Scanner depth range (pos-1) (-1)
  | otherwise        = Scanner depth range (pos+dir) dir

tick :: [Scanner] -> [Scanner]
tick = map tickOne

traverseFW :: [Scanner] -> [(Int, Int)]
traverseFW = traverseFW' 0
  where traverseFW' _ [] = []
        traverseFW' layer scanners@((Scanner depth range pos _):rest)
          -- | layer == depth && pos == 0 = (depth*range) + (traverseFW' (layer+1) $ tick rest)
          | layer == depth && pos == 0 = (depth, range) : (traverseFW' (layer+1) $ tick rest)
          | layer == depth && pos /= 0 = traverseFW' (layer+1) $ tick rest
          | otherwise                  = traverseFW' (layer+1) $ tick scanners

severity :: [Scanner] -> Int
severity = sum . map (uncurry (*)) . traverseFW

empty :: [a] -> Bool
empty [] = True
empty _  = False

findDelay :: [Scanner] -> Int
findDelay scanners = delay
  where (delay, _) = head
                   $ filter (empty . traverseFW . snd)
                   $ zip [0..]
                   $ iterate tick scanners

main = do
  scanners <- fmap (map parseScanner . lines) getContents
  putStrLn $ "Severity: " ++ (show $ severity scanners)
  putStrLn $ "Delay: " ++ (show $ findDelay scanners)
```

Digital Plumber — Python — #adventofcode Day 12

Today's challenge has us helping a village of programs who are unable to communicate. We have a list of the communication channels between their houses, and need to sort them out into groups such that each program can communicate with the others in its own group but not with any others. Then we have to calculate the size of the group containing program 0, and the total number of groups.

→ Full code on GitHub

!!! commentary
This is one of those problems where I'm pretty sure that my algorithm isn't close to being the most efficient, but it definitely works! For the sake of solving the challenge that's all that matters, but it still bugs me.

By now I've become used to using `fileinput` to transparently read data either from files given on the command line, or from standard input if no arguments are given.

```python
import fileinput as fi
```

First we make an initial pass through the input data, creating a group for each line, representing the programs on that line (which can communicate with each other). We store this as a Python `set`.

```python
groups = []
for line in fi.input():
    head, rest = line.split(' <-> ')
    group = set([int(head)])
    group.update([int(x) for x in rest.split(', ')])
    groups.append(group)
```

Now we iterate through the groups, starting with the first, and merge any we find that overlap with our current group.

```python
i = 0
while i < len(groups):
    current = groups[i]
```

Each pass through the groups brings more programs into the current group, so we have to go through and check their connections too.
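As an aside, the classic near-linear-time way to do this kind of grouping is a disjoint-set (union-find) structure rather than repeated merge passes. A minimal sketch, using the example connections from the puzzle description (`find` and `group_stats` are names made up here, not from my solution):

```python
def find(parent, x):
    # Follow parent pointers to the set representative, halving the path as we go
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def group_stats(links):
    # links: {program: [directly connected programs]}
    parent = {p: p for p in links}
    for p, conns in links.items():
        for q in conns:
            # Union the two sets by pointing one representative at the other
            parent[find(parent, p)] = find(parent, q)
    roots = [find(parent, p) for p in parent]
    size_of_0 = sum(1 for r in roots if r == find(parent, 0))
    return size_of_0, len(set(roots))

# Example connections from the day 12 puzzle description
links = {0: [2], 1: [1], 2: [0, 3, 4], 3: [2, 4],
         4: [2, 3, 6], 5: [6], 6: [4, 5]}
print(group_stats(links))  # (6, 2)
```

Each program ends up pointing (possibly indirectly) at a single representative per group, so counting distinct representatives gives the number of groups. Back to the merge-pass version, though, which is what I actually wrote.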
We make several merge passes, until we detect that no more merges took place.

```python
    num_groups = len(groups) + 1
    while num_groups > len(groups):
        j = i + 1
        num_groups = len(groups)
```

This inner loop does the actual merging, and deletes each group as it's merged in.

```python
        while j < len(groups):
            if len(current & groups[j]) > 0:
                current.update(groups[j])
                del groups[j]
            else:
                j += 1
    i += 1
```

All that's left to do now is to display the results.

```python
print('Number in group 0:', len([g for g in groups if 0 in g][0]))
print('Number of groups:', len(groups))
```

Hex Ed — Python — #adventofcode Day 11

Today's challenge is to help a program find its child process, which has become lost on a hexagonal grid. We need to follow the path taken by the child (given as input) and calculate the distance it is from home, along with the furthest distance it has been at any point along the path.

→ Full code on GitHub

!!! commentary
I found this one quite interesting in that it was very quick to solve. In fact, I got lucky and my first quick implementation (`max(abs(l))` below) gave the correct answer in spite of missing an obvious not-so-edge case. Thinking about it, there's only a ⅓ chance that the first incorrect implementation would give the wrong answer! The code is shorter, so you get more words today. ☺

There are a number of different coordinate systems on a hexagonal grid (as I discovered while reading up after solving it…). I intuitively went for the system known as "axial" coordinates, where you pick two directions aligned to the grid as your x and y axes: note that these won't be perpendicular. I chose NE/SW as the x axis and SE/NW as y, but there are three other possible choices. That leads to the following definition for the directions, encoded as NumPy arrays because that makes some of the code below neater.
```python
import numpy as np

steps = {d: np.array(v)
         for d, v in [('ne', (1, 0)),
                      ('se', (0, -1)),
                      ('s', (-1, -1)),
                      ('sw', (-1, 0)),
                      ('nw', (0, 1)),
                      ('n', (1, 1))]}
```

`hex_grid_distance`, given a location `l`, calculates the number of steps needed to reach that location from the centre at (0, 0). Notice that we can't simply use the Manhattan distance here because, for example, one step north takes us to (1, 1), which would give a Manhattan distance of 2. Instead, we can see that moving in the N/S direction allows us to increment or decrement both coordinates at the same time:

- If the coordinates have the same sign: move N/S until one of them is zero, then move along the relevant NE or SE axis back to the origin; in this case the number of steps is the greater of the absolute values of the two coordinates.
- If the coordinates have opposite signs: move independently along the NE and SE axes to reduce each to 0; this time the number of steps is the sum of the absolute values of the two coordinates.

```python
def hex_grid_distance(l):
    if sum(np.sign(l)) == 0:  # i.e. opposite signs
        return sum(abs(l))
    else:
        return max(abs(l))
```

Now we can read in the path followed by the child and follow it ourselves, tracking the maximum distance from home along the way.

```python
path = input().strip().split(',')
location = np.array((0, 0))
max_distance = 0

for step in map(steps.get, path):
    location += step
    max_distance = max(max_distance, hex_grid_distance(location))

distance = hex_grid_distance(location)
print('Child process is at', location, 'which is', distance, 'steps away')
print('Greatest distance was', max_distance)
```

Knot Hash — Haskell — #adventofcode Day 10

Today's challenge asks us to help a group of programs implement a (highly questionable) hashing algorithm that involves repeatedly reversing parts of a list of numbers.

→ Full code on GitHub

!!! commentary
I went with Haskell again today, because it's the weekend so I have a bit more time, and I really enjoyed yesterday's Haskell implementation. Today gave me the opportunity to explore the standard library a bit more, as well as lending itself nicely to being decomposed into smaller parts to be combined using higher-order functions.

You know the drill by now: import stuff we'll use later.

```haskell
module Main where

import Data.Char (ord)
import Data.Bits (xor)
import Data.Function ((&))
import Data.List (unfoldr)
import Text.Printf (printf)
import qualified Data.Text as T
```

The worked example uses a concept of the "current position" as a pointer to a location in a static list. In Haskell it makes more sense to instead use the front of the list as the current position, and rotate the whole list as we progress to bring the right element to the front.

```haskell
rotate :: Int -> [Int] -> [Int]
rotate 0 xs = xs
rotate n xs = drop n' xs ++ take n' xs
  where n' = n `mod` length xs
```

The simple version of the hash requires working through the input list, modifying the working list as we go, and incrementing a "skip" counter with each step. Converting this to a functional style, we simply zip up the input with an infinite list `[0, 1, 2, 3, ...]` to give the counter values. Notice that we also have to calculate how far to rotate the working list to get back to its original position. `foldl` lets us specify a function that returns a modified version of the working list, and feeds the input list in one element at a time.

```haskell
simpleKnotHash :: Int -> [Int] -> [Int]
simpleKnotHash size input = foldl step [0..size-1] input'
                            & rotate (negate finalPos)
  where input' = zip input [0..]
        finalPos = sum $ zipWith (+) input [0..]
        reversePart xs n = (reverse $ take n xs) ++ drop n xs
        step xs (n, skip) = reversePart xs n & rotate (n+skip)
```

The full version of the hash (part 2 of the challenge) starts the same way as the simple version, except making 64 passes instead of one: we can do this by using `replicate` to make a list of 64 copies, then collapse that into a single list with `concat`.

```haskell
fullKnotHash :: Int -> [Int] -> [Int]
fullKnotHash size input = simpleKnotHash size input'
  where input' = concat $ replicate 64 input
```

The next step in calculating the full hash collapses the full 256-element "sparse" hash down into 16 elements by XORing groups of 16 together. `unfoldr` is a nice efficient way of doing this.

```haskell
dense :: [Int] -> [Int]
dense = unfoldr dense'
  where dense' [] = Nothing
        dense' xs = Just (foldl xor 0 $ take 16 xs, drop 16 xs)
```

The final hash step is to convert the list of integers into a hexadecimal string.

```haskell
hexify :: [Int] -> String
hexify = concatMap (printf "%02x")
```

These two utility functions put together building blocks from the `Data.Text` module to parse the input string. Note that no arguments are given: the functions are defined purely by composing other functions using the `.` operator. In Haskell this is referred to as "point-free" style.

```haskell
strip :: String -> String
strip = T.unpack . T.strip . T.pack

parseInput :: String -> [Int]
parseInput = map (read . T.unpack) . T.splitOn (T.singleton ',') . T.pack
```

Now we can put it all together, including building the weird input for the "full" hash.

```haskell
main = do
  input <- fmap strip getContents
  let simpleInput = parseInput input
      asciiInput = map ord input ++ [17, 31, 73, 47, 23]
      (a:b:_) = simpleKnotHash 256 simpleInput
  print $ (a*b)
  putStrLn $ fullKnotHash 256 asciiInput & dense & hexify
```

Stream Processing — Haskell — #adventofcode Day 9

In today's challenge we come across a stream that we need to cross. But of course, because we're stuck inside a computer, it's not water but data flowing past.
The stream is too dangerous to cross until we've removed all the garbage, and to prove we can do that we have to calculate a score for the valid data "groups" and the number of garbage characters to remove.

→ Full code on GitHub

!!! commentary
One of my goals for this process was to knock the rust off my functional programming skills in Haskell, and I haven't done that for the whole of the first week. Processing strings character by character and acting according to which character shows up seems like a good fit for pattern matching though, so here we go. I also wanted to have a bash at test-driven development in Haskell, so I loaded up the Test.Hspec module to give it a try. I did find keeping track of all the state in arguments a bit mind-boggling, and I think it could have been improved through use of a data type using record syntax and the `State` monad, so that's something to look at for a future challenge.

First import the extra bits we'll need.

```haskell
module Main where

import Test.Hspec
import Data.Function ((&))
```

`countGroups` solves the first part of the problem, counting up the "score" of the valid data in the stream. `countGroups'` is an auxiliary function that holds some state in its arguments. We use pattern matching for the base case: `[]` represents the empty list in Haskell, which indicates we've finished the whole stream. Otherwise, we split the remaining stream into its first character and remainder, and use guards to decide how to interpret it. If `skip` is true, discard the character and carry on with `skip` set back to false. If we find a "!", that tells us to skip the next character. Other characters mark groups or sets of garbage: groups increase the score when they close, and garbage is discarded. We continue to progress the list by recursing with the remainder of the stream and any updated state.
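The same state machine may be easier to see first with explicit mutable state. Here's a rough Python sketch (`score_stream` is a hypothetical helper, not part of my solution, and it returns both counts in one pass; the example strings are from the puzzle text):

```python
def score_stream(stream):
    # One pass over the stream, tracking group depth and garbage state
    score = depth = garbage_chars = 0
    in_garbage = skip = False
    for c in stream:
        if skip:
            skip = False          # this character was cancelled by a '!'
        elif c == '!':
            skip = True
        elif in_garbage:
            if c == '>':
                in_garbage = False
            else:
                garbage_chars += 1
        elif c == '<':
            in_garbage = True
        elif c == '{':
            depth += 1
        elif c == '}':
            score += depth        # a group scores its nesting depth on close
            depth -= 1
    return score, garbage_chars

print(score_stream('{{},{}}'))         # (5, 0)
print(score_stream('<{o"i!a,<{i<a>'))  # (0, 10)
```

The Haskell version below threads exactly this state (`score`, `level`, `garbage`, `skip`) through recursive calls instead of mutating it.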
```haskell
countGroups :: String -> Int
countGroups = countGroups' 0 0 False False
  where countGroups' score _ _ _ [] = score
        countGroups' score level garbage skip (c:rest)
          | skip      = countGroups' score level garbage False rest
          | c == '!'  = countGroups' score level garbage True rest
          | garbage   = case c of
              '>' -> countGroups' score level False False rest
              _   -> countGroups' score level True False rest
          | otherwise = case c of
              '{' -> countGroups' score (level+1) False False rest
              '}' -> countGroups' (score+level) (level-1) False False rest
              ',' -> countGroups' score level False False rest
              '<' -> countGroups' score level True False rest
              c   -> error $ "Garbage character found outside garbage: " ++ show c
```

`countGarbage` works almost identically to `countGroups`, except it ignores groups and counts garbage. They are structured so similarly that it would probably make more sense to combine them into a single function that returns both counts.

```haskell
countGarbage :: String -> Int
countGarbage = countGarbage' 0 False False
  where countGarbage' count _ _ [] = count
        countGarbage' count garbage skip (c:rest)
          | skip      = countGarbage' count garbage False rest
          | c == '!'  = countGarbage' count garbage True rest
          | garbage   = case c of
              '>' -> countGarbage' count False False rest
              _   -> countGarbage' (count+1) True False rest
          | otherwise = case c of
              '<' -> countGarbage' count True False rest
              _   -> countGarbage' count False False rest
```

Hspec gives us a domain-specific language heavily inspired by the RSpec library for Ruby: the tests read almost like natural language. I built up these tests one by one, gradually implementing the appropriate bits of the functions above, a process known as test-driven development.
```haskell
runTests = hspec $ do
  describe "countGroups" $ do
    it "counts valid groups" $ do
      countGroups "{}" `shouldBe` 1
      countGroups "{{{}}}" `shouldBe` 6
      countGroups "{{{},{},{{}}}}" `shouldBe` 16
      countGroups "{{},{}}" `shouldBe` 5
    it "ignores garbage" $ do
      countGroups "{<a>,<a>,<a>,<a>}" `shouldBe` 1
      countGroups "{{<ab>},{<ab>},{<ab>},{<ab>}}" `shouldBe` 9
    it "skips marked characters" $ do
      countGroups "{{<!!>},{<!!>},{<!!>},{<!!>}}" `shouldBe` 9
      countGroups "{{<a!>},{<a!>},{<a!>},{<ab>}}" `shouldBe` 3
  describe "countGarbage" $ do
    it "counts garbage characters" $ do
      countGarbage "<>" `shouldBe` 0
      countGarbage "<random characters>" `shouldBe` 17
      countGarbage "<<<<>" `shouldBe` 3
    it "ignores non-garbage" $ do
      countGarbage "{{},{}}" `shouldBe` 0
      countGarbage "{{<ab>},{<ab>},{<ab>},{<ab>}}" `shouldBe` 8
    it "skips marked characters" $ do
      countGarbage "<{!>}>" `shouldBe` 2
      countGarbage "<!!>" `shouldBe` 0
      countGarbage "<!!!>>" `shouldBe` 0
      countGarbage "<{o\"i!a,<{i<a>" `shouldBe` 10
```

Finally, the `main` function reads in the challenge input and calculates the answers, printing them on standard output.

```haskell
main = do
  runTests
  repeat '=' & take 80 & putStrLn
  input <- getContents & fmap (filter (/='\n'))
  putStrLn $ "Found " ++ show (countGroups input) ++ " groups"
  putStrLn $ "Found " ++ show (countGarbage input) ++ " characters garbage"
```

I Heard You Like Registers — Python — #adventofcode Day 8

Today's challenge describes a simple instruction set for a CPU, incrementing and decrementing values in registers according to simple conditions. We have to interpret a stream of these instructions, and to prove that we've done so, give the highest value of any register, both at the end of the program and throughout the whole program.

→ Full code on GitHub

!!! commentary
This turned out to be a nice straightforward one to implement, as the instruction format was easily parsed by regular expression, and Python provides the `eval` function, which made evaluating the conditions a doddle.

Import various standard library bits that we'll use later.

```python
import re
import fileinput as fi
from math import inf
from collections import defaultdict
```

We could just parse the instructions by splitting the string, but using a regular expression is a little bit more robust because it won't match at all if given an invalid instruction.

```python
instruction_re = re.compile(r'(\w+) (inc|dec) (-?\d+) if (.+)\s*')

def parse_instruction(instruction):
    match = instruction_re.match(instruction)
    return match.group(1, 2, 3, 4)
```

Executing an instruction simply checks the condition and, if it evaluates to `True`, updates the relevant register.

```python
def exec_instruction(registers, instruction):
    name, op, value, cond = instruction
    value = int(value)
    if op == 'dec':
        value = -value
    if eval(cond, globals(), registers):
        registers[name] += value
```

`highest_value` returns the maximum value found in any register.

```python
def highest_value(registers):
    return sorted(registers.items(), key=lambda x: x[1], reverse=True)[0][1]
```

Finally, loop through all the instructions and carry them out, updating `global_max` as we go. We need to be able to deal with registers that haven't been accessed before. Keeping the registers in a dictionary means that we can evaluate the conditions directly using `eval` above, passing it as the `locals` argument. The standard `dict` will raise an exception if we try to access a key that doesn't exist, so instead we use `collections.defaultdict`, which allows us to specify what the default value for a non-existent key will be. New registers start at 0, so we use a simple lambda to define a function that always returns 0.
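These two tricks can be seen in isolation (toy register values, not from the real input):

```python
from collections import defaultdict

registers = defaultdict(lambda: 0)  # any unseen register reads as 0
registers['a'] += 5

# eval accepts any mapping as its locals, so register names resolve directly;
# looking up the never-assigned 'b' triggers the default factory and yields 0
print(eval('a > 1 and b == 0', {}, registers))  # True
```

This is exactly why the condition strings from the input can mention registers that no instruction has touched yet.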
```python
global_max = -inf
registers = defaultdict(lambda: 0)

for i in map(parse_instruction, fi.input()):
    exec_instruction(registers, i)
    global_max = max(global_max, highest_value(registers))

print('Max value:', highest_value(registers))
print('All-time max:', global_max)
```

Recursive Circus — Ruby — #adventofcode Day 7

Today's challenge introduces a set of processes balancing precariously on top of each other. We find them stuck and unable to get down because one of the processes is the wrong size, unbalancing the whole circus. Our job is to figure out the root from the input, and then find the correct weight for the single incorrect process.

→ Full code on GitHub

!!! commentary
So I didn't really intend to take a full polyglot approach to Advent of Code, but it turns out to have been quite fun, so I made a shortlist of languages to try. Building a tree is a classic application for object orientation using a class to represent tree nodes, and I've always liked the feel of Ruby's class syntax, so I gave it a go.

First make sure we have access to `Set`, which we'll use later.

```ruby
require 'set'
```

Now to define the `CircusNode` class, which represents nodes in the tree. `attr :name` automatically creates a method `name` that returns the value of the instance attribute `@name`.

```ruby
class CircusNode
  attr :name, :weight

  def initialize(name, weight, children=nil)
    @name = name
    @weight = weight
    @children = children || []
  end
```

Add a `<<` operator (the same syntax as for adding items to a list) that adds a child to this node.

```ruby
  def <<(c)
    @children << c
    @total_weight = nil
  end
```

`total_weight` recursively calculates the weight of this node and everything above it. The `@total_weight ||= blah` idiom caches the value so we only calculate it once.

```ruby
  def total_weight
    @total_weight ||= @weight + @children.map {|c| c.total_weight}.sum
  end
```

`balance_weight` does the hard work of figuring out the proper weight for the incorrect node by recursively searching through the tree.
```ruby
  def balance_weight(target=nil)
    by_weight = Hash.new {|h, k| h[k] = []}
    @children.each {|c| by_weight[c.total_weight] << c}
    if by_weight.size == 1 then
      if target
        return @weight - (total_weight - target)
      else
        raise ArgumentError, 'This tree seems balanced!'
      end
    else
      odd_one_out = by_weight.select {|k, v| v.length == 1}.first[1][0]
      child_target = by_weight.select {|k, v| v.length > 1}.first[0]
      return odd_one_out.balance_weight child_target
    end
  end
```

A couple of utility functions for displaying trees finish off the class.

```ruby
  def to_s
    "#{@name} (#{@weight})"
  end

  def print_tree(n=0)
    puts "#{'  '*n}#{self} -> #{self.total_weight}"
    @children.each do |child|
      child.print_tree n+1
    end
  end
end
```

`build_circus` takes input as a list of lists `[name, weight, children]`. We make two passes over this list, first creating all the nodes, then building the tree by adding children to parents.

```ruby
def build_circus(data)
  all_nodes = {}
  all_children = Set.new

  data.each do |name, weight, children|
    all_nodes[name] = CircusNode.new name, weight
  end

  data.each do |name, weight, children|
    children.each {|child| all_nodes[name] << all_nodes[child]}
    all_children.merge children
  end

  root_name = (all_nodes.keys.to_set - all_children).first
  return all_nodes[root_name]
end
```

Finally, build the tree and solve the problem! Note that we use `String#to_sym` to convert the node names to symbols (written in Ruby as `:symbol`), because they're faster to work with in hashes and sets as we do above.

```ruby
data = readlines.map do |line|
  match = /(?<parent>\w+) \((?<weight>\d+)\)(?: -> (?<children>.*))?/.match line
  [match['parent'].to_sym,
   match['weight'].to_i,
   match['children'] ? match['children'].split(', ').map {|x| x.to_sym} : []]
end

root = build_circus data
puts "Root node: #{root}"
puts root.balance_weight
```

Memory Reallocation — Python — #adventofcode Day 6

Today's challenge asks us to follow a recipe for redistributing objects in memory that bears a striking resemblance to the rules of the African game Mancala.

→ Full code on GitHub

!!! commentary
When I was doing my MSci, one of our programming exercises was to write (in Haskell, IIRC) a program to play a Mancala variant called Oware, so this had a nice ring of nostalgia. Back to Python today: it's already become clear that it's by far my most fluent language, which makes sense as it's the only one I've used consistently since my schooldays. I'm a bit behind on the blog posts, so you get this one without any explanation, for now at least!

```python
import math

def reallocate(mem):
    max_val = -math.inf
    size = len(mem)
    for i, x in enumerate(mem):
        if x > max_val:
            max_val = x
            max_index = i
    i = max_index
    mem[i] = 0
    remaining = max_val
    while remaining > 0:
        i = (i + 1) % size
        mem[i] += 1
        remaining -= 1
    return mem

def detect_cycle(mem):
    mem = list(mem)
    steps = 0
    prev_states = {}
    while tuple(mem) not in prev_states:
        prev_states[tuple(mem)] = steps
        steps += 1
        mem = reallocate(mem)
    return (steps, steps - prev_states[tuple(mem)])

initial_state = list(map(int, input().split()))
print('Initial state is', initial_state)
steps, cycle = detect_cycle(initial_state)
print('Steps to cycle:', steps)
print('Steps in cycle:', cycle)
```

A Maze of Twisty Trampolines — C++ — #adventofcode Day 5

Today's challenge has us attempting to help the CPU escape from a maze of instructions. It's not quite a Turing machine, but it has that feeling of moving a read/write head up and down a tape, acting on and changing the data found there.

→ Full code on GitHub

!!! commentary
I haven't written anything in C++ for over a decade.
It sounds like there have been lots of interesting developments in the language since then, with C++11, C++14 and the freshly finalised C++17 standards (built-in parallelism in the STL!). I won't use any of those, but I thought I'd dust off my C++ and see what happened. Thankfully the Standard Template Library classes still did what I expected!

As usual, we first include the parts of the standard library we're going to use: `iostream` for input and output, and `vector` for the container. We also declare that we're using the `std` namespace, so that we don't have to prepend `vector` and the other classes with `std::`.

```cpp
#include <iostream>
#include <vector>

using namespace std;
```

`steps_to_escape_part1` implements part 1 of the challenge: we read a location, move forward/backward by the number of steps given in that location, then add one to the location before repeating. The result is the number of steps we take before jumping outside the list.

```cpp
int steps_to_escape_part1(vector<int>& instructions) {
  int pos = 0, iterations = 0, new_pos;

  while (pos < instructions.size()) {
    new_pos = pos + instructions[pos];
    instructions[pos]++;
    pos = new_pos;
    iterations++;
  }

  return iterations;
}
```

`steps_to_escape_part2` solves part 2, which is very similar, except that an offset of three or more is decremented instead of incremented before moving on.

```cpp
int steps_to_escape_part2(vector<int>& instructions) {
  int pos = 0, iterations = 0, new_pos, offset;

  while (pos < instructions.size()) {
    offset = instructions[pos];
    new_pos = pos + offset;
    instructions[pos] += offset >= 3 ? -1 : 1;
    pos = new_pos;
    iterations++;
  }

  return iterations;
}
```

Finally we pull it all together and link it up to the input.

```cpp
int main() {
  vector<int> instructions1, instructions2;
  int n;
```

The `cin` class lets us read data from standard input, which we then add to a vector of ints to give our list of instructions.
```cpp
  while (true) {
    cin >> n;
    if (cin.eof()) break;
    instructions1.push_back(n);
  }
```

Solving the problem modifies the input, so we need to take a copy to solve part 2 as well. Thankfully the STL makes this easy with iterators.

```cpp
  instructions2.insert(instructions2.begin(),
                       instructions1.begin(), instructions1.end());
```

Finally, compute the results and print them on standard output.

```cpp
  cout << steps_to_escape_part1(instructions1) << endl;
  cout << steps_to_escape_part2(instructions2) << endl;

  return 0;
}
```

High Entropy Passphrases — Python — #adventofcode Day 4

Today's challenge describes some simple rules supposedly intended to enforce the use of secure passwords. All we have to do is test a list of passphrases and identify which ones meet the rules.

→ Full code on GitHub

!!! commentary
Fearing that today might be as time-consuming as yesterday, I returned to Python and its hugely powerful "batteries-included" standard library. Thankfully this challenge was more straightforward, and I actually finished this before finishing day 3.

First, let's import two useful utilities.

```python
from fileinput import input
from collections import Counter
```

Part 1 requires simply that a passphrase contains no repeated words. No problem: we split the passphrase into words and count them, and check whether any was present more than once. `Counter` is an amazingly useful class to have in a language's standard library. All it does is count things: you add objects to it, and then it will tell you how many of a given object you have. We're going to use it to count those potentially duplicated words.

```python
def is_valid(passphrase):
    counter = Counter(passphrase.split())
    return counter.most_common(1)[0][1] == 1
```

Part 2 requires that no word in the passphrase be an anagram of any other word. Since we don't need to do anything else with the words afterwards, we can check for anagrams by sorting the letters in each word: "leaf" and "flea" both become "aefl" and can be compared directly. Then we count as before.
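A quick sanity check of the sorting trick, using the "flea"/"leaf" example above plus an invalid passphrase from the puzzle description:

```python
from collections import Counter

# Anagrams collapse to the same key once their letters are sorted
print(''.join(sorted('flea')))  # aefl
print(''.join(sorted('leaf')))  # aefl

# So counting sorted words spots anagrams just like counting plain words spots repeats:
# "abcde" and "ecdab" are anagrams, so this passphrase is invalid
words = 'abcde xyz ecdab'.split()
counts = Counter(''.join(sorted(w)) for w in words)
print(counts.most_common(1)[0][1])  # 2
```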
```python
def is_valid_ana(passphrase):
    counter = Counter(''.join(sorted(word)) for word in passphrase.split())
    return counter.most_common(1)[0][1] == 1
```

Finally we pull everything together. `sum(map(boolean_func, list))` is a common idiom in Python for counting the number of times a condition (checked by `boolean_func`) is true. In Python, `True` and `False` can be treated as the numbers 1 and 0 respectively, so summing a list of boolean values gives you the number of `True` values in the list.

```python
lines = list(input())
print(sum(map(is_valid, lines)))
print(sum(map(is_valid_ana, lines)))
```

Spiral Memory — Go — #adventofcode Day 3

Today's challenge requires us to perform some calculations on an "experimental memory layout", with cells moving outwards from the centre of a square spiral (squiral?).

→ Full code on GitHub

!!! commentary
I've been wanting to try my hand at Go, the memory-safe, statically typed compiled language from Google, for a while. Today's challenge seemed a bit more mathematical in nature, meaning that I wouldn't need too many advanced language features or knowledge of a standard library, so I thought I'd give it a "go". It might have been my imagination, but it was impressive how quickly the compiled program chomped through different input values while I was debugging.

I actually spent far too long on this problem because my brain led me down a blind alley trying to do the wrong calculation, but I got there in the end! The solution is a bit difficult to explain without diagrams, which I don't really have time to draw right now, but fear not, because several other people have. First take a look at [the challenge itself, which explains the spiral memory concept](http://adventofcode.com/2017/day/3), then look at the [nice diagrams that Phil Tooley made with Python](http://acceleratedscience.co.uk/blog/adventofcode-day-3-spiral-memory/) and hopefully you'll be able to see what's going on!
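(As a sketch of the "just walk the spiral" idea: run lengths around a square spiral go 1, 1, 2, 2, 3, 3, …, turning 90° after each run. `spiral_distance` is a hypothetical Python helper, not the Go solution below.)

```python
def spiral_distance(n):
    # Walk the spiral square by square, returning the Manhattan distance of square n
    if n == 1:
        return 0
    x = y = 0
    dx, dy = 1, 0      # start heading right from the centre
    steps = 1          # current run length
    square = 1
    while True:
        for _ in range(2):           # each run length is used twice
            for _ in range(steps):
                x += dx
                y += dy
                square += 1
                if square == n:
                    return abs(x) + abs(y)
            dx, dy = -dy, dx         # turn 90° anticlockwise
        steps += 1

print(spiral_distance(12))    # 3, matching the worked example
print(spiral_distance(1024))  # 31
```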
It's interesting to note that this challenge also admits of an algorithmic solution instead of the mathematical one: you can model the memory as an infinite grid using a suitable data structure and literally move around it in a spiral. In hindsight this is a much better way of solving the challenge quickly, because it's easier and less error-prone to code. I'm quite pleased with my maths-ing though, and it's much quicker than the algorithmic version!

First some Go boilerplate: we have to define the package we're in (`main`, because it's an executable we're producing) and import the libraries we'll use.

```go
package main

import (
    "fmt"
    "math"
    "os"
)
```

Weirdly, Go doesn't seem to have these basic mathematical functions for integers in its standard library (please someone correct me if I'm wrong!), so I'll define them instead of mucking about with data types. Go doesn't do any implicit type conversion, even between numeric types, and the `math` builtin package only operates on `float64` values.

```go
func abs(n int) int {
    if n < 0 {
        return -n
    }
    return n
}

func min(x, y int) int {
    if x < y {
        return x
    }
    return y
}

func max(x, y int) int {
    if x > y {
        return x
    }
    return y
}
```

This does the heavy lifting for part one: converting from a position on the spiral to a column and row in the grid. (0, 0) is the centre of the spiral. This actually does a bit more than is necessary to calculate the distance as required for part 1, but we'll use it again for part 2.

```go
func spiral_to_xy(n int) (int, int) {
    if n == 1 {
        return 0, 0
    }

    r := int(math.Floor((math.Sqrt(float64(n-1)) + 1) / 2))
    n_r := n - (2*r-1)*(2*r-1)
    o := ((n_r - 1) % (2 * r)) - r + 1
    sector := (n_r - 1) / (2 * r)
    switch sector {
    case 0:
        return r, o
    case 1:
        return -o, r
    case 2:
        return -r, -o
    case 3:
        return o, -r
    }
    return 0, 0
}
```

Now use `spiral_to_xy` to calculate the Manhattan distance that the value at location `n` in the spiral memory must be carried to reach the "access port" at square 1.
```go
func distance(n int) int {
    x, y := spiral_to_xy(n)
    return abs(x) + abs(y)
}
```

This function does the opposite of `spiral_to_xy`, translating a grid position back to its position on the spiral. This is the one that took me far too long to figure out, because I had a brain bug and tried to calculate the value `s` (which sector or quarter of the spiral we're looking at) in a way that was never going to work! Fortunately I came to my senses.

```go
func xy_to_spiral(x, y int) int {
    if x == 0 && y == 0 {
        return 1
    }

    r := max(abs(x), abs(y))
    var s, o, n int
    if x+y > 0 && x-y >= 0 {
        s = 0
    } else if x-y < 0 && x+y >= 0 {
        s = 1
    } else if x+y < 0 && x-y <= 0 {
        s = 2
    } else {
        s = 3
    }
    switch s {
    case 0:
        o = y
    case 1:
        o = -x
    case 2:
        o = -y
    case 3:
        o = x
    }
    n = o + r*(2*s+1) + (2*r-1)*(2*r-1)
    return n
}
```

This is a utility function that uses `xy_to_spiral` to fetch the value at a given (x, y) location, and returns zero if we haven't filled that location yet.

```go
func get_spiral(mem []int, x, y int) int {
    n := xy_to_spiral(x, y) - 1
    if n < len(mem) {
        return mem[n]
    }
    return 0
}
```

Finally we solve part 2 of the problem, which involves going round the spiral writing values into it that are the sum of some values already written. The result is the first of these sums that is greater than or equal to the given input value.
```go
func stress_test(input int) int {
    mem := make([]int, 1)
    n := 0
    mem[0] = 1
    for mem[n] < input {
        n++
        x, y := spiral_to_xy(n + 1)
        mem = append(mem, get_spiral(mem, x+1, y)+
            get_spiral(mem, x+1, y+1)+
            get_spiral(mem, x, y+1)+
            get_spiral(mem, x-1, y+1)+
            get_spiral(mem, x-1, y)+
            get_spiral(mem, x-1, y-1)+
            get_spiral(mem, x, y-1)+
            get_spiral(mem, x+1, y-1))
    }
    return mem[n]
}
```

Now the last part of the program puts it all together, reading the input value from a command-line argument and printing the results of the two parts of the challenge:

```go
func main() {
    var n int
    fmt.Sscanf(os.Args[1], "%d", &n)
    fmt.Printf("Input is %d\n", n)
    fmt.Printf("Distance is %d\n", distance(n))
    fmt.Printf("Stress test result is %d\n", stress_test(n))
}
```

Corruption Checksum — Python — #adventofcode Day 2

Today's challenge is to calculate a rather contrived "checksum" over a grid of numbers.

→ Full code on GitHub

!!! commentary
Today I went back to plain Python, and I didn't do formal tests because only one test case was given for each part of the problem. I just got stuck in. I did write part 2 out as nested `for` loops as an intermediate step to working out the generator expression. I think that expanded version may have been more readable. Having got that far, I couldn't then work out how to finally eliminate the need for an auxiliary function entirely without either sorting the same elements multiple times or sorting each row as it's read.

First we read in the input, split it and convert it to numbers. `fileinput.input()` returns an iterator over the lines in all the files passed as command-line arguments, or over standard input if no files are given.
```python
from fileinput import input

sheet = [[int(x) for x in l.split()] for l in input()]
```

part 1 of the challenge calls for finding the difference between the largest and smallest number in each row, and then summing those differences:

```python
print(sum(max(x) - min(x) for x in sheet))
```

part 2 is a bit more involved: for each row we have to find the unique pair of elements that divide into each other without remainder, then sum the result of those divisions. we can make it a little easier by sorting each row; then we can take each number in turn and compare it only with the numbers after it (which are guaranteed to be larger). doing this ensures we only make each comparison once.

```python
def rowsum_div(row):
    row = sorted(row)
    return sum(y // x
               for i, x in enumerate(row)
               for y in row[i+1:]
               if y % x == 0)

print(sum(map(rowsum_div, sheet)))
```

we can make this code shorter (if not easier to read) by sorting each row as it’s read:

```python
sheet = [sorted(int(x) for x in l.split()) for l in input()]
```

then we can just use the first and last elements in each row for part 1, as we know those are the smallest and largest respectively in the sorted row:

```python
print(sum(x[-1] - x[0] for x in sheet))
```

part 2 then becomes a sum over a single generator expression:

```python
print(sum(y // x
          for row in sheet
          for i, x in enumerate(row)
          for y in row[i+1:]
          if y % x == 0))
```

very satisfying!

inverse captcha — coconut — #adventofcode day 1

well, december’s here at last, and with it day 1 of advent of code.

… it goes on to explain that you may only leave by solving a captcha to prove you’re not a human. apparently, you only get one millisecond to solve the captcha: too fast for a normal human, but it feels like hours to you. …

as well as posting solutions here when i can, i’ll be putting them all on https://github.com/jezcope/aoc too.

!!! commentary
after doing some challenges from last year in haskell for a warm up, i felt inspired to try out the functional-ish python dialect, coconut.
now that i’ve done it, it feels a bit of an odd language, neither fish nor fowl. it’ll look familiar to any pythonista, but is loaded with features normally associated with functional languages, like pattern matching, destructuring assignment, partial application and function composition. that makes it quite fun to work with, as it works similarly to haskell, but because it's restricted by the basic rules of python syntax everything feels a bit more like hard work than it should. the accumulator approach feels clunky, but it's necessary to allow [tail call elimination](https://en.wikipedia.org/wiki/tail_call), which coconut will do and i wanted to see in action. lo and behold, if you take a look at the [compiled python version](https://github.com/jezcope/aoc /blob/ c bda b e db e d b be a / -inverse-captcha.py#l ) you'll see that my recursive implementation has been turned into a non-recursive `while` loop. then again, maybe i'm just jealous of phil tooley's [one-liner solution in python](https://github.com/ptooley/aocgolf/blob/ d f ccfc cfd baf d / .py#l ). 
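to illustrate what that tail call elimination is actually doing, here’s the same accumulator idea sketched in plain python with strings rather than coconut’s lazy lists (the function names and the string-based implementation are mine, not the coconut version below):

```python
def inverse_captcha_rec(s, acc=0):
    # accumulator-style recursion, like the coconut solution
    if not s:
        return acc
    if len(s) > 1 and s[0] == s[1]:
        return inverse_captcha_rec(s[1:], acc + int(s[0]))
    return inverse_captcha_rec(s[1:], acc)

def inverse_captcha_iter(s, acc=0):
    # what tail-call elimination turns it into: a plain while loop,
    # with no call stack growth
    while s:
        if len(s) > 1 and s[0] == s[1]:
            acc += int(s[0])
        s = s[1:]
    return acc

digits = "1122"
wrapped = digits + digits[0]  # compare each digit with the next, wrapping
assert inverse_captcha_rec(wrapped) == inverse_captcha_iter(wrapped) == 3
```

both functions give the same answer; the iterative one is simply the mechanical rewrite a compiler with tail call elimination produces for you.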
```coconut
import sys

def inverse_captcha_(s, acc=0):
    case reiterable(s):
        match (|d, d|) :: rest:
            return inverse_captcha_((|d|) :: rest, acc + int(d))
        match (|d1, d2|) :: rest:
            return inverse_captcha_((|d2|) :: rest, acc)
    return acc

def inverse_captcha(s) = inverse_captcha_(s :: s[0])

def inverse_captcha_2_(s1, s2, acc=0):
    case (reiterable(s1), reiterable(s2)):
        match ((|d|) :: rest1, (|d|) :: rest2):
            return inverse_captcha_2_(rest1, rest2, acc + int(d))
        match ((|d1|) :: rest1, (|d2|) :: rest2):
            return inverse_captcha_2_(rest1, rest2, acc)
    return acc

def inverse_captcha_2(s) = inverse_captcha_2_(s, s$[len(s)//2:] :: s)

def test_inverse_captcha():
    assert "1122" |> inverse_captcha == 3
    assert "1111" |> inverse_captcha == 4
    assert "1234" |> inverse_captcha == 0
    assert "91212129" |> inverse_captcha == 9

def test_inverse_captcha_2():
    assert "1212" |> inverse_captcha_2 == 6
    assert "1221" |> inverse_captcha_2 == 0
    assert "123425" |> inverse_captcha_2 == 4
    assert "123123" |> inverse_captcha_2 == 12
    assert "12131415" |> inverse_captcha_2 == 4

if __name__ == "__main__":
    sys.argv[1] |> inverse_captcha |> print
    sys.argv[1] |> inverse_captcha_2 |> print
```

advent of code 2017: introduction

it’s a common lament of mine that i don’t get to write a lot of code in my day-to-day job. i like the feeling of making something from nothing, and i often look for excuses to write bits of code, both at work and outside it. advent of code is a daily series of programming challenges for the month of december, and is about to start its third annual incarnation. i discovered it too late to take part in any serious way last year, but i’m going to give it a try this year. there are no restrictions on programming language (so of course some people delight in using esoteric languages like brainf**k), but i think i’ll probably stick with python for the most part. that said, i miss my haskell days and i’m intrigued by new kids on the block go and rust, so i might end up throwing in a few of those on some of the simpler challenges.
i’d like to focus a bit more on how i solve the puzzles. they generally come in two parts, with the second part only being revealed after successful completion of the first part. with that in mind, test-driven development makes a lot of sense, because i can verify that i haven’t broken the solution to the first part in modifying it to solve the second. i may also take a literate programming approach with org-mode or jupyter notebooks to document my solutions a bit more, and of course that will make it easier to publish solutions here, so i’ll do that as much as i can make time for. on that note, here are some solutions for last year’s puzzles that i’ve done recently as a warmup.

day 1: python

day 1 instructions

```python
import sys

import numpy as np
import pytest as t

turn = {'L': np.array([[0, 1], [-1, 0]]),
        'R': np.array([[0, -1], [1, 0]])}

origin = np.array([0, 0])
north = np.array([0, 1])


class Santa:
    def __init__(self, location, heading):
        self.location = np.array(location)
        self.heading = np.array(heading)
        self.visited = [(0, 0)]

    def execute_one(self, instruction):
        start_loc = self.location.copy()
        self.heading = self.heading @ turn[instruction[0]]
        self.location += self.heading * int(instruction[1:])
        self.mark(start_loc, self.location)

    def execute_many(self, instructions):
        for i in instructions.split(','):
            self.execute_one(i.strip())

    def distance_from_start(self):
        return sum(abs(self.location))

    def mark(self, start, end):
        for x in range(min(start[0], end[0]), max(start[0], end[0]) + 1):
            for y in range(min(start[1], end[1]), max(start[1], end[1]) + 1):
                if any((x, y) != start):
                    self.visited.append((x, y))

    def find_first_crossing(self):
        for i in range(1, len(self.visited)):
            for j in range(i):
                if self.visited[i] == self.visited[j]:
                    return self.visited[i]

    def distance_to_first_crossing(self):
        crossing = self.find_first_crossing()
        if crossing is not None:
            return abs(crossing[0]) + abs(crossing[1])

    def __str__(self):
        return f'santa @ {self.location}, heading {self.heading}'


def test_execute_one():
    s = Santa(origin, north)
    s.execute_one('L2')
    assert all(s.location == np.array([-2, 0]))
    assert all(s.heading == np.array([-1, 0]))
    s.execute_one('L3')
    assert all(s.location == np.array([-2, -3]))
    assert all(s.heading == np.array([0, -1]))
    s.execute_one('R1')
    assert all(s.location == np.array([-3, -3]))
    assert all(s.heading == np.array([-1, 0]))
    s.execute_one('R4')
    assert all(s.location == np.array([-3, 1]))
    assert all(s.heading == np.array([0, 1]))


def test_execute_many():
    s = Santa(origin, north)
    s.execute_many('L2, L3, R1')
    assert all(s.location == np.array([-3, -3]))
    assert all(s.heading == np.array([-1, 0]))


def test_distance():
    assert Santa(origin, north).distance_from_start() == 0
    assert Santa((3, 4), north).distance_from_start() == 7
    assert Santa((-5, 2), north).distance_from_start() == 7


def test_turn_left():
    east = north @ turn['L']
    south = east @ turn['L']
    west = south @ turn['L']
    assert all(east == np.array([-1, 0]))
    assert all(south == np.array([0, -1]))
    assert all(west == np.array([1, 0]))


def test_turn_right():
    west = north @ turn['R']
    south = west @ turn['R']
    east = south @ turn['R']
    assert all(east == np.array([-1, 0]))
    assert all(south == np.array([0, -1]))
    assert all(west == np.array([1, 0]))


if __name__ == '__main__':
    instructions = sys.stdin.read()
    santa = Santa(origin, north)
    santa.execute_many(instructions)
    print(santa)
    print('distance from start:', santa.distance_from_start())
    print('distance to target: ', santa.distance_to_first_crossing())
```

day 2: haskell

day 2 instructions

```haskell
module Main where

data Pos = Pos Int Int deriving (Show)

-- magrittr-style pipe operator
(|>) :: a -> (a -> b) -> b
x |> f = f x

swappos :: Pos -> Pos
swappos (Pos x y) = Pos y x

clamp :: Int -> Int -> Int -> Int
clamp lower upper x
  | x < lower = lower
  | x > upper = upper
  | otherwise = x

clamph :: Pos -> Pos
clamph (Pos x y) = Pos x' y'
  where y' = clamp 0 4 y
        r = abs (2 - y')
        x' = clamp r (4 - r) x

clampv :: Pos -> Pos
clampv = swappos . clamph . swappos

buttonforpos :: Pos -> String
buttonforpos (Pos x y) = [buttons !! y !! x]
  where buttons = ["  D  ", " ABC ", "56789", " 234 ", "  1  "]

decodechar :: Pos -> Char -> Pos
decodechar (Pos x y) 'R' = clamph $ Pos (x+1) y
decodechar (Pos x y) 'L' = clamph $ Pos (x-1) y
decodechar (Pos x y) 'U' = clampv $ Pos x (y+1)
decodechar (Pos x y) 'D' = clampv $ Pos x (y-1)

decodeline :: Pos -> String -> Pos
decodeline p "" = p
decodeline p (c:cs) = decodeline (decodechar p c) cs

makecode :: String -> String
makecode instructions = lines instructions            -- split into lines
                        |> scanl decodeline (Pos 0 2) -- decode to positions
                        |> tail                       -- drop start position
                        |> concatMap buttonforpos     -- convert to buttons

main = do
  input <- getContents
  putStrLn $ makecode input
```

research data management forum, manchester

!!! intro ""
monday and tuesday this november i’m at the research data management forum in manchester. i thought i’d use this as an opportunity to try liveblogging, so during the event some notes should appear in the box below (you may have to manually refresh your browser tab periodically to get the latest version). i’ve not done this before, so if the blog stops updating then it’s probably because i’ve stopped updating it to focus on the conference instead! this was made possible using github’s cool [gist](https://gist.github.com) tool.

draft content policy

i thought it was about time i had some sort of content policy on here so this is a first draft. it will eventually wind up as a separate page. feedback welcome!

!!! aside “content policy”
this blog’s primary purpose is as a reflective learning tool for my own development; my aim in writing any given post is mainly to expose and develop my own thinking on a topic. my reasons for making a public blog rather than a private journal are:
1. if i’m lucky, someone smarter than me will provide feedback that will help me and my readers to learn more
2. if i’m extra lucky, someone else might learn from the material as well

each post, therefore, represents the state of my thinking at the time i wrote it, or perhaps a deliberate provocation or exaggeration; either way, if you don’t know me personally please don’t judge me based entirely on my past words. this is a request though, not an attempt to excuse bad behaviour on my part. i accept full responsibility for any consequences of my words, whether intended or not. i will not remove comments or ban individuals for disagreeing with me, only for behaving offensively or disrespectfully. i will do my best to be fair and balanced and explain decisions that i take, but i reserve the right to take those decisions without making any explanation at all if it seems likely to further inflame a situation. if i end up responding to anything simply with a link to this policy, that’s probably all the explanation you’re going to get. it should go without saying, but the opinions presented in this blog are my own and not those of my employer or anyone else i might at times represent.

learning to live with anxiety

!!! intro ""
this is a post that i’ve been writing for months, and writing in my head for years. for some it will explain aspects of my personality that you might have wondered about. for some it will just be another person banging on self-indulgently about so-called “mental health issues”. hopefully, for some it will demystify some stuff and show that you’re not alone and things do get better.

for as long as i can remember i’ve been a worrier. i’ve also suffered from bouts of what i now recognise as depression, on and off since my school days.
it’s only relatively recently that i’ve come to the realisation that these two might be connected, and that my ‘worrying’ might in fact be outside the normal range of healthy human behaviour and might more accurately be described as chronic anxiety. you probably won’t have noticed it, but it’s been there. more recently i’ve begun feeling like i’m getting on top of it and feeling “normal” for the first time in my life. things i’ve found that help include: getting out of the house more and socialising with friends; and getting a range of exercise, outdoors and away from the city (rock climbing is mentally and physically engaging and open water swimming is indescribably joyful). but mostly it’s the cognitive behavioural therapy (cbt) and the antidepressants. before i go any further, a word about drugs (“don’t do drugs, kids”): i’m on the lowest available dose of a common antidepressant. this isn’t because it stops me being sad all the time (i’m not) or because it makes all my problems go away (it really doesn’t). it’s because the scientific evidence points to a combination of cbt and antidepressants as being the single most effective treatment for generalised anxiety disorder. the reason for this is simple: cbt isn’t easy, because it asks you to challenge habits and beliefs you’ve held your whole life. in the short term there is going to be more anxiety, and some antidepressants are also effective at blunting the effect of this additional anxiety. in short, cbt is what makes you better, and the drugs just make it a little bit more effective. a lot of people have misconceptions about what it means to be ‘in therapy’. i suspect a lot of these are derived from the psychoanalysis we often see portrayed in (primarily us) film and tv. the problem with that type of navel-gazing therapy is that you can spend years doing it, finally reach some sort of breakthrough insight, and still have no idea what the supposed insight means for your actual life.
cbt is different in that rather than addressing feelings directly it focuses on habits in your thoughts (cognitive) and actions (behavioural) with feeling better as an outcome (therapy). cbt and related forms of therapy now have decades of clinical evidence showing that they really work. it uses a wide range of techniques to identify, challenge and reduce various common unhelpful thoughts and behaviours. by choosing and practicing these, you can break bad mental habits that you’ve been carrying around, often for decades. for me this means giving fair weight to my successes as well as my failings, allowing flexibility into the rigid rules that i have always, subconsciously, lived by, and being a bit kinder to myself when i make mistakes. it’s not been easy and i have to remind myself to practice this every day, but it’s really helped. !!! aside “more info” if you live in the uk, you might not be aware that you can get cbt and other psychological therapies on the nhs through a scheme called iapt (improving access to psychological therapies). you can self-refer so you don’t need to see a doctor first, but you might want to anyway if you think medication might help. they also have a progression of treatments, so you might be offered a course of “guided self-help” and then progressed to cbt or another talking therapy if need be. this is what happened to me, and it did help a bit but it was cbt that helped me the most. becoming a librarian what is a librarian? is it someone who has a masters degree in librarianship and information science? is it someone who looks after information for other people? is it simply someone who works in a library? i’ve been grappling with this question a lot lately because i’ve worked in academic libraries for about years now and i never really thought that’s something that might happen. 
people keep referring to me as “a librarian” but there are some imposter feelings here, because all the librarians around me have much more experience, have skills in areas like cataloguing and collection management and, generally, have a librarian masters degree. so i’ve been thinking about what it actually means to me to be a librarian or not. nb. some of these may be tongue-in-cheek.

ways in which i am a librarian:

- i work in a library
- i help people to access and organise information
- i have a cat
- i like gin

ways in which i am not a librarian:

- i don’t have a librarianship qualification
- i don’t work with books 😉
- i don’t knit (though i can probably remember how if pressed)
- i don’t shush people or wear my hair in a bun (i can confirm that this is also true of every librarian i know)

ways in which i am a shambrarian:

- i like beer
- i have more it experience and qualifications than librarianship

at the end of the day, i still don’t know how i feel about this or, for that matter, how important it is. i’m probably going to accept whatever title people around me choose to bestow, though any label will chafe at times!

lean libraries: applying agile practices to library services

kanban board, jeff lasovski (via wikimedia commons)

i’ve been working with our it services at work quite closely for the last year as product owner for our new research data portal, orda. that’s been a fascinating process for me as i’ve been able to see first-hand some of the agile techniques that i’ve been reading about from time-to-time on the web over the last few years. they’re in the process of adopting a specific set of practices going under the name “scrum”, which is fun because it uses some novel terminology that sounds pretty weird to non-it folks, like “scrum master”, “sprint” and “product backlog”. on my small project we’ve had great success with the short cycle times and been able to build trust with our stakeholders by showing concrete progress on a regular basis.
modern librarianship is increasingly fluid, particularly in research services, and i think that to handle that fluidity it’s absolutely vital that we are able to work in a more agile way. i’m excited about the possibilities of some of these ideas. however, scrum as implemented by our it services doesn’t seem something that transfers directly to the work that we do: it’s too specialised for software development to adapt directly. what i intend to try is to steal some of the individual practices on an experimental basis and simply see what works and what doesn’t. the lean concepts currently popular in it were originally developed in manufacturing: if they can be translated from the production of physical goods to it, i don’t see why we can’t make the ostensibly smaller step of translating them to a different type of knowledge work. i’ve therefore started reading around this subject to try and get as many ideas as possible. i’m generally pretty rubbish at taking notes from books, so i’m going to try and record and reflect on any insights i make on this blog. the framework for trying some of these out is clearly a plan-do-check-act continuous improvement cycle, so i’ll aim to reflect on that process too. i’m sure there will have been people implementing lean in libraries already, so i’m hoping to be able to discover and learn from them instead of starting from scratch. wish me luck!

mozilla global sprint

photo by lena bell on unsplash

every year, the mozilla foundation runs a two-day global sprint, giving people around the world hours to work on projects supporting and promoting open culture and tech. though much of the work during the sprint is, of course, technical software development work, there are always tasks suited to a wide range of different skill sets and experience levels. the participants include writers, designers, teachers, information professionals and many others.
this year, for the first time, the university of sheffield hosted a site, providing a space for local researchers, developers and others to get out of their offices, work on #mozsprint and link up with others around the world. the sheffield site was organised by the research software engineering group in collaboration with the university library. our site was only small compared to others, but we still had people working on several different projects. my reason for taking part in the sprint was to contribute to the international effort on the library carpentry project. a team spread across four continents worked throughout the whole sprint to review and develop our lesson material. as there were no other library carpentry volunteers at the sheffield site, i chose to work on some urgent work around improving the presentation of our workshops and lessons on the web and related workflows. it was a really nice subproject to work on, requiring not only cleaning up and normalising the metadata we hold on workshops and lessons, but also digesting and formalising our current ad hoc process of lesson development. the largest group were solar physicists from the school of maths and statistics, working on the sunpy project, an open source environment for solar data analysis. they pushed loads of bug fixes and documentation improvements, and also mentored a new contributor through their first additions to the project. anna krystalli from research software engineering worked on the echoburst project, which is building a web browser extension to help people break out of their online echo chambers. it does this by using natural language processing techniques to highlight well-written, logically sound articles that disagree with the reader’s stated views on particular topics of interest. anna was part of an effort to begin extending this technology to online videos. 
we had a couple of individuals simply taking the opportunity to break out of their normal work environments to work or learn, including a couple of members of library staff who showed up for a couple of hours to learn how to use git on a new project!

idcc reflection

for most of the last few years i’ve been lucky enough to attend the international digital curation conference (idcc). one of the main audiences attending is people who, like me, work on research data management at universities around the world, and it’s begun to feel like a sort of ‘home’ conference to me. this year, idcc was held at the royal college of surgeons in the beautiful city of edinburgh. for the last couple of years, my overall impression has been that, as a community, we’re moving away from the ‘first-order’ problem of trying to convince people (from phd students to senior academics) to take rdm seriously and into a rich set of ‘second-order’ problems around how to do things better and widen support to more people. this year has been no exception. here are a few of my observations and takeaway points.

everyone has a repository now

only last year, the most common question you’d get asked by strangers in the coffee break would be ‘do you have a data repository?’ now the question is more likely to be ‘what are you using for your data repository?’, along with more subtle questions about specific components of systems and how they interact.

integrating active storage and archival systems

now that more institutions have data worth preserving, there is more interest in (and in many cases experience of) setting up more seamless integrations between active and archival storage. there are lessons here we can learn.
freezing in amber vs actively maintaining assets

there seemed to be an interesting debate going on throughout the conference around the aim of preservation: should we be faithfully preserving the bits and bytes provided without trying to interpret them, or should we take a more active approach by, for example, migrating obsolete formats to newer alternatives? if the former, should we attempt to preserve the software required to access the data as well? if the latter, how much effort do we invest and how do we ensure nothing is lost or altered in the migration?

demonstrating data science instead of debating what it is

the phrase ‘data science’ was once again one of the most commonly uttered phrases of the conference. however, there is now less abstract discussion about what, exactly, is meant by this ‘data science’ thing; this has been replaced more by concrete demonstrations. this change was exemplified perfectly by the keynote by data scientist alice daish, who spent a riveting session enthusing about all the cool stuff she does with data at the british museum.

recognition of software as an issue

even as recently as last year, i’ve struggled to drum up much interest in discussing software sustainability and preservation at events like this; the interest was there, but there were higher priorities. so i was completely taken by surprise when we ended up with a packed room for the software preservation birds of a feather (bof) session, and when very little input was needed from me as chair to keep a productive discussion going for the full session.

unashamed promotion of openness

as a community we seem to have nearly overthrown our collective embarrassment about the phrase ‘open data’ (although maybe this is just me). we’ve always known it was a good thing, but i know i’ve been a bit of an apologist in the past, feeling that i had to ‘soften the blow’ when asking researchers to be more open.
now i feel more confident in leading with the benefits of openness, and it felt like that’s a change reflected in the community more widely.

becoming more involved in the conference

this year, i took a decision to try and do more to contribute to the conference itself, and i felt like this was pretty successful both in making that contribution and building up my own profile a bit. i presented a paper on one of my current passions, library carpentry; it felt really good to be able to share my enthusiasm. i presented a poster on our work integrating our data repository and digital preservation platform; this gave me more of a structure for networking during breaks, as i was able to stand by the poster and start discussions with anyone who seemed interested. i chaired a parallel session; a first for me, and a different challenge from presenting or simply attending the talks. and finally, i proposed and chaired the software preservation bof session (blog post forthcoming).

renewed excitement

it’s weird, and possibly all in my imagination, but there seemed to be more energy at this conference than at the previous couple i’ve been to. more people seemed to be excited about the work we’re all doing, recent achievements and the possibilities for the future.

introducing pyrefine: openrefine meets python

i’m knocking the rust off my programming skills by attempting to write a pure-python interpreter for openrefine “scripts”. openrefine is a great tool for exploring and cleaning datasets prior to analysing them. it also records an undo history of all actions that you can export as a sort of script in json format. one thing that bugs me though is that, having spent some time interactively cleaning up your dataset, you then need to fire up openrefine again and do some interactive mouse-clicky stuff to apply that cleaning routine to another dataset.
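for a flavour of what that exported json looks like and how a pure-python interpreter might treat it, here’s a minimal sketch. the operation record is heavily simplified, and `apply_ops` is a made-up name for illustration, not pyrefine’s actual api:

```python
import json

# a (simplified) openrefine undo history: a json list of operations,
# each tagged with an "op" name and its parameters
history = json.loads("""[
  {"op": "core/column-rename",
   "oldColumnName": "dob",
   "newColumnName": "date_of_birth"}
]""")

def apply_ops(rows, ops):
    # interpret each operation against rows of dicts; only the
    # column-rename op is sketched here -- a real interpreter would
    # dispatch on every "op" value it knows about
    for op in ops:
        if op["op"] == "core/column-rename":
            rows = [{op["newColumnName"] if k == op["oldColumnName"] else k: v
                     for k, v in row.items()}
                    for row in rows]
    return rows

data = [{"name": "ada", "dob": "1815"}]
print(apply_ops(data, history))  # [{'name': 'ada', 'date_of_birth': '1815'}]
```

the point is that once the history is just data, replaying it against a new dataset is an ordinary function call rather than a round trip through the gui.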
you can at least re-import the json undo history to make that as quick as possible, but there’s no getting around the fact that there’s no quick way to do it from a cold start. there is a project, batchrefine, that extends the openrefine server to accept batch requests over an http api, but that isn’t useful when you can’t or don’t want to keep a full java stack running in the background the whole time. my concept is this: you use openrefine to explore the data interactively and design a cleaning process, but then export the process to json and integrate it into your analysis in python. that way it can be repeated ad nauseam without having to fire up a full java stack. i’m taking some inspiration from the great talk “so you want to be a wizard?” by julia evans (@b rk), who recommends trying experiments as a way to learn. she gives these rules of programming experiments: “it doesn’t have to be good; it doesn’t have to work; you have to learn something”. in that spirit, my main priorities are: to see if this can be done; to see how far i can get implementing it; and to learn something. if it also turns out to be a useful thing, well, that’s a bonus. some of the interesting possible challenges here:

- implement all core operations; there are quite a lot of these, some of which will be fun (i.e. non-trivial) to implement
- implement (a subset of?) grel, the general refine expression language; i guess my undergrad course on implementing parsers and compilers will come in handy after all!
- generate clean, sane python code from the json rather than merely executing it; more than anything, this would be a nice educational tool for users of openrefine who want to see how to do equivalent things in python
- selectively optimise key parts of the process; this will involve profiling the code to identify bottlenecks as well as tweaking the actual code to go faster
- potentially handle contributions to the code from other people; i’d be really happy if this happened but i’m realistic…

if you’re interested, the project is called pyrefine and it’s on github. constructive criticism, issues & pull requests all welcome!

implementing yesterbox in emacs with mu4e

i’ve been meaning to give yesterbox a try for a while. the general idea is that each day you only deal with email that arrived yesterday or earlier. this forms your inbox for the day, hence “yesterbox”. once you’ve emptied your yesterbox, or at least got through some minimum number, then you can look at emails from today. even then you only really want to be dealing with things that are absolutely urgent. anything else can wait til tomorrow. the motivation for doing this is to get away from the feeling that we are king canute, trying to hold back the tide. i find that when i’m processing my inbox toward zero there’s always a temptation to keep skipping to the new stuff that’s just come in. hiding away the new email until i’ve dealt with the old is a very interesting idea. i use mu4e in emacs for reading my email, and handily the mu search syntax is very flexible so you’d think it would be easy to create a yesterbox filter:

`maildir:"/inbox" date:..1d`

unfortunately, 1d is interpreted as “24 hours ago from right now”, so this filter misses everything that was sent yesterday but less than 24 hours ago. there was a feature request raised on the mu github repository to implement an additional date filter syntax, but it seems to have died a death for now.
in the meantime, the answer to this is to remember that my workplace observes fairly standard office hours, so that anything sent more than a certain number of hours ago is unlikely to have been sent today. the following does the trick:

`maildir:"/inbox" date:.. h`

in my mu4e bookmarks list, that looks like this:

```elisp
(setq mu4e-bookmarks
      '(("flag:unread and not flag:trashed" "unread messages" ?u)
        ("flag:flagged maildir:/archive" "starred messages" ?s)
        ("date:today..now" "today's messages" ?t)
        ("date: d..now" "last days" ?w)
        ("maildir:\"/mailing lists.*\" (flag:unread or flag:flagged)"
         "unread in mailing lists" ?m)
        ("maildir:\"/inbox\" date:.. d" "yesterbox" ?y))) ;; <- this is the new one
```

rewarding good practice in research

from opensource.com on flickr

whenever i’m involved in a discussion about how to encourage researchers to adopt new practices, eventually someone will come out with some variant of the following phrase: “that’s all very well, but researchers will never do xyz until it’s made a criterion in hiring and promotion decisions.” with all the discussion of carrots and sticks i can see where this attitude comes from, and strongly empathise with it, but it raises two main problems: it’s unfair and more than a little insulting to anyone to be lumped into one homogeneous group; and taking all the different possible xyzs into account, that’s an awful lot of hoops to expect anyone to jump through. firstly, “researchers” are as diverse as the rest of us in terms of what gets them out of bed in the morning. some of us want prestige; some want to contribute to a greater good; some want to create new things; some just enjoy the work. one thing i’d argue we all have in common is this: nothing is more offputting than feeling like you’re being strongarmed into something you don’t want to do. if we rely on simplistic metrics, people will focus on those and miss the point.
at best people will disengage and at worst they will actively game the system. i’ve got to do these ten things to get my next pay rise, and still retain my sanity? ok, what’s the least i can get away with and still tick them off? you see it with students taking poorly-designed assessments, and grown-ups are no different. we do need to wield carrots as well as sticks, but the whole point is that these practices are beneficial in and of themselves. the carrots are already there if we articulate them properly and clear the roadblocks (don’t you enjoy mixed metaphors?). creating artificial benefits will just dilute the value of the real ones. secondly, i’ve heard a similar argument made for all of the following practices and more: research data management; open access publishing; public engagement; new media (e.g. blogging); software management and sharing. some researchers devote every waking hour to their work, whether it’s in the lab, writing grant applications, attending conferences, authoring papers, teaching, and so on and so on. it’s hard to see how someone with all this in their schedule can find time to exercise any of these new skills, let alone learn them in the first place. and what about the people who sensibly restrict the hours taken by work to spend more time doing things they enjoy? yes, all of the above practices are valuable, both for the individual and the community, but they’re all new (to most) and hence require more effort up front to learn. we have to accept that it’s inevitably going to take time for all of them to become “business as usual”. i think if the hiring/promotion/tenure process has any role in this, it’s in asking whether the researcher can build a coherent narrative as to why they’ve chosen to focus their efforts in this area or that. you’re not on twitter but your data is being used by research groups across the world? great!
you didn’t have time to tidy up your source code for github but your work is directly impacting government policy? brilliant! we still need to convince more people to do more of these beneficial things, so how? call me naïve, but maybe we should stick to making rational arguments, calming fears and providing low-risk opportunities to learn new skills. acting (compassionately) like a stuck record can help. and maybe we’ll need to scale back our expectations in other areas (journal impact factors, anyone?) to make space for the new stuff. software carpentry: sc test; does your software do what you meant? “the single most important rule of testing is to do it.” — brian kernighan and rob pike, the practice of programming (quote taken from sc test page) one of the trickiest aspects of developing software is making sure that it actually does what it’s supposed to. sometimes failures are obvious: you get completely unreasonable output or even (shock!) a comprehensible error message. but failures are often more subtle. would you notice if your result was out by a few percent, or consistently ignored the first row of your input data? the solution to this is testing: take some simple example input with a known output, run the code and compare the actual output with the expected one. implement a new feature, test and repeat. sounds easy, doesn’t it? but then you implement a new bit of code. you test it and everything seems to work fine, except that your new feature required changes to existing code and those changes broke something else. so in fact you need to test everything, and do it every time you make a change. further than that, you probably want to test that all your separate bits of code work together properly (integration testing) as well as testing the individual bits separately (unit testing). in fact, splitting your tests up like that is a good way of holding on to your sanity.
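as a minimal sketch of that input-output comparison (the mean function and test names here are invented for illustration), a unit test is just code that calls your code and checks the answer:

```python
def mean(values):
    """the code under test."""
    return sum(values) / len(values)

def test_mean_simple():
    # known input, known output
    assert mean([1, 2, 3]) == 2

def test_mean_uses_first_row():
    # guard against the subtle bug of ignoring the first value
    assert mean([10, 2, 3]) == 5
```

a test runner such as pytest will discover functions named test_* and run the whole suite with a single command, which is what makes the test-everything-on-every-change habit practical.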
this is actually a lot less scary than it sounds, because there are plenty of tools now to automate that testing: you just type a simple test command and everything is verified. there are even tools that enable you to have tests run automatically when you check the code into version control, and even automatically deploy code that passes the tests, a process known as continuous integration or ci. the big problems with testing are that it’s tedious, your code seems to work without it and no-one tells you off for not doing it. at the time when the software carpentry competition was being run, the idea of testing wasn’t new, but the tools to help were in their infancy. “existing tools are obscure, hard to use, expensive, don’t actually provide much help, or all three.” the sc test category asked entrants “to design a tool, or set of tools, which will help programmers construct and maintain black box and glass box tests of software components at all levels, including functions, modules, and classes, and whole programs.” the sc test category is interesting in that the competition administrators clearly found it difficult to specify what they wanted to see in an entry. in fact, the whole category was reopened with a refined set of rules and expectations. ultimately, it’s difficult to tell whether this category made a significant difference. where the tools for writing tests used to be sparse and difficult to use, there are now many of them, with several options for most programming languages. with this proliferation, several tried-and-tested methodologies have emerged which are consistent across many different tools, so while things still aren’t perfect they are much better. in recent years there has been a culture shift in the wider software development community towards both testing in general and test-first development, where the tests for a new feature are written first, and then the implementation is coded incrementally until all tests pass.
the current challenge is to transfer this culture shift to the academic research community! tools for collaborative markdown editing photo by alan cleaver i really love markdown. i love its simplicity; its readability; its plain-text nature. i love that it can be written and read with nothing more complicated than a text-editor. i love how nicely it plays with version control systems. i love how easy it is to convert to different formats with pandoc and how it’s become effectively the native text format for a wide range of blogging platforms. one frustration i’ve had recently, then, is that it’s surprisingly difficult to collaborate on a markdown document. there are various solutions that almost work but at best feel somehow inelegant, especially when compared with rock solid products like google docs. finally, though, we’re starting to see some real possibilities. here are some of the things i’ve tried, but i’d be keen to hear about other options. 1. just suck it up to be honest, google docs isn’t that bad. in fact it works really well, and has almost no learning curve for anyone who’s ever used word (i.e. practically anyone who’s used a computer since the s). when i’m working with non-technical colleagues there’s nothing i’d rather use. it still feels a bit uncomfortable though, especially the vendor lock-in. you can export a google doc to word, odt or pdf, but you need to use google docs to do that. plus as soon as i start working in a word processor i get tempted to muck around with formatting. 2. git(hub) the obvious solution to most techies is to set up a github repo, commit the document and go from there. this works very well for bigger documents written over a longer time, but seems a bit heavyweight for a simple one-page proposal, especially over short timescales. who wants to muck around with pull requests and merging changes for a document that’s going to take days to write tops?
this type of project doesn’t need a bug tracker or a wiki or a public homepage anyway. even without github in the equation, using git for such a trivial use case seems clunky. 3. markdown in etherpad/google docs etherpad is a great tool for collaborative editing, but suffers from two key problems: no syntax highlighting or preview for markdown (it’s just treated as simple text); and you need to find a server to host it or do it yourself. however, there’s nothing to stop you editing markdown with it. you can do the same thing in google docs, in fact, and i have. editing a fundamentally plain-text format in a word processor just feels weird though. 4. overleaf/authorea overleaf and authorea are two products developed to support academic editing. authorea has built-in markdown support but lacks proper simultaneous editing. overleaf has great simultaneous editing but only supports markdown by wrapping a bunch of latex boilerplate around it. both ok but unsatisfactory. 5. stackedit now we’re starting to get somewhere. stackedit has both markdown syntax highlighting and near-realtime preview, as well as integrating with google drive and dropbox for file synchronisation. 6. hackmd hackmd is one that i only came across recently, but it looks like it does exactly what i’m after: a simple markdown-aware editor with live preview that also permits simultaneous editing. i’m a little circumspect simply because i know simultaneous editing is difficult to get right, but it certainly shows promise. 7. classeur i discovered classeur literally today: it’s developed by the same team as stackedit (which is now apparently no longer in development), and is currently in beta, but it looks to offer two killer features: real-time collaboration, including commenting, and pandoc-powered export to loads of different formats. anything else? those are the options i’ve come up with so far, but they can’t be the only ones. is there anything i’ve missed? other plain-text formats are available.
i’m also a big fan of org-mode. software carpentry: sc track; hunt those bugs! this competition will be an opportunity for the next wave of developers to show their skills to the world — and to companies like ours. — dick hardt, activestate (quote taken from sc track page) all code contains bugs, and all projects have features that users would like but which aren’t yet implemented. open source projects tend to get more of these as their user communities grow and start requesting improvements to the product. as your open source project grows, it becomes harder and harder to keep track of and prioritise all of these potential chunks of work. what do you do? the answer, as ever, is to make a to-do list. different projects have used different solutions, including mailing lists, forums and wikis, but fairly quickly a whole separate class of software evolved: the bug tracker, which includes such well-known examples as bugzilla, redmine and the mighty jira. bug trackers are built entirely around such requests for improvement, and typically track them through workflow stages (planning, in progress, fixed, etc.) with scope for the community to discuss and add various bits of metadata. in this way, it becomes easier both to prioritise problems against each other and to use the hive mind to find solutions. unfortunately most bug trackers are big, complicated beasts, more suited to large projects with dozens of developers and hundreds or thousands of users. clearly a project of this size is more difficult to manage and requires a certain feature set, but the result is that the average bug tracker is non-trivial to set up for a small single-developer project. the sc track category asked entrants to propose a better bug tracking system. in particular, the judges were looking for something easy to set up and configure without compromising on functionality. the winning entry was a bug-tracker called roundup, proposed by ka-ping yee.
here we have another tool which is still in active use and development today. given that there is now a huge range of options available in this area, including the mighty github, this is no small achievement. these days, of course, github has become something of a de facto standard for open source project management. although ostensibly a version control hosting platform, each github repository also comes with a built-in issue tracker, which is also well-integrated with the “pull request” workflow system that allows contributors to submit bug fixes and features themselves. github’s competitors, such as gitlab and bitbucket, also include similar features. not everyone wants to work in this way though, so it’s good to see that there is still a healthy ecosystem of open source bug trackers, and that software carpentry is still having an impact. software carpentry: sc config; write once, compile anywhere nine years ago, when i first released python to the world, i distributed it with a makefile for bsd unix. the most frequent questions and suggestions i received in response to these early distributions were about building it on different unix platforms. someone pointed me to autoconf, which allowed me to create a configure script that figured out platform idiosyncrasies. unfortunately, autoconf is painful to use – its grouping, quoting and commenting conventions don’t match those of the target language, which makes scripts hard to write and even harder to debug. i hope that this competition comes up with a better solution — it would make porting python to new platforms a lot easier! — guido van rossum, technical director, python consortium (quote taken from sc config page) on to the next software carpentry competition category, then. one of the challenges of writing open source software is that you have to make it run on a wide range of systems over which you have no control.
you don’t know what operating system any given user might be using or what libraries they have installed, or even what versions of those libraries. this means that whatever build system you use, you can’t just send the makefile (or whatever) to someone else and expect everything to go off without a hitch. for a very long time, it’s been common practice for source packages to include a configure script that, when executed, runs a bunch of tests to see what it has to work with and sets up the makefile accordingly. writing these scripts by hand is a nightmare, so tools like autoconf and automake evolved to make things a little easier. they did, and if the tests you want to use are already implemented they work very well indeed. unfortunately they’re built on an unholy combination of shell scripting and the archaic gnu m4 macro language. that means if you want to write new tests you need to understand both of these as well as the architecture of the tools themselves — not an easy task for the average self-taught research programmer. sc conf, then, called for a re-engineering of the autoconf concept, to make it easier for researchers to make their code available in a portable, platform-independent format. the second round configuration tool winner was sapcat, “a tool to help make software portable”. unfortunately, this one seems not to have gone anywhere, and i could only find the original proposal on the internet archive. there were a lot of good ideas in this category about making catalogues and databases of system quirks to avoid having to rerun the same expensive tests again the way a standard ./configure script does. i think one reason none of these ideas survived is that they were overly ambitious, imagining a grand architecture where their tool would provide some overarching source of truth. this is in stark contrast to the way most unix-like systems work, where each tool does one very specific job well and tools are easy to combine in various ways.
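the sort of check a configure script runs can be sketched in python (a simplified illustration of the idea, not how autoconf actually works; the helper name is mine):

```python
import os
import subprocess
import tempfile

def have_header(header: str, cc: str = "cc") -> bool:
    """Check for a system header by trying to compile a trivial program
    that includes it; a successful compile means the header is usable."""
    src = "#include <%s>\nint main(void) { return 0; }\n" % header
    with tempfile.TemporaryDirectory() as tmp:
        c_file = os.path.join(tmp, "check.c")
        with open(c_file, "w") as f:
            f.write(src)
        try:
            result = subprocess.run(
                [cc, "-c", c_file, "-o", os.path.join(tmp, "check.o")],
                capture_output=True,
            )
        except FileNotFoundError:
            # no such compiler on this machine
            return False
    return result.returncode == 0
```

a real configure script runs dozens or hundreds of checks like this one after another, which is part of why running ./configure takes so long.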
in the end though, i think moore’s law won out here, making it easier to do the brute-force checks each time than to try anything clever to save time — a good example of avoiding unnecessary optimisation. add to that the evolution of the generic pkg-config tool from earlier package-specific tools like gtk-config, and it’s now much easier to check for particular versions and features of common packages. on top of that, much of the day-to-day coding of a modern researcher happens in interpreted languages like python and r, which give you a fully-functioning pre-configured environment with a lot less compiling to do. as a side note, tom tromey, another of the shortlisted entrants in this category, is still a major contributor to the open source world. he still seems to be involved in the automake project, contributes a lot of code to the emacs community too and blogs sporadically at the cliffs of inanity. semantic linefeeds: one clause per line i’ve started using “semantic linefeeds”, a concept i discovered on brandon rhodes' blog, when writing content; the idea is described in that article far better than i could manage. it turns out this is a very old idea, promoted way back in the day by brian w kernighan, contributor to the original unix system, co-creator of the awk and ampl programming languages and co-author of a lot of seminal programming textbooks including “the c programming language”. the basic idea is that you break lines at natural gaps between clauses and phrases, rather than simply after the last word before you hit 80 characters. keeping line lengths strictly to 80 characters isn’t really necessary in these days of wide aspect ratios for screens. breaking lines at points that make semantic sense in the sentence is really helpful for editing, especially in the context of version control, because it isolates changes to the clause in which they occur rather than just the nearest 80-character block.
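a crude approximation of the idea in python (real semantic breaks need human judgement about where clauses end; this simply starts a new line after punctuation):

```python
import re

def semantic_linefeeds(text: str) -> str:
    """Start a new line after each clause-ending punctuation mark."""
    return re.sub(r"([,;:.!?])\s+", r"\1\n", text)
```

feeding it a sentence like "i break lines at clause boundaries, because diffs stay small." puts each clause on its own line, which keeps version control changes isolated to the clause that actually changed.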
i also like it because it makes my crappy prose feel just a little bit more like poetry. ☺ software carpentry: sc build; or making a better make software tools often grow incrementally from small beginnings into elaborate artefacts. each increment makes sense, but the final edifice is a mess. make is an excellent example: a simple tool that has grown into a complex domain-specific programming language. i look forward to seeing the improvements we will get from designing the tool afresh, as a whole… — simon peyton-jones, microsoft research (quote taken from sc build page) most people who have had to compile an existing software tool will have come across the venerable make tool (which usually these days means gnu make). it allows the developer to write a declarative set of rules specifying how the final software should be built from its component parts, mostly source code, allowing the build itself to be carried out by simply typing make at the command line and hitting enter. given a set of rules, make will work out all the dependencies between components and ensure everything is built in the right order and nothing that is up-to-date is rebuilt. great in principle but make is notoriously difficult for beginners to learn, as much of the logic for how builds are actually carried out is hidden beneath the surface. this also makes it difficult to debug problems when building large projects. for these reasons, the sc build category called for a replacement build tool engineered from the ground up to solve these problems. the second round winner, sccons, is a python-based make-like build tool written by steven knight. while i could find no evidence of any of the other shortlisted entries, this project (now renamed scons) continues in active use and development to this day. i actually use this one myself from time to time and to be honest i prefer it in many cases to trendy new tools like rake or grunt and the behemoth that is apache ant. 
its python-based sconstruct file syntax is remarkably intuitive and scales nicely from very simple builds up to big and complicated projects, with good dependency tracking to avoid unnecessary recompiling. it has a lot of built-in rules for performing common build & compile tasks, but it’s trivial to add your own, either by combining existing building blocks or by writing a new builder with the full power of python. a minimal sconstruct file looks like this: Program('hello.c') couldn’t be simpler! and you have the full power of python syntax to keep your build file simple and readable. it’s interesting that all the entries in this category apart from one chose to use a python-derived syntax for describing build steps. python was clearly already a language of choice for flexible multi-purpose computing. the exception is the entry that chose to use xml instead, which i think is a horrible idea (oh how i used to love xml!) but has been used to great effect in the java world by tools like ant and maven. what happened to the original software carpentry? “software carpentry was originally a competition to design new software tools, not a training course. the fact that you didn’t know that tells you how well it worked.” when i read this in a recent post on greg wilson’s blog, i took it as a challenge. i actually do remember the competition, although looking at the dates it was long over by the time i found it. i believe it did have impact; in fact, i still occasionally use one of the tools it produced, so greg’s comment got me thinking: what happened to the other competition entries? working out what happened will need a bit of digging, as most of the relevant information is now only available on the internet archive. it certainly seems that by november the domain name had been allowed to lapse and had been replaced with a holding page by the registrar.
there were four categories in the competition, each representing a category of tool that the organisers thought could be improved: sc build, a build tool to replace make; sc conf, a configuration management tool to replace autoconf and automake; sc track, a bug tracking tool; and sc test, an easy to use testing framework. i’m hoping to be able to show that this work had a lot more impact than greg is admitting here. i’ll keep you posted on what i find! changing static site generators: nanoc → hugo i’ve decided to move the site over to a different static site generator, hugo. i’ve been using nanoc for a long time and it’s worked very well, but lately it’s been taking longer and longer to compile the site and throwing weird errors that i can’t get to the bottom of. at the time i started using nanoc, static site generators were in their infancy. there weren’t the huge number of feature-loaded options that there are now, so i chose one and i built a whole load of blogging-related functionality myself. i did it in ways that made sense at the time but no longer work well with nanoc’s latest versions. so it’s time to move to something that has blogging baked-in from the beginning and i’m taking the opportunity to overhaul the look and feel too. again, when i started there weren’t many pre-existing themes so i built the whole thing myself and though i’m happy with the work i did on it, it never quite felt polished enough. now i’ve got the opportunity to adapt one of the many well-designed themes already out there, so i’ve taken one from the hugo themes gallery and tweaked the colours to my satisfaction. hugo also has various features that i’ve wanted to implement in nanoc but never quite got round to it. the nicest one is proper handling of draft posts and future dates, but i keep finding others.
there’s a lot of old content that isn’t quite compatible with the way hugo does things so i’ve taken the old nanoc-compiled content and frozen it to make sure that old links still work. i could probably fiddle with it for years without doing much so it’s probably time to go ahead and publish it. i’m still not completely happy with my choice of theme but one of the joys of hugo is that i can change that whenever i want. let me know what you think! license except where otherwise stated, all content on erambler by jez cope is licensed under a creative commons attribution-sharealike . international license. rdm resources i occasionally get asked for resources to help someone learn more about research data management (rdm) as a discipline (i.e. for those providing rdm support rather than simply wanting to manage their own data). i’ve therefore collected a few resources together on this page. if you’re lucky i might even update it from time to time! first, a caveat: this is very focussed on uk higher education, though much of it will still be relevant for people outside that narrow demographic. my general recommendation would be to start with the digital curation centre (dcc) website and follow links out from there. i also have a slowly growing list of rdm links on diigo, and there’s an rdm section in my list of blogs and feeds too. mailing lists jiscmail is a popular list server run for the benefit of further and higher education in the uk; the following lists are particularly relevant: research-dataman, data-publication, digital-preservation and lis-researchsupport. the research data alliance have a number of interest groups and working groups that discuss issues by email. events international digital curation conference — major annual conference. research data management forum — roughly every six months, places are limited! rda plenary — also every months, but only about in every in europe. books in no particular order: martin, victoria.
demystifying eresearch: a primer for librarians. libraries unlimited, . borgman, christine l. big data, little data, no data: scholarship in the networked world. cambridge, massachusetts: the mit press, . corti, louise, veerle van den eynden, and libby bishop. managing and sharing research data. thousand oaks, ca: sage publications ltd, . pryor, graham, ed. managing research data. facet publishing, . pryor, graham, sarah jones, and angus whyte, eds. delivering research data management services: fundamentals of good practice. facet publishing, . ray, joyce m., ed. research data management: practical strategies for information professionals. west lafayette, indiana: purdue university press, . reports ‘ten recommendations for libraries to get started with research data management’. liber, august . http://libereurope.eu/news/ten-recommendations-for-libraries-to-get-started-with-research-data-management/. ‘science as an open enterprise’. royal society, june . https://royalsociety.org/policy/projects/science-public-enterprise/report/. mary auckland. ‘re-skilling for research’. rluk, january . http://www.rluk.ac.uk/wp-content/uploads/ / /rluk-re-skilling.pdf. journals international journal of digital curation (ijdc) journal of escience librarianship (jeslib) fairphone : initial thoughts on the original ethical smartphone i’ve had my eye on the fairphone for a while now, and when my current phone, an aging samsung galaxy s, started playing up i decided it was time to take the plunge. a few people have asked for my thoughts on the fairphone so here are a few notes. why i bought it the thing that sparked my interest, and the main reason for buying the phone really, was the ethical stance of the manufacturer. the small dutch company have gone to great lengths to ensure that both labour and materials are sourced as responsibly as possible.
they regularly inspect the factories where the parts are made and assembled to ensure fair treatment of the workers and they source all the raw materials carefully to minimise the environmental impact and the use of conflict minerals. another side to this ethical stance is a focus on longevity of the phone itself. this is not a product with an intentionally limited lifespan. instead, it’s designed to be modular and as repairable as possible, by the owner themselves. spares are available for all of the parts that commonly fail in phones (including screen and camera), and at the time of writing the fairphone is the only phone to receive / for repairability from ifixit. there are plans to allow hardware upgrades, including an expansion port on the back so that nfc or wireless charging could be added with a new case, for example. what i like so far, the killer feature for me is the dual sim card slots. i have both a personal and a work phone, and the latter was always getting left at home or in the office or running out of charge. now i have both sims in the one phone: i can receive calls on either number, turn them on and off independently and choose which account to use when sending a text or making a call. the os is very close to “standard” android, which is nice, and i really don’t miss all the extra bloatware that came with the galaxy s. it also has twice the storage of that phone, which is hardly unique but is still nice to have. overall, it seems like a solid, reliable phone, though it’s not going to outperform anything else at the same price point. it certainly feels nice and snappy for everything i want to use it for. i’m no mobile gamer, but there is that distant promise of upgradability on the horizon if you are. what i don’t like i only have two bugbears so far. once or twice it’s locked up and become unresponsive, requiring a “manual reset” (removing and replacing the battery) to get going again.
it also lacks nfc, which isn’t really a deal breaker, but i was just starting to make occasional use of it on the s (mostly experimenting with my yubikey neo) and it would have been nice to try out android pay when it finally arrives in the uk. overall it’s definitely a serious contender if you’re looking for a new smartphone and aren’t bothered about serious mobile gaming. you do pay a premium for the ethical sourcing and modularity, but i feel that’s worth it for me. i’m looking forward to seeing how it works out as a phone. wiring my web i’m a nut for automating repetitive tasks, so i was dead pleased a few years ago when i discovered that ifttt let me plug different bits of the web together. i now use it for tasks such as: syndicating blog posts to social media; creating scheduled/repeating todo items from a google calendar; and making a note to revisit an article i’ve starred in feedly. i’d probably only be half-joking if i said that i spend more time automating things than i save not having to do said things manually. thankfully it’s also a great opportunity to learn, and recently i’ve been thinking about reimplementing some of my ifttt workflows myself to get to grips with how it all works. there are some interesting open source projects designed to offer a lot of this functionality, such as huginn, but i decided to go for a simpler option for two reasons: i want to spend my time learning about the apis of the services i use and how to wire them together, rather than learning how to use another big framework; and i only have a small amazon ec2 server to play with and a heavy ruby on rails app like huginn (plus web server) needs more memory than i have. instead i’ve gone old-school with a little collection of individual scripts to do particular jobs. i’m using the built-in scheduling functionality of systemd, which is already part of a modern linux operating system, to get them to run periodically.
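as a sketch of how that scheduling looks (all unit names and paths here are hypothetical, not taken from my actual setup), a systemd user service plus timer pair might be:

```ini
# ~/.config/systemd/user/plumbing-sync.service (hypothetical)
[Unit]
Description=Run one of my web plumbing scripts

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 %h/web-plumbing/sync.py

# ~/.config/systemd/user/plumbing-sync.timer (hypothetical)
[Unit]
Description=Schedule the web plumbing script

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```

the pair is enabled with systemctl --user enable --now plumbing-sync.timer, after which systemd runs the script on the schedule given by OnCalendar.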
it also means i can vary the language i use to write each one depending on the needs of the job at hand and what i want to learn/feel like at the time. currently it’s all done in python, but i want to have a go at lisp sometime, and there are some interesting new languages like go and julia that i’d like to get my teeth into as well. you can see my code on github as it develops: https://github.com/jezcope/web-plumbing. comments and contributions are welcome (if not expected) and let me know if you find any of the code useful. image credit: xkcd # , automation data is like water, and language is like clothing i admit it: i’m a grammar nerd. i know the difference between ‘who’ and ‘whom’, and i’m proud. i used to be pretty militant, but these days i’m more relaxed. i still take joy in the mechanics of the language, but i also believe that english is defined by its usage, not by a set of arbitrary rules. i’m just as happy to abuse it as to use it, although i still think it’s important to know what rules you’re breaking and why. my approach now boils down to this: language is like clothing. you (probably) wouldn’t show up to a job interview in your pyjamas, but neither are you going to wear a tuxedo or ballgown to the pub. getting commas and semicolons in the right place is like getting your shirt buttons done up right. getting it wrong doesn’t mean you’re an idiot. everyone will know what you meant. it will affect how you’re perceived, though, and that will affect how your message is perceived. and there are former rules that some still enforce but which are nonetheless dropping out of regular usage. there was a time when everyone in an office job wore formal clothing. then it became acceptable just to have a blouse, or a shirt and tie. then the tie became optional and now there are many professions where perfectly well-respected and competent people are expected to show up wearing nothing smarter than jeans and a t-shirt.
one such rule imho is that ‘data’ is a plural and should take pronouns like ‘they’ and ‘these’. the origin of the word ‘data’ is in the latin plural of ‘datum’, and that idea has clung on for a considerable period. but we don’t speak latin and the english language continues to evolve: ‘agenda’ also began life as a latin plural, but we don’t use the word ‘agendum’ any more. it’s common everyday usage to refer to data with singular pronouns like ‘it’ and ‘this’, and it’s very rare to see someone referring to a single datum (as opposed to ‘data point’ or something). if you want to get technical, i tend to think of data as a mass noun, like ‘water’ or ‘information’. it’s uncountable: talking about ‘a water’ or ‘an information’ doesn’t make much sense, but it uses singular pronouns, as in ‘this information’. if you’re interested, the oxford english dictionary also takes this position, while chambers leaves the choice of singular or plural noun up to you. there is absolutely nothing wrong, in my book, with referring to data in the plural as many people still do. but it’s no longer a rule and for me it’s weakened further from guideline to preference. it’s like wearing a bow-tie to work. there’s nothing wrong with it and some people really make it work, but it’s increasingly outdated and even a little eccentric. or maybe you’d totally rock it. like not starting a sentence with a conjunction…

#idcc day : new ideas

well, i did a great job of blogging the conference for a couple of days, but then i was hit by the bug that’s been going round and didn’t have a lot of energy for anything other than paying attention and making notes during the day! i’ve now got round to reviewing my notes so here are a few reflections on day . day was the day of many parallel talks! so many great and inspiring ideas to take in! here are a few of my take-home points.
big science and the long tail

the first parallel session had examples of practical data management in the real world. jian qin & brian dobreski (school of information studies, syracuse university) worked on reproducibility with one of the research groups involved with the recent gravitational wave discovery. “reproducibility” for this work (as with much of physics) mostly equates to computational reproducibility: tracking the provenance of the code and its input and output is key. they also found that in practice the scientists' focus was on making the big discovery, and ensuring reproducibility was seen as secondary. this goes some way to explaining why current workflows and tools don’t really capture enough metadata. milena golshan & ashley sands (center for knowledge infrastructures, ucla) investigated the use of software-as-a-service (saas, such as google drive, dropbox or more specialised tools) as a way of meeting the needs of long-tail science research such as ocean science. this research is characterised by small teams, diverse data, dynamic local development of tools, local practices and difficulty disseminating data. this results in a need for researchers to be generalists, as opposed to “big science” research areas, where they can afford to specialise much more deeply. such generalists tend to develop their own isolated workflows, which can differ greatly even within a single lab. long-tail research also often suffers from a lack of dedicated it support. they found that use of saas could help to meet these challenges, but with a high cost required to cover the needed guarantees of security and stability.

education & training

this session focussed on the professional development of library staff. eleanor mattern (university of pittsburgh) described the immersive training introduced to improve librarians' understanding of the data needs of their subject areas, as part of their rdm service delivery model.
the participants each conducted a “disciplinary deep dive”, shadowing researchers and then reporting back to the group on their discoveries with a presentation and discussion. liz lyon (also university of pittsburgh, formerly ukoln/dcc) gave a systematic breakdown of the skills, knowledge and experience required in different data-related roles, obtained from an analysis of job adverts. she identified distinct roles of data analyst, data engineer and data journalist, and, as well as each role’s distinctive skills, pinpointed common requirements of all three: python, r, sql and excel. this work follows on from an earlier phase which identified an allied set of roles: data archivist, data librarian and data steward.

data sharing and reuse

this session gave an overview of several specific workflow tools designed for researchers. marisa strong (university of california curation centre/california digital libraries) presented dash, a highly modular tool for manual data curation and deposit by researchers. it’s built on their flexible backend, stash, and though it’s currently optimised to deposit in their merritt data repository it could easily be hooked up to other repositories. it captures datacite metadata and a few other fields, and is integrated with orcid to uniquely identify people. in a different vein, eleni castro (institute for quantitative social science, harvard university) discussed some of the ways that harvard’s dataverse repository is streamlining deposit by enabling automation. it provides a number of standardised endpoints such as oai-pmh for metadata harvest and sword for deposit, as well as custom apis for discovery and deposit.
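to give a flavour of what harvesting from one of those standardised endpoints involves, here’s a minimal sketch of a helper that builds an oai-pmh request url (the base url below is made up; the `verb` and `metadataPrefix` parameters come from the oai-pmh v2 protocol itself):

```python
from urllib.parse import urlencode


def oai_request_url(base_url, verb="ListRecords", metadata_prefix="oai_dc", **extra):
    """Build an OAI-PMH harvesting URL for a repository endpoint.

    Only the list/get verbs take a metadataPrefix; other verbs
    (e.g. Identify) just need the verb itself.
    """
    params = {"verb": verb}
    if verb in ("ListRecords", "ListIdentifiers", "GetRecord"):
        params["metadataPrefix"] = metadata_prefix
    params.update(extra)  # e.g. set=... or resumptionToken=...
    return base_url + "?" + urlencode(params)


# hypothetical endpoint url, just for illustration
url = oai_request_url("https://dataverse.example.org/oai", set="mydataverse")
print(url)
# → https://dataverse.example.org/oai?verb=ListRecords&metadataPrefix=oai_dc&set=mydataverse
```

the nice thing about a protocol like this is that the same few lines work against any compliant repository, which is exactly the point of offering standardised endpoints alongside custom apis.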
interesting use cases include:

- an addon for the open science framework to deposit in dataverse via sword
- an r package to enable automatic deposit of simulation and analysis results
- integration with publisher workflows, e.g. open journal systems
- a growing set of visualisations for deposited data

in the future they’re also looking to integrate with dmptool to capture data management plans and with archivematica for digital preservation. andrew treloar (australian national data service) gave us some reflections on the ands “applications programme”, a series of small funded projects intended to address the fourth of their strategic transformations, single use → reusable. he observed that essentially these projects worked because they were able to throw money at a problem until they found a solution: not very sustainable. some of them stuck to a traditional “waterfall” approach to project management, resulting in “the right solution years late”. every researcher’s needs are “special” and communities are still constrained by old ways of working. the conclusions from this programme were that:

- “good enough” is fine most of the time
- adopt/adapt/augment is better than build: existing toolkits let you focus on the % functionality that’s missing
- successful projects involved research champions who can: ) articulate their community’s requirements; and ) promote project outcomes

summary

all in all, it was a really exciting conference, and i’ve come home with loads of new ideas and plans to develop our services at sheffield. i noticed a continuation of some of the trends i spotted at last year’s idcc, especially an increasing focus on “second-order” problems: we’re no longer spending most of our energy just convincing researchers to take data management seriously and are able to spend more time helping them to do it better and get value out of it.
there’s also a shift in emphasis (identified by closing speaker cliff lynch) from sharing to reuse, and making sure that data is not just available but valuable.

#idcc day : open data

the main conference opened today with an inspiring keynote by barend mons, professor in biosemantics, leiden university medical center. the talk had plenty of great stuff, but two points stood out for me. first, prof mons described a newly discovered link between huntington’s disease and a previously unconsidered gene. no-one had previously recognised this link, but on mining the literature, an indirect link was identified in more than % of the roughly million scientific claims analysed. this is knowledge for which we already had more than enough evidence, but which could never have been discovered without such a wide-ranging computational study. second, he described a number of behaviours which should be considered “malpractice” in science:

- relying on supplementary data in articles for data sharing: the majority of this is trash (paywalled, embedded in bitmap images, missing)
- using the journal impact factor to evaluate science and ignoring altmetrics
- not writing data stewardship plans for projects (he prefers this term to “data management plan”)
- obstructing tenure for data experts by assuming that all highly-skilled scientists must have a long publication record

a second plenary talk from andrew sallons of the centre for open science introduced a number of interesting-looking bits and bobs, including the transparency & openness promotion (top) guidelines which set out a pathway to help funders, publishers and institutions move towards more open science. the rest of the day was taken up with a panel on open data, a poster session, some demos and a birds-of-a-feather session on sharing sensitive/confidential data.
there was a great range of posters, but a few that stood out to me were:

- lessons learned about iso (“audit and certification of trustworthy digital repositories”) certification from the british library
- two separate posters (from the universities of toronto and colorado) about disciplinary rdm information & training for liaison librarians
- a template for sharing psychology data developed by a psychologist-turned-information researcher from carnegie mellon university

more to follow, but for now it’s time for the conference dinner!

#idcc day : business models for research data management

i’m at the international digital curation conference (#idcc ) in amsterdam this week. it’s always a good opportunity to pick up some new ideas and catch up with colleagues from around the world, and i always come back full of new possibilities. i’ll try and do some more reflective posts after the conference but i thought i’d do some quick reactions while everything is still fresh. monday and thursday are pre- and post-conference workshop days, and today i attended developing research data management services. joy davidson and jonathan rans from the digital curation centre (dcc) introduced us to the business model canvas, a template for designing a business model on a single sheet of paper. the model prompts you to think about all of the key facets of a sustainable, profitable business, and can easily be adapted to the task of building a service model within a larger institution. the dcc used it as part of the collaboration to clarify curation costs ( c) project, whose output the curation costs exchange is also worth a look. it was a really useful exercise to be able to work through the whole process for an aspect of research data management (my table focused on training & guidance provision), both because of the ideas that came up and also the experience of putting the framework into practice.
it seems like a really valuable tool and i look forward to seeing how it might help us with our rdm service development. tomorrow the conference proper begins, with a range of keynotes, panel sessions and birds-of-a-feather meetings so hopefully more then!

about me

i help researchers communicate and collaborate more effectively using technology, mainly focusing on research data management policy, practice, training and advocacy. i currently work at the british library as data services lead. in my free time, i like to: run; play the accordion; morris dance; climb; cook; read (fiction and non-fiction, mostly scifi & fantasy); write. tbh i barely have time for any of them…

better science through better data #scidata
better science through better doughnuts
jez cope

update: fixed the link to the slides so it works now! last week i had the honour of giving my first ever keynote talk, at an event entitled better science through better data hosted jointly by springer nature and the wellcome trust. it was nerve-wracking but exciting and seemed to go down fairly well. i even got accidentally awarded a phd in the programme — if only it was that easy! the slides for the talk, “supporting open research: the role of an academic library”, are available online (doi: . /shef.data. ), and the whole event was video’d for posterity and viewable online. i got some good questions too, mainly from the clever online question system. i didn’t get to answer all of them, so i’m thinking of doing a blog post or two to address a few more. there were loads of other great presentations as well, both keynotes and -minute lightning talks, so i’d encourage you to take a look at at least some of it. i’ll pick out a few of my highlights.

dr aled edwards (university of toronto)

there’s a major problem with science funding that i hadn’t really thought about before. the available funding pool for research is divided up into pots by country, and often by funding body within a country.
each of these pots has robust processes to award funding to the most important problems and most capable researchers. the problem comes because there is no coordination between these pots, so researchers all over the world end up getting funded to research the most popular problems, leading to a lot of duplication of effort. industry funding suffers from a similar problem, particularly the pharmaceutical industry. because there is no sharing of data or negative results, multiple companies spend billions researching the same dead ends chasing after the same drugs. this is where the astronomical costs of drug development come from. dr edwards presented one alternative, modelled by a company called m k pharma. the idea is to use existing ip laws to try and give academic researchers a reasonable, morally-justifiable and sustainable profit on drugs they develop, in contrast to the current model where basic research is funded by governments while large corporations hoover up as much profit as they possibly can. this new model would develop drugs all the way to human trial within academia, then license the resulting drugs to companies to manufacture with a price cap to keep the medicines affordable to all who need them. core to this effort is openness with data, materials and methodology, and dr edwards presented several examples of how this approach benefited academic researchers, industry and patients compared with a closed, competitive focus.

dr kirstie whitaker (alan turing institute)

this was a brilliant presentation: a practical how-to guide to doing reproducible research, from one researcher to another. i suggest you take a look at her slides yourself: showing your working: a how-to guide to reproducible research. dr whitaker briefly addressed a number of common barriers to reproducible research:

- is not considered for promotion: so it should be!
- held to higher standards than others: reviewers should be discouraged from nitpicking just because the data/code/whatever is available (true unbiased peer review of these would be great though)
- publication bias towards novel findings: it is morally wrong not to publish reproductions, replications etc., so we need to address the common taboo on doing so
- plead the th: if you share, people may find flaws, but if you don’t they can’t — if you’re worried about this you should ask yourself why!
- support additional users: some (much?) of the burden should reasonably be on the reuser, not the sharer
- takes time: this is only true if you hack it together after the fact; if you do it from the start, the whole process will be quicker!
- requires additional skills: important to provide training, but also to judge phd students on their ability to do this, not just on their thesis & papers

the rest of the presentation, the “how-to” guide of the title, was a well-chosen and passionately delivered set of recommendations, but the thing that really stuck out for me is how good dr whitaker is at making the point that you only have to do one of these things to improve the quality of your research. it’s easy to get the impression at the moment that you have to be fully, perfectly open or not at all, but it’s actually ok to get there one step at a time, or even not to go all the way at all! anyway, i think this is a slide deck that speaks for itself, so i won’t say any more!
lightning talk highlights

there was plenty of good stuff in the lightning talks, which were constrained to minutes each, but a few of the things that stood out for me were, in no particular order:

- code ocean — share and run code in the cloud
- dat project — peer to peer data synchronisation tool: can automate metadata creation, data syncing, versioning; set up a secure data sharing network that keeps the data in sync but off the cloud
- berlin institute of health — open science course for students: pre-print paper, course materials
- intermine — taking the pain out of data cleaning & analysis
- nix/nixos as a component of a reproducible paper
- bonej (imagej plugin for bone analysis) — developed by a scientist, used a lot, now has a wellcome-funded rse to develop next version
- esasky — amazing live, online archive of masses of astronomical data

coda

i really enjoyed the event (and the food was excellent too). my thanks go out to:

- the programme committee for asking me to come and give my take — i hope i did it justice!
- the organising team who did a brilliant job of keeping everything running smoothly before and during the event
- the university of sheffield for letting me get away with doing things like this!

blog platform switch

i’ve just switched my blog over to the nikola static site generator. hopefully you won’t notice a thing, but there might be a few weird spectres around til i get all the kinks ironed out.
i’ve made the switch for a couple of main reasons:

- nikola supports jupyter notebooks as a source format for blog posts, which will be useful for including code snippets
- it’s written in python, a language which i actually know, so i’m more likely to be able to fix things that break, customise it and potentially contribute to the open source project (by contrast, hugo is written in go, which i’m not really familiar with)

chat rooms vs twitter: how i communicate now

cc , pixabay. this time last year, brad colbow published a comic in his “the brads” series entitled “the long slow death of twitter”. it really encapsulates the way i’ve been feeling about twitter for a while now. go ahead and take a look. i’ll still be here when you come back. according to my twitter profile, i joined in february as user # , , . it was nearing its rd birthday and, though there were clearly a lot of people already signed up at that point, it was still relatively quiet, especially in the uk. i was a lonely phd student just starting to get interested in educational technology, and one thing that twitter had in great supply was (and still is) people pushing back the boundaries of what tech can do in different contexts. somewhere along the way twitter got really noisy, partly because more people (especially commercial companies) are using it more to talk about stuff that doesn’t interest me, and partly because i now follow , + people and find i get several tweets a second at peak times, which no-one could be expected to handle. more recently i’ve found my attention drawn to more focussed communities instead of that big old shouting match. i find i’m much more comfortable discussing things and asking questions in small focussed communities because i know who might be interested in what. if i come across an article about a cool new python library, i’ll geek out about it with my research software engineer friends; if i want advice on an aspect of my emacs setup, i’ll ask a bunch of emacs users.
i feel like i’m talking to people who want to hear what i’m saying. next to that experience, twitter just feels like standing on a street corner shouting. irc channels (mostly on freenode) and similar things like slack and gitter form the bulk of this for me, along with a growing number of whatsapp group chats. although online chat is theoretically a synchronous medium, i find that i can treat it more as “semi-synchronous”: i can have real-time conversations as they arise, but i can also close them and tune back in later to catch up if i want. now i come to think about it, this is how i used to treat twitter before the , follows happened. i also find i visit a handful of forums regularly, mostly of the reddit link-sharing or stackexchange q&a type. /r/buildapc was invaluable when i was building my latest box; /r/earthporn (very much not nsfw) is just beautiful. i suppose the risk of all this is that i end up reinforcing my own echo chamber. i’m not sure how to deal with that, but i certainly can’t deal with it while also suffering from information overload.

not just certifiable…

a couple of months ago, i went to oxford for an intensive, -day course run by software carpentry and data carpentry for prospective new instructors. i’ve now had confirmation that i’ve completed the checkout procedure so it’s official: i’m now a certified data carpentry instructor! as far as i’m aware, the certification process is now combined, so i’m also approved to teach software carpentry material too. and of course there’s library carpentry too…

ssi fellowship

i’m honoured and excited to be named one of this year’s software sustainability institute fellows. there’s not much to write about yet because it’s only just started, but i’m looking forward to sharing more with you. in the meantime, you can take a look at the fellowship announcement and get an idea of my plans from my application video.

talks

here is a selection of talks that i’ve given.

{{% template %}}
<%!
import arrow %>
date title location
% for talk in post.data("talks"):
% if 'date' in talk:
${date.format('ddd d mmm yyyy')}
% endif
% if 'url' in talk:
% endif
${talk['title']}
% if 'url' in talk:
% endif
${talk.get('location', '')}
% endfor
{{% /template %}}

corra harris ( - ) | new georgia encyclopedia
corra harris ( - )
original entry by catherine oglesby, valdosta state university, / / last edited by nge staff on / /
corra harris was one of the most celebrated women from georgia for nearly three decades in the early twentieth century. she is best known for her first novel, a circuit rider's wife ( ), though she gained a national audience a decade before its publication. from through the s, she published hundreds of essays and short stories and more than a thousand book reviews in such magazines as the saturday evening post, harper's, good housekeeping, ladies home journal, and especially the independent, a highly reputable new york-based periodical known for its political, social, and literary critiques. harris established a reputation as a humorist, southern apologist, polemicist, and upholder of premodern agrarian values. at the same time she criticized southern writers who sentimentalized a past that never existed.
most of harris's nineteen books were novels, though she also published two autobiographies, a travel journal, and a coauthored book of fictional letters. two of her works became feature-length movies. of these, the best known is i'd climb the highest mountain ( ), inspired by a circuit rider's wife. the film was written and produced by georgia native lamar trotti and starred susan hayward and william lundigan. she was the first female war correspondent to go abroad in world war i ( - ).

early life

born corra mae white on march , , on farmhill plantation in the foothills of elbert county, she was the daughter of tinsley rucker white and mary elizabeth mathews white. like many southern women of her day, she did not have an extensive education. she attended elberton female academy but never graduated and, as a writer, was largely self-taught. in she married methodist minister and educator lundy howard harris. they had three children, only one of whom—a daughter named faith—lived beyond infancy. harris's career developed out of financial necessity. her husband's life in the methodist ministry and in ministerial education was punctuated by incapacities from bouts of alcoholism and depression. before and after lundy harris's death in , corra harris assumed responsibility for her immediate and extended family's financial survival. she remained a widow, spending the last two decades of her life at the place she named "in the valley" just outside cartersville in bartow county. there she died in , having outlived her daughter by sixteen years.

career

harris's prolific writing career began in with an impassioned letter to the editor of the independent. william hayes ward wrote a searing editorial about the lynching in georgia on april , , of sam hose, a black man accused of killing a white farmer and raping his wife.
harris replied with a conventional defense of lynching, yet she so impressed the editors with her disarming expression of homespun politics that the independent encouraged further submissions. of all harris's works, the most acclaimed was a circuit rider's wife, the first of a trilogy in the circuit rider series. a circuit rider's widow ( ) and my son ( ) followed. semiautobiographical, a circuit rider's wife is the story of itinerant methodist minister william thompson and his wife, mary, and their life together on a church circuit in the north georgia mountains. the novel received much attention when first published because harris alleged that itinerants and their families suffered needless hardships from the unfair distribution of resources to urban clerics. the book has been noted since that time for its portrayal of rural mountain folk in their earthiness and simplicity. it was reprinted in by the university of georgia press. less well known, though not less relevant for its social critique, is the recording angel ( ). this novel, set in a little town called ruckersville in the hills of north georgia, depicts a place where residents are so devoted to the legacy of their confederate heroes that they have isolated themselves and become culturally barren. harris mocks the lost cause mythology, and again she reveals the excesses and limitations of evangelical religion. this book, along with harris's first novel, reflects her efforts to come to terms with modernity. one of her works, the co-citizens ( ), illustrates especially well the paradoxical nature of harris's personality and politics. the protagonist is loosely based on rebecca latimer felton, a fellow georgian, and harris purportedly wrote the novel to illustrate support for the woman suffrage movement, though she was actually more ambivalent about than supportive of the movement.
although many (including felton) accepted the co-citizens as a pro-suffrage statement, others read it as a barely veiled attack on feminism, a way of life harris lived in practice yet rejected in theory. harris's two autobiographies were quite acclaimed in their day. my book and heart ( ) was more popular with the public, though harris felt that as a woman thinks ( ) was her best and most satisfying work. during the s her publishing career was largely limited to the locally popular "candlelit column," a tri-weekly article in the atlanta journal. harris died of heart-related illness on february , . in harris was inducted into georgia women of achievement.
gnu affero general public license - wikipedia

this article is about the license published by the free software foundation. for the licenses published by affero inc., see affero general public license.

infobox:
author: free software foundation
publisher: free software foundation, inc.
published: november ,
spdx identifier: agpl- . -or-later, agpl- . -only
debian fsg compatible: yes[ ]
fsf approved: yes[ ]
osi approved: yes[ ][ ]
gpl compatible: yes (permits linking with gplv )[ ]
copyleft: yes[ ]
linking from code with a different licence: only with gplv ; agpl terms will apply for the agpl part in a combined work.[ ][ ]
website: gnu.org/licenses/agpl.html

the gnu affero general public license is a free, copyleft license published by the free software foundation in november , and based on the gnu general public license, version , and the affero general public license. the free software foundation has recommended that the gnu agplv be considered for any software that will commonly be run over a network.[ ] the free software foundation explains the need for the license in the case when a free program is run on a server:[ ] the gnu affero general public license is a modified version of the ordinary gnu gpl version .
it has one added requirement: if you run a modified program on a server and let other users communicate with it there, your server must also allow them to download the source code corresponding to the modified version running there. the purpose of the gnu affero gpl is to prevent a problem that affects developers of free programs that are often used on servers. the open source initiative approved the gnu agplv [ ] as an open source license in march after the company funambol submitted it for consideration through its ceo fabrizio capobianco.[ ]

compatibility with the gpl

gnu agplv and gplv licenses each include clauses (in section of each license) that together achieve a form of mutual compatibility for the two licenses. these clauses explicitly allow the "conveying" of a work formed by linking code licensed under the one license against code licensed under the other license,[ ] despite the licenses otherwise not allowing relicensing under the terms of each other.[ ] in this way, the copyleft of each license is relaxed to allow distributing such combinations.[ ]

examples of applications under gnu agpl

main article: list of software under the gnu agpl

stet was the first software system known to be released under the gnu agpl, on november , ,[ ] and is the only known program to be used mainly for the production of its own license.
flask developer armin ronacher noted in that the gnu agpl is a "terrible success, especially among the startup community" as a "vehicle for dual commercial licensing", and gave humhub, mongodb, odoo, rethinkdb, shinken, slic r, sugarcrm, and wurfl as examples.[ ] mongodb dropped the agpl in late- in favor of the "server side public license" (sspl), a variation of gplv that requires that those who provide "the program as a service", accessible to third parties, make the entire source code of all software used to facilitate the service available under the same license.[ ] the sspl has been rejected by the open source initiative and banned by both debian and the fedora project, which state that the license's intent is to discriminate against cloud computing providers offering services based on the software without purchasing its commercial license.[ ][ ]

criticism

héctor martín cantero has criticized the affero gpl for being an eula and for causing side effects.[ ]

see also

list of software under the gnu agpl; free software licensing; gnu general public license; gnu lesser general public license; gnat modified general public license; gpl linking exception; gnu free documentation license; list of software licenses; comparison of free and open-source software licenses

references

jaspert, joerg (november , ). "ftp.debian.org: is agplv dfsg-free?". the debian project. retrieved december , .
list of free-software licences on the fsf website: "we recommend that developers consider using the gnu agpl for any software which will commonly be run over a network."
"osi approved licenses". open source initiative.
"osi approved", licenses, tl;dr legal.
"licenses section ", gnu agplv , gnu project.
"why the affero gpl". the gnu project.
"funambol helps new agplv open source license gain formal osi approval" (press release). funambol. mar , . archived from the original on - - .
the gnu general public license v – gnu project – free software foundation (fsf).
kuhn, bradley m. (november , ). "stet and agplv ". software freedom law center. archived from the original on march , . retrieved june , .
ronacher, armin ( - - ). "licensing in a post copyright world". lucumr.pocoo.org. retrieved - - . "the agplv was a terrible success, especially among the startup community that found the perfect base license to make dual licensing with a commercial license feasible. mongodb, rethinkdb, openerp, sugarcrm as well as wurfl all now utilize the agplv as a vehicle for dual commercial licensing. the agplv makes that generally easy to accomplish as the original copyright author has the rights to make a commercial license possible but nobody who receives the sourcecode itself through the agplv inherits that right. i am not sure if that was the intended use of the license, but that's at least what it's definitely being used for now."
"server side public license (sspl)". mongodb. retrieved - - .
vaughan-nichols, steven j. "mongodb 'open-source' server side public license rejected". zdnet. retrieved - - .
"mongodb's licensing changes led red hat to drop the database from the latest version of its server os". geekwire. - - . retrieved - - .
"twitter profile of hector martin". twitter. retrieved - - .

external links

official website for the gnu affero general public license (gnu agpl).
smith, brett (november , ). "free software foundation releases gnu affero general public license version " (press release).
smith, brett (march , ). gplv and software as a service – also includes info on version of the affero gpl.
kuhn, bradley m. (march , ). "free software foundation announces support of the affero general public license, the first copyleft license for web services" (press release).
github - fcrepo-exts/fcrepo-aws-deployer: a terraform script for deploying fedora repository to aws.
fcrepo-aws-deployer

a terraform script for automatically deploying a fedora repository to aws. by default, fedora is deployed on a t .small instance and is backed by postgresql hosted in rds on a db.t .micro instance.

repository files: elasticbeanstalk, .gitignore, license, readme.md, main.tf, variables.tf

requirements

terraform (https://www.terraform.io/downloads.html)

installation

after installing terraform:

git clone https://github.com/fcrepo-exts/fcrepo-aws-deployer
terraform init

then set up an aws profile in ~/.aws/config (cf. https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html).

deploy fedora

terraform apply -var 'aws_profile=' -var 'ec2_keypair=' -var 'aws_artifact_bucket_name='

nb: make sure that the aws bucket you designate does not already exist, and do not put anything in that bucket that you do not want deleted on teardown.

tear it down

terraform destroy -var 'aws_profile=' -var 'ec2_keypair=' -var 'aws_artifact_bucket_name='

other variables

see ./variables.tf for a complete list of optional parameters.

license: apache- . . languages: hcl.

fail!lab – technology, libraries and the future!
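as an aside to the fcrepo-aws-deployer readme above: rather than passing the required variables on the command line each time, terraform also auto-loads a terraform.tfvars file from the working directory. a minimal sketch, with the variable names taken from the readme (ec2_keypair reconstructed from the garbled "ec _keypair") and purely illustrative placeholder values, not taken from the repository:

```hcl
# terraform.tfvars — illustrative values only; real values depend on your aws account.
aws_profile              = "default"             # a profile name from ~/.aws/config
ec2_keypair              = "my-keypair"          # an existing ec2 key pair in the target region
aws_artifact_bucket_name = "my-fcrepo-artifacts" # must not already exist; emptied on teardown
```

with this file in place, plain `terraform apply` and `terraform destroy` pick up the values without any -var flags.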
luddites, trumpism and change: a crossroads for libraries

"globalization is a proxy for technology-powered capitalism, which tends to reward fewer and fewer members of society." – om malik

corner someone and they will react. we may be seeing this across the world as change, globalization, technology and economic dislocation force more and more people into the corner of benefit-nots. they are reacting out […]

is 3d printing dying?

inc.'s john brandon recently wrote about the slow, sad, and ultimately predictable decline of 3d printing. uh, not so fast. 3d printing is just getting started. for libraries whose adopted mission is to introduce people to emerging technologies, this is a fantastic opportunity to do so. but it has to be done right. another dead […]

the state of the library website

'twas a time when the library website was an abomination. those dark days have lightened significantly. but new clouds have appeared on the horizon. darkest before the dawn: in the dark ages of library websites, users suffered under ux regimes that were rigid, unhelpful and confusing. this was before responsive design became a standard in […]

virtual reality is getting real in the library

my library just received three samsung s devices with gear vr goggles. we put them to work right away. the first thought i had was: wow, this will change everything. my second thought was: wow, i can't wait for apple to make a vr device! the samsung gear vr experience is grainy and fraught with […]

w3c's css framework review

i'm a longtime bootstrap fan, but recently i cheated on my old framework. now i'm all excited by the w3c's new framework. like bootstrap, the w3c's framework comes with lots of nifty utilities and plug-and-play classes and ui features. even if you have a good cms, you'll find many of their code libraries […]

ai first

looking to the future, the next big step will be for the very concept of the "device" to fade away.
over time, the computer itself—whatever its form factor—will be an intelligent assistant helping you through your day. we will move from mobile first to an ai first world. – google founder's letter, april. my library […]

google analytics and privacy

collecting web usage data through services like google analytics is a top priority for any library. but what about user privacy? most libraries (and websites for that matter) lean on google analytics to measure website usage and learn about how people access their online content. it's a great tool. you can learn about where people […]

the l word

i've been working with my team on a vision document for what we want our future digital library platform to look like. this exercise keeps bringing us back to defining the library of the future. and that means addressing the very use of the term, "library." when i first exited my library (and information science) […]

locking down windows

i've recently moved back to windows for my desktop computing. but windows comes with enormous privacy and security issues that people need to take into account, and get under a semblance of control. here's how i did it. there has been much written on this subject, so what i'm including here is more of a […]

killer apps & hacks for windows

did the ux people at microsoft ever test windows ? here are some must-have apps and hacks i've found to make life on windows quick and easy. set hotkeys for apps: sometimes you just want to launch an app from your keyboard. using a method on laptopmag.com, you can do this for most […]

ma rainey - wikipedia

infobox:
birth name: gertrude pridgett
born: april , , columbus, georgia, u.s.
died: december , (aged ), columbus, georgia, u.s.
genres: blues, classic female blues
occupation(s): vocalist
years active: –
labels: paramount
associated acts: rainey and rainey, assassinators of the blues; rabbit foot minstrels; bessie smith; louis armstrong; thomas dorsey

gertrude "ma" rainey (née pridgett; april , – december , )[ ][ ][ ] was an influential american blues singer and early blues recording artist.[ ] dubbed the "mother of the blues", she bridged earlier vaudeville and the authentic expression of southern blues, influencing a generation of blues singers.[ ] gertrude pridgett began performing as a teenager and became known as "ma" rainey after her marriage to will "pa" rainey in . they toured with the rabbit foot minstrels and later formed their own group, rainey and rainey, assassinators of the blues. her first recording was made in . in the following five years, she made over recordings, including "bo-weevil blues" ( ), "moonshine blues" ( ), "see see rider blues" ( ), "ma rainey's black bottom" ( ), and "soon this morning" ( ).[ ] rainey was known for her powerful vocal abilities, energetic disposition, majestic phrasing, and a "moaning" style of singing. these qualities are most evident in her early recordings "bo-weevil blues" and "moonshine blues". rainey recorded with thomas dorsey and louis armstrong, and she toured and recorded with the georgia jazz band. she toured until , when she largely retired from performing and continued as a theater impresario in her hometown of columbus, georgia, until her death four years later.[ ]

early life

there is uncertainty about the birth date of gertrude pridgett.
some sources indicate that she was born in , while most sources assert that she was born on april , .[ ] pridgett claimed to have been born on april , (beginning with the census, taken april , ), in columbus, georgia.[ ] however, the census indicates that she was born in september in alabama, and researchers bob eagle and eric leblanc suggest that her birthplace was in russell county, alabama.[ ][ ] she was the second of five children of thomas and ella (née allen) pridgett, from alabama. she had at least two brothers and a sister, malissa pridgett nix.[ ] in february , ma rainey married william "pa" rainey.[ ] she took on the stage name "ma rainey", which was "a play on her husband's nickname, 'pa'".[ ]

early career

pridgett began her career as a performer at a talent show in columbus, georgia, when she was approximately to years old.[ ][ ] a member of the first african baptist church, she began performing in black minstrel shows. she later claimed that she was first exposed to blues music around .[ ] she formed the alabama fun makers company with her husband, will rainey, but in they both joined pat chappelle's much larger and more popular rabbit's foot company, where they were billed together as "black face song and dance comedians, jubilee singers [and] cake walkers".[ ] in , she was described as "mrs. gertrude rainey, our coon shouter".[ ] she continued with the rabbit's foot company after it was taken over by a new owner, f. s. wolcott, in .[ ] rainey said she found "blues music" when she was performing in missouri one night, and a girl introduced her to a sad song about a man leaving a woman. rainey said she learned the lyrics of the song and added it to her performances. rainey claimed she created the term "blues" when asked what kind of song she was singing.[ ] beginning in , the raineys were billed as rainey and rainey, assassinators of the blues.
wintering in new orleans, she met numerous musicians, including joe "king" oliver, louis armstrong, sidney bechet and pops foster. as the popularity of blues music increased, she became well known.[ ] around this time, she met bessie smith, a young blues singer who was also making a name for herself.[a] a story later developed that rainey kidnapped smith, forced her to join the rabbit's foot minstrels, and taught her to sing the blues; the story was disputed by smith's sister-in-law maud smith.[ ]

recording career

[image: rainey and the band]

from the late s, there was an increasing demand for recordings by black musicians.[ ] in , mamie smith was the first black woman to be recorded.[ ] in , rainey was discovered by paramount records producer j. mayo williams. she signed a recording contract with paramount, and in december she made her first eight recordings in chicago,[ ] including "bad luck blues", "bo-weevil blues" and "moonshine blues". she made more than other recordings over the next five years, which brought her fame beyond the south.[ ][ ] paramount marketed her extensively, calling her the "mother of the blues", the "songbird of the south", the "gold-neck woman of the blues" and the "paramount wildcat".[ ] in , rainey recorded with louis armstrong, including on "jelly bean blues", "countin' the blues" and "see, see rider".[ ] in the same year, she embarked on a tour of the theater owners booking association (toba) in the south and midwest of the united states, singing for black and white audiences.[ ] she was accompanied by the bandleader and pianist thomas dorsey and the band he assembled, the wildcats jazz band.[ ] they began their tour with an appearance in chicago in april and continued, on and off, until .[ ] dorsey left the group in because of ill health and was replaced as pianist by lillian hardaway henderson, the wife of rainey's cornetist fuller henderson, who became the band's leader.[ ] although most of rainey's songs that mention sexuality refer
to love affairs with men, some of her lyrics contain references to lesbianism or bisexuality,[ ] such as the song "prove it on me":

they said i do it, ain't nobody caught me.
sure got to prove it on me.
went out last night with a crowd of my friends.
they must've been women, 'cause i don't like no men.
it's true i wear a collar and tie.
makes the wind blow all the while.[ ]

according to the website queerculturalcenter.org, the lyrics refer to an incident in in which rainey was "arrested for taking part in an orgy at [her] home involving women in her chorus".[ ] the political activist and scholar angela y. davis noted that "'prove it on me' is a cultural precursor to the lesbian cultural movement of the s, which began to crystallize around the performance and recording of lesbian-affirming songs."[ ] at the time, an ad for the song embraced the genderbending outlined in the lyrics and featured rainey in a three-piece suit, mingling with women while a police officer lurks nearby.[ ] unlike many blues singers of her day, rainey wrote at least a third of the songs she sang, including many of her most famous works, such as "moonshine blues" and "ma rainey's black bottom", which would become standards of the "classic blues" genre.[ ] throughout the s, ma rainey had a reputation as one of the most dynamic performers in the united states, due in large part to her songwriting, showmanship and voice.[ ] she and her band could fetch earnings of $ a week on tour with the theater owners' booking association, double what bessie brown and george williams earned and a little over half what bessie smith would ultimately command.[ ] toward the end of the s, live vaudeville went into decline, being replaced by radio and recordings.[ ] rainey's career was not immediately affected; she continued recording for paramount and earned enough money from touring to buy a bus with her name on it.[ ] in , she worked with dorsey again and recorded songs, before paramount terminated her
contract.[ ] her style of blues was no longer considered fashionable by the label.[ ] it is unclear whether she retained the royalties to her songs after she was dropped by paramount.[ ]

personal life and death

ma rainey and pa rainey adopted a son named danny, who later joined his parents' musical act. rainey developed a relationship with bessie smith. they became so close that rumors circulated that their relationship was possibly also romantic in nature.[ ] it was also rumored that smith once bailed ma rainey out of jail.[ ] the raineys separated in .[ ][ ] in , rainey returned to her home town, columbus, georgia, and became the proprietor[ ] of three theatres, the liberty in columbus and the lyric and the airdrome in rome, georgia,[ ] which she ran until her death. she died of a heart attack in .[ ][ ][ ]

legacy and honors

ma rainey created what is now known as "classic blues" while portraying black life as it had rarely been portrayed before. as a musical innovator she built on the minstrelsy and vaudeville performative traditions with comedic timing and a hybrid of the american blues traditions she encountered in her vast tours across the country. she helped to pioneer a genre that appealed to northern and southern, rural and urban audiences.[ ] her signature low, gravelly voice, sung with gusto and an authoritative style, inspired imitators from louis armstrong to janis joplin and bonnie raitt, among others.[ ] in her lyrics, rainey portrayed the black female experience like few others of the time, reflecting a wide range of emotions and experiences.
in her book blues legacies and black feminism, angela davis wrote that rainey's songs are full of women who "explicitly celebrate their right to conduct themselves as expansively and even as undesirably as men".[ ] in her songs, she and other black women sleep around for revenge, drink and party all night, and generally live lives that "transgressed these ideas of white middle class female respectability".[ ] the portrayals of black female sexuality, including those bucking heteronormative standards, fought ideas of what a woman should be and inspired alice walker in developing her characters for the color purple.[ ] bragging about sexual escapades was popular in men's songs at the time, but her use of these themes in her own work established her as fiercely independent and fearless, and many have drawn connections between her use of these themes and their modern use in hip-hop.[ ] rainey was also a fashion icon who pioneered flashy, expensive costuming in her performances, wearing ostrich plumes, satin gowns, sequins, gold necklaces, diamond tiaras, and gold teeth.[ ] rainey was inducted into the blues foundation's hall of fame in and the rock and roll hall of fame in .[ ] in , the u.s. post office issued a -cent commemorative postage stamp honoring her. in , "see see rider blues" (performed in ) was inducted into the grammy hall of fame and was added to the national recording registry by the national recording preservation board of the library of congress.[ ] a small museum was also opened in columbus in to honor ma rainey's legacy.
it is in the very house that she had built for her mother, and in which she herself lived from until her death in .[ ] the first annual ma rainey international blues festival was held in april in columbus, georgia, near the home that rainey owned and lived in at the time of her death.[ ][ ] in , the rainey-mccullers school of the arts opened in columbus, georgia, named in honor of rainey and author carson mccullers.[ ]

in popular culture

sterling a. brown wrote the poem "ma rainey" in , about how "when ma rainey / comes to town" people everywhere would hear her sing. in , sandra lieb wrote the first full-length book about rainey, mother of the blues: a study of ma rainey.[ ] ma rainey's black bottom, a play by august wilson, is a fictionalized account of a recording of her song of the same title set in . theresa merritt and whoopi goldberg starred as rainey in the original and revival broadway productions, respectively. viola davis portrayed rainey in the film adaptation of the play and was nominated for the academy award for best actress.[ ] mo'nique played rainey in the television film bessie, about the life of bessie smith, for which she earned a nomination for the primetime emmy award for outstanding supporting actress in a limited series or movie.[ ]

recordings

this table presents all titles recorded by rainey.[ ] the recording dates are approximated. the classification, by sandra lieb, is almost entirely by form. blues songs which are only partly of twelve-bar structure are classified as mixtures of blues and popular song forms. songs without any twelve-bar or eight-bar structure are classified as non-blues.[ ] the jsp and docd columns refer to the two complete cd reissues.[ ][ ] each entry below gives title | accompaniment | lieb classification | paramount issue | notes:

"bad luck blues" | lovie austin blues serenaders | twelve-bar blues | a
"bo-weavil blues" | lovie austin blues serenaders | mixture of blues and popular song forms | a | another take on jsp & docd
"barrel house blues" | lovie austin blues serenaders | twelve-bar blues | a
"those all night long blues" | lovie austin blues serenaders | non-blues | a | another take on jsp & docd
"moonshine blues" | lovie austin blues serenaders | mixture of blues and popular song forms | a
"last minute blues" | lovie austin blues serenaders | twelve-bar blues | a
"southern blues" | lovie austin blues serenaders | twelve-bar blues | a
"walking blues" | lovie austin blues serenaders | twelve-bar blues | a
"lost wandering blues" | pruit twins | twelve-bar blues | a
"dream blues" | pruit twins | twelve-bar blues | a
"honey where you been so long?" | lovie austin blues serenaders | non-blues | a
"ya-da-do" | her georgia jazz band | non-blues | a | another take on jsp & docd
"those dogs of mine (famous cornfield blues)" | lovie austin blues serenaders | non-blues | a
"lucky rock blues" | lovie austin blues serenaders | mixture of blues and popular song forms | a
"south bound blues" | her georgia jazz band | non-blues | a
"lawd send me a man blues" | her georgia jazz band | non-blues | a
"ma rainey's mystery record" | lovie austin blues serenaders | twelve-bar blues | a
"shave 'em dry blues" | two unknown guitars | eight-bar blues | b
"farewell daddy blues" | unknown guitar | twelve-bar blues | b
"booze and blues" | her georgia jazz band | twelve-bar blues | b
"toad frog blues" | her georgia jazz band | twelve-bar blues | b
"jealous hearted blues" | her georgia jazz band | twelve-bar blues | b
"see see rider blues" | her georgia jazz band | mixture of blues and popular song forms | b | with louis armstrong; another take on jsp & docd
"jelly bean blues" | her georgia jazz band | mixture of blues and popular song forms | b | with louis armstrong
"countin' the blues" | her georgia jazz band | twelve-bar blues | b | with louis armstrong; another take on
jsp & docd / "cell bound blues" her georgia jazz band mixture of blues and popular song forms b / "army camp harmony blues" her georgia jazz band twelve-bar blues b another take on jsp & docd / "explaining the blues" her georgia jazz band twelve-bar blues b another take on jsp & docd / "louisiana hoo doo blues" her georgia jazz band twelve-bar blues b / "goodbye daddy blues" her georgia jazz band mixture of blues and popular song forms b / "stormy seas blues" her georgia band twelve-bar blues b another take on docd / "rough and tumble blues" her georgia band twelve-bar blues b / "night time blues" her georgia band twelve-bar blues b another take on jsp & docd / "levee camp moan" her georgia band non-blues b / "four day honorary scat" her georgia band non-blues b misprint for "'fore day"; another take on jsp & docd / "memphis bound blues" her georgia band twelve-bar blues b / "slave to the blues" her georgia band twelve-bar blues c / "yonder come the blues" her georgia band non-blues c / "titanic man blues" her georgia band mixture of blues and popular song forms c another take on jsp & docd / "chain gang blues" her georgia band twelve-bar blues c / "bessemer bound blues" her georgia jazz band twelve-bar blues c another take on jsp & docd / "oh my babe blues" her georgia band non-blues c / "wringing and twisting blues" her georgia band non-blues c / "stack o'lee blues" her georgia band ballad c / "broken hearted blues" her georgia band twelve-bar blues c another take on docd / "jealousy blues" her georgia band non-blues c another take on docd / "seeking blues" her georgia band mixture of blues and popular song forms c another take on jsp & docd / "mountain jack blues" jimmy blythe (piano) twelve-bar blues c another take on jsp & docd / "down in the basement" her georgia band non-blues c / "sissy blues" her georgia band twelve-bar blues c / "broken soul blues" her georgia band non-blues c / "trust no man" lillian henderson (piano) non-blues c / "morning hour blues" 
jimmy blythe (piano) blind blake (guitar) twelve-bar blues d / "weepin' woman blues" her georgia boys twelve-bar blues d / "soon this morning" her georgia band twelve-bar blues d / "little low mamma blues" blind blake (guitar) possibly leroy picket (violin) twelve-bar blues d / "grievin hearted blues" blind blake (guitar) possibly leroy picket (violin) mixture of blues and popular song forms d / "don't fish in my sea" jimmy blythe (piano) twelve-bar blues d / "big boy blues" her georgia band twelve-bar blues d / "blues oh blues" her georgia band non-blues d / "damper down blues" her georgia band twelve-bar blues d / "gone daddy blues" her georgia band mixture of blues and popular song forms d / "oh papa blues" her georgia band non-blues d / "misery blues" her georgia band non-blues d / "dead drunk blues" her georgia band twelve-bar blues d / "slow driving moan" her georgia band mixture of blues and popular song forms d / "blues the world forgot—part " her georgia band comedy d / "ma rainey's black bottom" her georgia band non-blues d / "blues the world forgot—part " her georgia band comedy d / "hellish rag" her georgia band non-blues d / "georgia cake walk" her georgia band comedy d / "new bo-weavil blues" her georgia band mixture of blues and popular song forms d / "moonshine blues" her georgia band mixture of blues and popular song forms d / "ice bag papa" her georgia band non-blues d / "black cat hoot owl blues" her tub jug washboard band twelve-bar blues e band led by georgia tom / "log camp blues" her tub jug washboard band twelve-bar blues e band led by georgia tom / "hear me talking to you" her tub jug washboard band twelve-bar blues e band led by georgia tom / "hustlin' blues" her tub jug washboard band twelve-bar blues e band led by georgia tom / "prove it on me blues" her tub jug washboard band non-blues e band led by georgia tom / "victim of the blues" her tub jug washboard band twelve-bar blues e band led by georgia tom / "traveling blues" her tub jug 
washboard band twelve-bar blues e band led by georgia tom; another take on jsp and docd / "deep moaning blues" her tub jug washboard band twelve-bar blues e band led by georgia tom; another take on jsp & docd / "daddy goodbye blues" georgia tom dorsey (piano) tampa red (guitar) eight-bar blues e / "sleep talking blues" georgia tom dorsey (piano) tampa red (guitar) twelve-bar blues e another take on jsp & docd / "tough luck blues" georgia tom dorsey (piano) tampa red (guitar) twelve-bar blues e / "blame it on the blues" georgia tom dorsey (piano) tampa red (guitar) twelve-bar blues e / "sweet rough man" georgia tom dorsey (piano) tampa red (guitar) twelve-bar blues e / "runaway blues" georgia tom dorsey (piano) tampa red (guitar) twelve-bar blues e / "screech owl blues" eddie miller (piano) twelve-bar blues e / "black dust blues" eddie miller (piano) twelve-bar blues e / "leaving this morning" georgia tom dorsey (piano) tampa red (guitar) twelve-bar blues e / "black eye blues" georgia tom dorsey (piano) tampa red (guitar) twelve-bar blues e another take on jsp & docd / "ma and pa poorhouse blues" papa charlie jackson (duet & banjo) twelve-bar blues e / "big feeling blues" papa charlie jackson (duet & banjo) twelve-bar blues e

notes

^ sources are unclear on the exact date and circumstances under which rainey and smith met, but it was probably sometime between and .[ ]

references

footnotes

^ a b c d e oliver, paul, "rainey, ma (née pridgett, gertrude)", grove dictionary of music and musicians, oxford university press, retrieved april ^ a b "ma rainey | biography, songs, & facts". encyclopedia britannica. ^ a b c s (december , ). "the true story of ma rainey from netflix's 'ma rainey's black bottom'". women's health. ^ southern, eileen ( ). the music of black americans: a history ( rd ed.). w. w. norton. isbn  - - - . ^ russonello, giovanni ( - - ). "overlooked no more: ma rainey, the 'mother of the blues'". the new york times. issn  - .
retrieved - - . ^ lieb, sandra ( ). mother of the blues: a study of ma rainey ( rd ed.). university of massachusetts press. isbn  - - - . ^ a b lieb, p. ^ eagle, bob; leblanc, eric s. ( ). blues: a regional experience. santa barbara, california: praeger publishers. p.  . isbn  - . ^ census for columbus ward , muscogee, georgia, district , enumeration district , sheet a, line , 'prigett, gertrude, sept , . ^ a b c ma rainey. women in world history: a biographical encyclopedia. encyclopedia.com. updated . retrieved november , . ^ jaxson, the. "ma rainey: the mother of the blues". www.thejaxsonmag.com. ^ lieb, p. ^ robert palmer ( ). deep blues. penguin books. p.  . isbn  - - - - . ^ a b abbott, lynn; seroff, doug ( ). ragged but right: black traveling shows, coon songs, and the dark pathway to blues and jazz. university press of mississippi. p. . ^ lieb, p. ^ a b lieb, p. ^ lieb, p. ^ lieb, p. ^ lieb, p. ^ lieb, p. ^ lieb, p. ^ lieb, p. ^ lieb, p. ^ lieb, p. ^ lieb, p. ^ a b lieb, p. ^ friederich, brandon (june , ). "ma rainey's lesbian lyrics: times she expressed her queerness in song". billboard. retrieved june , . ^ ellison, marvin m.; brown douglas, kelly, eds. ( ). sexuality and the sacred: sources for theological reflection ( nd ed.). westminster john knox press. p.  . isbn  - . ^ a b "gladys bentley". queerculturalcenter.org. archived from the original on november , . retrieved december , . ^ davis, angela y. ( ). blues legacies and black feminism: gertrude "ma" rainey, bessie smith, and billie holiday. vintage. pp.  , . isbn  - . ^ a b c d e f g "ma rainey is best known as a pioneer of the blues. but she broke more than musical barriers". time. retrieved - - . ^ abbott, lynn ( ). the original blues: the emergence of the blues in african american vaudeville. university press of mississippi. isbn  . ^ lieb, p. ^ lieb, p. ^ lieb, p. ^ "who is ma rainey? how the 'mother of the blues' became an icon". entertainment tonight. 
^ "overlooked no more: ma rainey, the 'mother of the blues' (published )". the new york times. - - . issn  - . retrieved - - . ^ lieb, p. ^ santelli, robert. the big book of blues. penguin books. p. . ^ "ma rainey". britannica.com. - - . retrieved - - . ^ davis, angela ( ). blues legacies and black feminism: gertrude "ma" rainey, bessie smith, and billie holiday. penguin random house. isbn  - . ^ mack, kimberly ( ). fictional blues: narrative self-invention from bessie smith to jack white. university of massachusetts press. isbn  . ^ freedman, samuel g. ( - - ). "what black writers owe to music". the new york times. issn  - . retrieved - - . ^ jones, dalyah ( - - ). ""let's have a sex talk": the eras of sex talk by black women in hip-hop". okayplayer. retrieved - - . ^ ma rainey induction year: . rockhall.com. accessed february , . ^ national recording registry choices. loc.gov/rr. accessed february , . ^ "ma rainey | biography, songs, & facts". encyclopedia britannica. retrieved - - . ^ "ma rainey international blues festival - mad about ma blues society". retrieved july . ^ "ma rainey international blues festival". january . archived from the original on january . retrieved july . ^ "rainey-mccullers school of the arts opens as - classes begin". ledger-enquirer.com. retrieved july . ^ lieb, sandra ( ). mother of the blues: a study of ma rainey. university of massachusetts. isbn  . ^ lee, benjamin ( october ). "netflix releases trailer for chadwick boseman's final movie". the guardian. retrieved october . ^ "mo'nique on emmy nomination for 'bessie,' lee daniels' 'empire' snub: 'what you put out is what you get back'". the wrap. - - . retrieved - - . ^ dixon, robert m. w.; godrich, john; and rye, howard w. (compilers) ( ). blues and gospel records – . oxford university press. isbn  - . ^ lieb, pp. – . ^ ma rainey. mother of the blues. -cd box set. jsp records jsp (a–e). ^ ma rainey. complete recorded works in chronological order, vol. : december to c.
august , document records docd . complete recorded works in chronological order, vol. : c. october to c. august , document docd . complete recorded works in chronological order, vol. : c. december to c. june , document docd . complete recorded works in chronological order, vol. : c. november to c. december , document docd . the complete sessions in chronological order, document docd . too late, too late, vol. : – , document docd . too late, too late, vol. : – , document docd . too late, too late, vol. : – , document docd .

sources

lieb, sandra ( ). mother of the blues: a study of ma rainey. university of massachusetts press. isbn  - - - . davis, angela y. ( ). blues legacies and black feminism. pantheon. isbn  - - -x.

further reading

ma rainey and the classic blues singers by derrick stewart-baxter (stein and day, ) isbn  -

external links

ma rainey blues festival official website
gertrude "ma" rainey at the new georgia encyclopedia
ma rainey discography at discogs
ma rainey at allmusic
ma rainey at imdb
ma rainey ( - ) at red hot jazz archive
ma rainey at find a grave
retrieved from "https://en.wikipedia.org/w/index.php?title=ma_rainey&oldid= "
this page was last edited on june , at : (utc). text is available under the creative commons attribution-sharealike license; additional terms may apply.

'news item' and 'résumé' enter public domain january | dorothy parker society
posted on december , february , by kevin fitzpatrick

do you celebrate new year's day or public domain day? for dorothy parker fans, why not both? just as we published last year, turning the calendar pages of u.s. copyright law, on january , , more works of art, film, music, poetry, and writing will enter the public domain. this milestone brings out work published in whose copyrights have now expired. while in some quarters the news that the great gatsby is now out of copyright will be celebrated, dorothy parker makes the list with poems, including her "greatest hits" collection. this is colloquially public domain day. what is happening, how u.s. law is interpreted, and what the hell sonny bono and mickey mouse have to do with copyright law is explained here. what this means is that what parker was doing in during the speakeasy era matters in the covid era. parker published poems and free verse in that can now be used without paying her estate, which is controlled by the naacp.
incredible as it may sound, this also includes the most famous ones, which she gave away to her friend and mentor, franklin p. adams, to be published in the new york world on a single day, august , , in his "conning tower" column. under parker's heading "some beautiful letters" were six of her most beloved pieces: "observation," "social note," "news item" ("men seldom make passes / at girls who wear glasses"), "interview," "comment," and possibly parker's best-known poem, "résumé." these can now be used in any manner; some are already on tattoos, of course. "observation" attained some acclaim when it was included in mrs. parker and the vicious circle, performed by jennifer jason leigh.

observation

if i don't drive around the park,
i'm pretty sure to make my mark.
if i'm in bed each night by ten,
i may get back my looks again,
if i abstain from fun and such,
i'll probably amount to much,
but i shall stay the way i am,
because i do not give a damn.

these are the dorothy parker poems that will enter the public domain in the united states on january , . all were published for the first time in and the copyright will expire:

"song of perfect propriety"
"balto"
"cassandra drops into verse"
"i shall come back"
"biographies"
"a dream lies dead"
"story of mrs. w–"
"little song"
"braggart"
"epitaph"
"threnody"
"epitaph for a darling lady"
"some beautiful letters": "observation," "social note," "news item," "interview," "comment," "résumé"
"convalescent"
"wail"
"testament"
"recurrence"
"august"
"hearthside"
"rainy night"

the only other parker writing published in was a few reviews for the new yorker, which debuted in february . this means everything from the first year of the magazine also enters the public domain on january . (if you produce any coffee mugs or tote bags, please send them our way.) in , parker did not sell any short stories or essays, as far as we know. other books by big names besides f. scott fitzgerald will also be out of copyright.
these include: theodore dreiser’s an american tragedy, ernest hemingway’s in our time, john dos passos’, manhattan transfer, and virginia woolf’s mrs. dalloway. among the films are harold lloyd’s the freshman and the merry widow, and buster keaton’s go west, his people, and lovers in quarantine (extremely appropriate today, even if this film’s quarantine is only one week). the big parade (directed by king vidor), the first major wwi movie and the biggest box office success of the decade, will also be public domain material. so is charlie chaplin’s short, the gold rush. edward hopper, house by the railroad (museum of modern art collection) among the hundreds, if not thousands, of pieces of music, are “always,” by irving berlin, “sweet georgia brown,” by ben bernie, maceo pinkard & kenneth casey, works by gertrude ‘ma’ rainey, the “mother of the blues,” including “army camp harmony blues” (with hooks tilford) and “shave ’em dry” (with william jackson). in visual art, paintings include edward hopper’s house by the railroad (owned by the museum of modern art, new york) and picasso’s les trois danseuses (the three dancers) at the tate gallery, london. public domain day ties into the first two aims of the dorothy parker society, founded in : “to promote the work of dorothy parker” and “to introduce new readers to the work of dorothy parker.” while is good for public domain works, looks to be special too. 
github - dtolnay/anyhow: flexible concrete error type built on std::error
anyhow ¯\_(°ペ)_/¯

this library provides anyhow::Error, a trait object based error type for easy idiomatic error handling in rust applications.

```toml
[dependencies]
anyhow = "1.0"
```

compiler support: requires rustc . +

details

use Result<T, anyhow::Error>, or equivalently anyhow::Result<T>, as the return type of any fallible function. within the function, use ? to easily propagate any error that implements the std::error::Error trait.

```rust
use anyhow::Result;

fn get_cluster_info() -> Result<ClusterMap> {
    let config = std::fs::read_to_string("cluster.json")?;
    let map: ClusterMap = serde_json::from_str(&config)?;
    Ok(map)
}
```

attach context to help the person troubleshooting the error understand where things went wrong. a low-level error like "no such file or directory" can be annoying to debug without more context about what higher level step the application was in the middle of.

```rust
use anyhow::{Context, Result};

fn main() -> Result<()> {
    ...
    it.detach().context("failed to detach the important thing")?;

    let content = std::fs::read(path)
        .with_context(|| format!("failed to read instrs from {}", path))?;
    ...
}
```

```
error: failed to read instrs from ./path/to/instrs.json

caused by:
    no such file or directory (os error 2)
```

downcasting is supported and can be by value, by shared reference, or by mutable reference as needed.

```rust
// if the error was caused by redaction, then return a
// tombstone instead of the content.
match root_cause.downcast_ref::<DataStoreError>() {
    Some(DataStoreError::Censored(_)) => Ok(Poll::Ready(REDACTED_CONTENT)),
    None => Err(error),
}
```

if using the nightly channel, a backtrace is captured and printed with the error if the underlying error type does not already provide its own. in order to see backtraces, they must be enabled through the environment variables described in std::backtrace: if you want panics and errors to both have backtraces, set RUST_BACKTRACE=1; if you want only errors to have backtraces, set RUST_LIB_BACKTRACE=1; if you want only panics to have backtraces, set RUST_BACKTRACE=1 and RUST_LIB_BACKTRACE=0. the tracking issue for this feature is rust-lang/rust# .

anyhow works with any error type that has an impl of std::error::Error, including ones defined in your crate. we do not bundle a derive(Error) macro but you can write the impls yourself or use a standalone macro like thiserror.

```rust
use thiserror::Error;

#[derive(Error, Debug)]
pub enum FormatError {
    #[error("invalid header (expected {expected:?}, got {found:?})")]
    InvalidHeader {
        expected: String,
        found: String,
    },
    #[error("missing attribute: {0}")]
    MissingAttribute(String),
}
```

one-off error messages can be constructed using the anyhow! macro, which supports string interpolation and produces an anyhow::Error.

```rust
return Err(anyhow!("missing attribute: {}", missing));
```

a bail! macro is provided as a shorthand for the same early return.

```rust
bail!("missing attribute: {}", missing);
```

no-std support

in no_std mode, the same api is almost all available and works the same way. to depend on anyhow in no_std mode, disable our default enabled "std" feature in cargo.toml. a global allocator is required.

```toml
[dependencies]
anyhow = { version = "1.0", default-features = false }
```

since the ?-based error conversions would normally rely on the std::error::Error trait which is only available through std, no_std mode will require an explicit .map_err(Error::msg) when working with a non-anyhow error type inside a function that returns anyhow's error type.

comparison to failure

the anyhow::Error type works something like failure::Error, but unlike failure ours is built around the standard library's std::error::Error trait rather than a separate trait failure::Fail. the standard library has adopted the necessary improvements for this to be possible as part of rfc .

comparison to thiserror

use anyhow if you don't care what error type your functions return, you just want it to be easy. this is common in application code. use thiserror if you are a library that wants to design your own dedicated error type(s) so that on failures the caller gets exactly the information that you choose.

license

licensed under either of apache license, version 2.0 or mit license at your option. unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this crate by you, as defined in the apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
communication

how to combat zoom fatigue

five research-based tips.

by liz fosslien and mollie west duffy
april ,

summary. why do we find video calls so draining? in part, it's because they force us to focus more intently on conversations in order to absorb information. they also require us to stare directly at a screen for minutes at a time without any visual or mental break, which is tiring. to make video calls less exhausting for yourself, try using a few research-based tips. first, avoid multitasking. it may be tempting to get other work done on a video call, but switching between tasks can cost you as much as percent of your productive time. the next time you're on a video chat, close any tabs or programs that might distract you, put your phone away, and stay present.
second, take mini breaks during longer calls by minimizing the video, moving it to behind your open applications, or just looking away from your computer now and then. finally, check your calendar for the next few days to see if there are any conversations you could have over slack or email instead. especially in situations where you're communicating with people outside of your organization, don't feel obligated to send a zoom link. often, a phone call is more appropriate.

in these difficult times, we've made a number of our coronavirus articles free for all readers. to get all of hbr's content delivered to your inbox, sign up for the daily alert newsletter.

if you're finding that you're more exhausted at the end of your workday than you used to be, you're not alone. over the past few weeks, mentions of "zoom fatigue" have popped up more and more on social media, and google searches for the same phrase have steadily increased since early march.

why do we find video calls so draining? there are a few reasons. in part, it's because they force us to focus more intently on conversations in order to absorb information. think of it this way: when you're sitting in a conference room, you can rely on whispered side exchanges to catch you up if you get distracted or answer quick, clarifying questions. during a video call, however, it's impossible to do this unless you use the private chat feature or awkwardly try to find a moment to unmute and ask a colleague to repeat themselves.

the problem isn't helped by the fact that video calls make it easier than ever to lose focus. we've all done it: decided that, why yes, we absolutely can listen intently, check our email, text a friend, and post a smiley face on slack within the same thirty seconds.
except, of course, we don't end up doing much listening at all when we're distracted. adding fuel to the fire are many of our work-from-home situations. we're no longer just dialing into one or two virtual meetings. we're also continuously finding polite new ways to ask our loved ones not to disturb us, or tuning them out as they army crawl across the floor to grab their headphones off the dining table. for those who don't have a private space to work, it is especially challenging.

finally, "zoom fatigue" stems from how we process information over video. on a video call the only way to show we're paying attention is to look at the camera. but, in real life, how often do you stand within three feet of a colleague and stare at their face? probably never. this is because having to engage in a "constant gaze" makes us uncomfortable — and tired. in person, we are able to use our peripheral vision to glance out the window or look at others in the room. on a video call, because we are all sitting in different homes, if we turn to look out the window, we worry it might seem like we're not paying attention. not to mention, most of us are also staring at a small window of ourselves, making us hyper-aware of every wrinkle, expression, and how it might be interpreted. without the visual breaks we need to refocus, our brains grow fatigued.

if this all sounds like bad news, don't despair. we have five research-based tips that can help make video calls less exhausting.

avoid multitasking. it's easy to think that you can use the opportunity to do more in less time, but research shows that trying to do multiple things at once cuts into performance. because you have to turn certain parts of your brain off and on for different types of work, switching between tasks can cost you as much as percent of your productive time. researchers at stanford found that people who multitask can't remember things as well as their more singularly focused peers.
the next time you're on a video chat, close any tabs or programs that might distract you (e.g. your inbox or slack), put your phone away, and stay present. we know it's tempting, but try to remind yourself that the slack message you just got can wait minutes, and that you'll be able to craft a better response when you're not also on a video chat.

build in breaks. take mini breaks from video during longer calls by minimizing the window, moving it to behind your open applications, or just looking away from your computer completely for a few seconds now and then. we're all more used to being on video now (and to the stressors that come with nonstop facetime). your colleagues probably understand more than you think — it is possible to listen without staring at the screen for a full thirty minutes. this is not an invitation to start doing something else, but to let your eyes rest for a moment. for days when you can't avoid back-to-back calls, consider making meetings shorter than the standard half-hour and hour to give yourself enough time in between to get up and move around for a bit. if you are on an hour-long video call, make it okay for people to turn off their cameras for parts of the call.

reduce onscreen stimuli. research shows that when you're on video, you tend to spend the most time gazing at your own face. this can be easily avoided by hiding yourself from view. still, onscreen distractions go far beyond yourself. you may be surprised to learn that on video, we not only focus on others' faces, but on their backgrounds as well. if you're on a call with five people, you may feel like you're in five different rooms at once. you can see their furniture, plants, and wallpaper. you might even strain to see what books they have on their shelves. the brain has to process all of these visual environmental cues at the same time. to combat mental fatigue, encourage people to use plain backgrounds (e.g.
a poster of a peaceful beach scene), or agree as a group to have everyone who is not talking turn off their video.

make virtual social events opt-in. after a long day of back-to-back video calls, it's normal to feel drained, particularly if you're an introvert. that's why virtual social sessions should be kept opt-in, meaning whoever owns the event makes it explicit that people are welcome, but not obligated, to join. you might also consider appointing a facilitator if you're expecting a large group. this person can open by asking a question, and then make it clear in what order people should speak, so everyone gets to hear from one another and the group doesn't start talking all at once. it's easy to get overwhelmed if we don't know what's expected of us, or if we're constantly trying to figure out when we should or should not chime in.

switch to phone calls or email. check your calendar for the next few days to see if there are any conversations you could have over slack or email instead. if pm rolls around and you're zoomed-out but have an upcoming one-on-one, ask the person to switch to a phone call or suggest picking up the conversation later so you can both recharge. try something like, "i'd love a break from video calls. do you mind if we do this over the phone?" most likely the other person will be relieved by the switch, too.

for external calls, avoid defaulting to video, especially if you don't know each other well. many people now feel a tendency to treat video as the default for all communication. in situations where you're communicating with people outside of your organization (clients, vendors, networking, etc.) — conversations for which you used to rely on phone calls — you may feel obligated to send out a zoom link instead. but a video call is fairly intimate and can even feel invasive in some situations.
for example, if you're asked to do a career advice call and you don't know the person you're talking to, sticking to phone is often a safer choice. if your client facetimes you with no warning, it's okay to decline and suggest a call instead.

some of these tips might be hard to follow at first (especially that one about resisting the urge to tab-surf during your next zoom call). but taking these steps can help you prevent feeling so exhausted at the thought of another video chat. it's tiring enough trying to adapt to this new normal. make video calls a little easier for yourself.

if our content helps you to contend with coronavirus and other challenges, please consider subscribing to hbr. a subscription purchase is the best way to support the creation of these resources.

liz fosslien is the head of content at humu, a company that makes it easy for teams to improve, every single week. she has designed and led sessions related to emotions at work for audiences including ted, linkedin, google, viacom, and spotify. liz's writing and illustrations have been featured by the economist, freakonomics, and npr. liz is coauthor of the book no hard feelings: the secret power of embracing emotions at work.

mollie west duffy is an organizational development expert and consultant. she was previously an organizational design lead at global innovation firm ideo and a research associate for the dean of harvard business school nitin nohria and renowned strategy professor michael e. porter. she's written for fast company, quartz, stanford social innovation review, entrepreneur, and other digital outlets. liz and mollie are the authors of the book no hard feelings: the secret power of embracing emotions at work. follow them on twitter or instagram @lizandmollie.
bitcoin plunges as china's sichuan province pulls plug on crypto mining

chinese state media reports % of china's bitcoin mining could be coming offline.
by matt novak

a man holds a placard outside the mana convention center where the cryptocurrency conference bitcoin convention is held, in miami, florida, on june , . photo: marco bello (getty images)

bitcoin continued its dramatic plunge to $ , monday morning, down . % from a week earlier as some of china's largest bitcoin mining farms were shut down over the weekend. the bitcoin mining facilities of sichuan province received an order on friday to stop doing business by sunday, according to chinese state media outlet the global times.

the sichuan provincial development and reform commission and the sichuan energy bureau issued an order to all electricity companies in the region on friday to stop supplying electricity to any known crypto mining organizations, including firms that had already been publicly identified, according to the global times. it seems that some local miners were optimistic that sichuan's abundant hydroelectric energy would insulate the region from a cryptocurrency crackdown by authorities, but that optimism was obviously misplaced.

"we had hoped that sichuan would be an exception during the clampdown as there is an electricity glut there in the rainy season. but chinese regulators are now taking a uniform approach, which would overhaul and rein in the booming bitcoin mining industry in china," shentu qingchun, ceo of a shenzhen crypto company, told the global times. videos on social media sites purported to show miners in sichuan turning off their mining machines and packing up their businesses.

miners in china are now looking to sell their equipment overseas, and it appears many have already found buyers. cnbc's eunice yoon tweeted early monday that a chinese logistics firm was shipping , lbs ( , kilograms) of crypto mining equipment to an unnamed buyer in maryland for just $ . per kilogram.
bitcoin hasn't been the only cryptocurrency to experience a price plunge, with ethereum at $ , , down . % from a week earlier. the meme-currency dogecoin is also down dramatically to $ . early monday morning, plunging . % in the past week.

how much lower will crypto prices go? no one knows for sure, of course. but ponzi schemes can run for a relatively long time before they finally collapse. pinboard's maciej ceglowski recently appeared on cnbc to explain how the crypto scam works. the only question is whether bitcoin and the other digital monopoly money has finally run its course or whether there are a few more pump and dumps left in this old horse. don't count the bitcoin diehards out just yet.

zotero: collect, organize, cite, and share your research

move zotero citations between google docs, word, and libreoffice

last year, we added google docs integration to zotero, bringing to google docs the same powerful citation functionality — with support for over , citation styles — that zotero offers in word and libreoffice. today we're adding a feature that lets you move documents between google docs and word or libreoffice while preserving active zotero citations. […]

retracted item notifications with retraction watch integration

zotero can now help you avoid relying on retracted publications in your research by automatically checking your database and documents for works that have been retracted. we're providing this service in partnership with retraction watch, which maintains the largest database of retractions available, and we're proud to help sustain their important work.
how it works […]

scan books into zotero from your iphone or ipad

zotero makes it easy to collect research materials with a single click as you browse the web, but what do you do when you want to add a real, physical book to your zotero library? if you have an iphone or ipad running ios , you can now save a book to zotero just by […]

zotero comes to google docs

we're excited to announce the availability of zotero integration with google docs, joining zotero's existing support for microsoft word and libreoffice. the same powerful functionality that zotero has long offered for traditional word processors is now available for google docs. you can quickly search for items in your zotero library, add page numbers and other […]

improved pdf retrieval with unpaywall integration

as an organization dedicated to developing free and open-source research tools, we care deeply about open access to scholarship. with the latest version of zotero, we're excited to make it easier than ever to find pdfs for the items in your zotero library. while zotero has always been able to download pdfs automatically as you […]

introducing zoterobib: perfect bibliographies in minutes

we think zotero is the best tool for almost anyone doing serious research, but we know that a lot of people — including many students — don't need all of zotero's power just to create the occasional bibliography. today, we're introducing zoterobib, a free service to help people quickly create perfect bibliographies. powered by the same technology […]

zotero . . : new pdf features, faster citing in large documents, and more

the latest version of zotero introduces some major improvements for pdf-based workflows, a new citing mode that can greatly speed up the use of the word processor plugin in large documents, and various other improvements and bug fixes. new pdf features: improved pdf metadata retrieval. while the "save to zotero" button in the zotero connector […]

zotero .
and firefox: frequently asked questions

in a unified zotero experience, we explained the changes introduced in zotero . that affect zotero for firefox users. see that post for a full explanation of the change, and read on for some additional answers. what's changing? zotero . is available only as a standalone program, and zotero . for firefox is being replaced […]

new features for chrome and safari connectors

we are excited to announce major improvements to the zotero connectors for chrome and safari. chrome: the zotero connector for chrome now includes functionality that was previously available only in zotero for firefox. automatic institutional proxy detection: many institutions provide a way to access electronic resources while you are off-campus by signing in to a […]

a unified zotero experience

since the introduction of zotero standalone in , zotero users have had two versions to choose from: the original firefox extension, zotero for firefox, which provides deep integration into the firefox user interface, and zotero standalone, which runs as a separate program and can be used with any browser. starting with the release of zotero […]

the unholy three (1925 film)

from wikipedia, the free encyclopedia

the unholy three (1925)

- directed by: tod browning
- written by: waldemar young (scenario)
- based on: the unholy three (novel) by tod robbins
- produced by: tod browning; irving thalberg (uncredited)
- starring: lon chaney; victor mclaglen
- cinematography: david kesson
- edited by: daniel gray; irving thalberg (uncredited)
- production company: metro-goldwyn-mayer
- distributed by: metro-goldwyn-mayer[nb ]
- release date: august ,
- running time: minutes
- country: united states
- language: silent with english intertitles

the unholy three is a 1925 american silent crime melodrama involving a trio of circus conmen, directed by tod browning and starring lon chaney.
the supporting cast features mae busch, matt moore, victor mclaglen and harry earles. the unholy three marks the establishment of the notable artistic alliance between director browning and actor chaney that would deliver eight outstanding films to m-g-m studios during the late silent film era.[ ][ ][ ] the film was remade in 1930 as a talkie directed by jack conway. chaney and earles repeated their performances as professor echo and tweedledee.[ ]

plot

l to r: victor mclaglen, harry earles, mae busch, lon chaney.

three performers leave a sideshow after tweedledee (harry earles), a midget performer, assaults a young heckler and sparks a melee. the three join together in an "unholy" plan to become wealthy. prof. echo, the ventriloquist, assumes the role of mrs. o'grady, a kindly old grandmother who runs a pet shop, while tweedledee plays her grandchild. hercules (victor mclaglen), the strongman, works in the shop along with the unsuspecting hector mcdonald (matt moore). echo's girlfriend, pickpocket rosie o'grady (mae busch), pretends to be his granddaughter. using what they learn from delivering pets, the trio later commit burglaries, with their wealthy buyers as victims.

on christmas eve, john arlington (an uncredited charles wellesley) telephones to complain that the "talking" parrot (aided by echo's ventriloquism) he bought will not speak. when "granny" o'grady visits him to coax the bird into performing, "she" takes along grandson "little willie". while there, they learn that a valuable ruby necklace is in the house. they decide to steal it that night. as echo is too busy, the other two grow impatient and decide to go ahead without him. the next day, echo is furious to read in the newspaper that arlington was killed and his three-year-old daughter badly injured in the robbery.
hercules shows no remorse whatsoever, relating how arlington pleaded for his life. when a police investigator shows up at the shop, the trio become fearful and decide to frame hector, hiding the jewelry in his room. meanwhile, hector proposes to rosie. she turns him down, but he overhears her crying after he leaves. to his joy, she confesses she loves him, but was ashamed of her shady past. when the police take him away, rosie tells the trio that she will exonerate him, forcing them to abduct her and flee to a mountain cabin. echo takes along his large pet ape (who terrifies hercules).

in the spring, hector is brought to trial. rosie pleads with echo to save hector, promising to stay with him if he does. after echo leaves for the city, tweedledee overhears hercules asking rosie to run away with him (and the loot). tweedledee releases the ape. hercules kills tweedledee right before the ape gets him. at the trial, echo agonizes over what to do, but finally rushes forward and confesses all. both he and hector are set free. when rosie goes to echo to keep her promise, he lies and says he was only kidding. he tells her to go to hector. echo returns to the sideshow, giving his spiel to the customers: "that's all there is to life, friends, ... a little laughter ... a little tear."

cast

- lon chaney as prof. echo, a.k.a. mrs. o'grady or "granny"
- mae busch as rosie o'grady
- matt moore as hector mcdonald
- victor mclaglen as hercules, a.k.a. "son-in-law"
- harry earles as tweedledee, a.k.a. baby "little willie"
- matthew betz as detective regan
- edward connelly as the judge
- william humphrey as defense attorney
- e. alyn warren as prosecuting attorney

production

paris cinélux poster for m-g-m's the unholy three.

in , universal's vice-president irving thalberg departed to join metro-goldwyn-mayer studios as production manager.
director tod browning followed him to m-g-m after producing a number of unimpressive independent films.[ ][ ] at m-g-m he proposed adapting author tod robbins' the unholy three, and thalberg accommodated browning by purchasing the rights and enlisting lon chaney to play the lead; chaney may have requested that browning direct, having worked with him effectively in on universal's outside the law starring priscilla dean.[ ] with the unholy three, thalberg, browning and chaney established a highly creative and profitable collaborative trio that produced seven more films at m-g-m, marking the zenith of both browning's and chaney's careers.[ ][ ]

browning arrived at m-g-m well-versed in the techniques of "trick photography."[ ] the "ape" that dispatches the strongman hercules (victor mclaglen) was actually a three-foot-tall chimpanzee who was made to appear gigantic with camera trickery and perspective shots. when echo removes the ape from his cage, the shot shows echo (with his back turned to the camera) unlocking the cage and walking the ape to the truck. the ape appears to be roughly the same size as echo. this effect was achieved by having harry earles (who played tweedledee in the film) play echo for these brief shots, and then cutting to chaney, making it seem as though the ape is gigantic. (in the remake, the ape was played by charles gemora.)[ ]

release and reception

the unholy three enjoyed tremendous success, adding luster to chaney's reputation as "the man of a thousand faces" and revealing browning as a remarkable film stylist.[ ][ ] on august , the billboard published a list of five short reviews for the movie. this featured such critics as mordaunt hall (times), george gerhard (evening world), richard watts jr. (herald-tribune), and w.r.
(world).[ ] the movie was such a success upon its debut that at its release at the new york capitol theater, it maintained a strong audience attendance for at least two weeks. major edward bowes, who was the managing director at the time, took steps to ensure everyone who didn't get to see the movie the first week of its viewing would get to, by extending the movie's stay. an article written about this event described the movie as "acclaimed as the best crook drama on the screen and one of the most entertaining motion pictures ever made", which speaks, along with its apparent popularity, to the movie's quality.[ ]

sherwood of life magazine praised the movie for its photography and for providing a more psychological horror film rather than relying on movie effects to scare its audience. sherwood noted that the film was shot with great attention given to scenes as individual pieces rather than as parts of one greater project, causing continuity errors. this is explained by the writer as an acceptable outcome considering the overall quality. the review concludes with sherwood declaring the unholy three to be "the best picture of its kind since the miracle man."[ ]

the unholy three was released for the first time on dvd by warner bros. digital distribution on october , . the company would later re-release the film as a part of its -disc lon chaney: the warner archive classics collection on november , , and on june , .[ ]

on rotten tomatoes, the film holds an approval rating of % based on reviews, with a weighted average rating of . / .[ ] author and film critic leonard maltin gave the film two and a half out of four stars.
although maltin noted that the film contained aspects that were less satisfactory, he commended its strong basic idea and chaney's performance.[ ]

themes

as is common of a tod browning film, circus life and unusual bodies play a central role in this movie, along with great use of trompe-l'œil optical illusion.[ ] trompe-l'œil is exercised and played with in the illusion of prof. echo as "mrs. o'grady" and tweedledee as "little willie". the main plot of the movie revolves around the characters' abilities to pass themselves off convincingly as something they are not, an illusion the movie peels back and reasserts both for the other characters and for the audience themselves. contrary to the usual use of this effect, browning makes it a point to disillusion the audience and display the workings of the illusion to create a different sort of viewing stimulation.[ ]

in most browning films, his opinion of the deformed and different becomes evident. the unholy three's plot plays directly with another of browning's favorite topics, dealing with identity, doubles, dual roles, and deformity. this film is unique in that the character tweedledee is the only one of this group that is played by a performer with a deformity and is malicious in nature.[ ]

notes

^ loews was the parent company of mgm.[ ]

footnotes

^ gomery, douglas; pafort-overduin, clara ( ). movie history: a survey ( nd ed.). taylor & francis. p. . isbn .
^ sobchack, p. : "lon chaney's influence on browning seems considerable...their significant collaboration began at mgm and with the unholy three."
^ rosenthal, p. : "the ten films that browning and chaney make together [eight of which made at mgm] were the most successful of either's career. in these films one can sense a personal rapport between the actor and the director which must have been deeper than a mere professional respect."
^ "progressive silent film list: the unholy three". silent era. retrieved march , .
^ eaker, : "chaney died shortly after filming the [remake]...the only film to feature the actor's voice...under conway, who had no feel or vision for the eccentric, the remaining cast in the sound remake are sanitized, hack versions of the far more eccentric and genuine cast in the tod browning directed silent film."
^ sobchack, p. : "browning had been drinking heavily for two years prior to coming to mgm, and, during that time had directed only a few films for independent production companies."
^ eaker, : "browning had languished for ten years as an assignment director who rarely had a feel for the mostly banal material handed him."
^ sobchack, p. : sobchack reports that browning (who produced the film for mgm) convinced thalberg that the story by tod robbins was suitable for adaptation, and thalberg purchased the rights. and: the unholy three was "the first film at mgm for both browning and chaney."
^ robinson, p. : "in he was taken on by metro-goldwyn-mayer and began a series of films with lon chaney that must rank among the most extraordinary pictures ever made."
^ sobchack, p. : "certainly, browning's best work was for mgm...all his films for universal were made before and his maturity as a filmmaker...some credit must go to irving thalberg who, a vice-president at universal while browning was there" brought browning along "when he [thalberg] left in to join the newly-formed mgm."
^ sobchack, : p.
^ blyn, p. - : "of all the strange elements in this film, the chimp is perhaps the most bizarre...when he transforms into a gorilla."
^ eaker, : "the original, silent unholy three ( ) catapulted browning into star director status." sobchack, p. : "the film was, of course, a huge success."
^ rosenthal, p. - : "...only in browning's films is [chaney] endowed with substantial human complexity...chaney demonstrated great sensitivity to the feelings and drives of the characters that browning devised for him to play." and p. : browning's "style".
eaker, : “the unholy three is not, on the surface, as macabre as later browning-chaney films, [but] it has retained its delirious edge well into the st century.” robinson, p. : “in he was taken on by m-g-m and began a series of films with lon chaney that must rank among the most extraordinary pictures ever made.” ^ "as the n. y. reviewers see the films: "the unholy three"." august , . proquest. the billboard. retrieved october , . ^ ""the unholy three" to remain at the capitol." august , . proquest. the new york herald, the new york tribune. retrieved october , . ^ sherwood, r e. "the silent drama." august ( - ). proquest. retrieved october , . ^ "the unholy three ( ) - tod browning". allmovie.com. allmovie. retrieved march , . ^ "the unholy three ( ) - rotten tomatoes". rottentomatoes.com. flixster. retrieved march , . ^ leonard maltin; spencer green; rob edelman (january ). leonard maltin's classic movie guide. plume. p.  . isbn  - - - - . ^ a b thomas, randal kerry. "symbolic structure of the circus in the films of tod browning." june . proquest. retrieved october , . ^ rosenthal, p. : “the ability to assume control of another being is vital to the unholy three. echo the ventriloquist delivers testimony in court through the mouth of hector....as the words pour from the witness stand, browning repeatedly dissolves echo onto hector and vice versa, establishing the performer’s complete responsibility for what is being said.” manon, hugh s. "seeing through seeing through: the "trompe l'oeil" effect and bodily difference in the cinema of tod browning." . proquest. retrieved october , . references blyn, robin. . between silence and sound: ventriloquism and the advent of the voice in the unholy three, in the films of tod browning, ed. bernd herzogenrath, black dog publishing. london. pp. - . isbn  - - -x eaker, alfred. . tod browning retrospective https://alfredeaker.com/ / / /todd-browning-director-retrospective/ retrieved february, . 
herzogenrath, bernd. . the films of tod browning. black dog publishing. london. isbn  - - -x sobchack, vivian. . the films of tod browning: an overview long past, in the films of tod browning, editor bernd herzogenrath, black dog publishing. london. pp. - . isbn  - - -x rosenthal, stuart. . tod browning: the hollywood professionals, volume . the tantivy press. isbn  - - -x external links wikiquote has quotations related to: the unholy three ( film) the unholy three at imdb the unholy three at the tcm movie database the unholy three at allmovie the unholy three at the american film institute catalog the unholy three at rotten tomatoes 
github - roidrage/lograge: an attempt to tame rails' default policy to log everything. www.paperplanes.de/ / / /on-notifications-logsubscribers-and-bringing-sanity-to-rails-logging.html (mit license) 
lograge - taming rails' default request logging lograge is an attempt to bring sanity to rails' noisy and unusable, unparsable and, in the context of running multiple processes and servers, unreadable default logging output. rails' default approach to log everything is great during development, but it's terrible when running in production. it pretty much renders rails logs useless to me. lograge is a work in progress. i appreciate constructive feedback and criticism. my main goal is to improve rails' logging and to show people that they don't need to stick with its defaults anymore if they don't want to. 
instead of trying to solve the problem of having multiple lines per request by switching rails' logger for something that outputs syslog lines or adds a request token, lograge replaces rails' request logging entirely, reducing the output per request to a single line with all the important information, removing all that clutter rails likes to include and that gets mingled up so nicely when multiple processes dump their output into a single file. instead of having an unparsable amount of logging output like this: started get "/" for . . . at - - : : + processing by homecontroller#index as html rendered text template within layouts/application ( . ms) rendered layouts/_assets.html.erb ( . ms) rendered layouts/_top.html.erb ( . ms) rendered layouts/_about.html.erb ( . ms) rendered layouts/_google_analytics.html.erb ( . ms) completed ok in ms (views: . ms | activerecord: . ms) you get a single line with all the important information, like this: method=get path=/jobs/ .json format=json controller=jobscontroller action=show status= duration= . view= . db= . this line is easy to grasp with a single glance and still includes all the relevant information as simple key-value pairs. the syntax is heavily inspired by the log output of the heroku router. it doesn't include any timestamp by default; it assumes you use a proper log formatter to add one. 
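to make the key-value format concrete, here's a minimal plain-ruby sketch of a formatter in the spirit of lograge's default output. this is only an illustration, not lograge's actual implementation, and the request values below are made up for the example:

```ruby
# minimal key-value formatter sketch: flattens a hash of request data
# into a single "key=value" log line, similar in spirit to lograge's
# default key-value output format
def format_log_line(data)
  data.map { |key, value| "#{key}=#{value}" }.join(" ")
end

# hypothetical per-request data, roughly what lograge collects
line = format_log_line(
  method: "GET",
  path: "/jobs/1.json",
  format: "json",
  controller: "JobsController",
  action: "show",
  status: 200
)

puts line
# method=GET path=/jobs/1.json format=json controller=JobsController action=show status=200
```

a one-line-per-request format like this is trivially greppable and parseable, which is the whole point of the exercise.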
installation in your gemfile gem "lograge" enable it in an initializer or the relevant environment config: # config/initializers/lograge.rb # or # config/environments/production.rb rails.application.configure do config.lograge.enabled = true end if you're using rails 's api-only mode and inherit from actioncontroller::api, you must define it as the controller base class which lograge will patch: # config/initializers/lograge.rb rails.application.configure do config.lograge.base_controller_class = 'actioncontroller::api' end if you use multiple base controller classes in your application, specify an array: # config/initializers/lograge.rb rails.application.configure do config.lograge.base_controller_class = ['actioncontroller::api', 'actioncontroller::base'] end you can also add a hook for own custom data # config/environments/staging.rb rails.application.configure do config.lograge.enabled = true # custom_options can be a lambda or hash # if it's a lambda then it must return a hash config.lograge.custom_options = lambda do |event| # capture some specific timing values you are interested in {:name => "value", :timing => some_float.round( ), :host => event.payload[:host]} end end or you can add a timestamp: rails.application.configure do config.lograge.enabled = true # add time to lograge config.lograge.custom_options = lambda do |event| { time: time.now } end end you can also keep the original (and verbose) rails logger by following this configuration: rails.application.configure do config.lograge.keep_original_rails_log = true config.lograge.logger = activesupport::logger.new "#{rails.root}/log/lograge_#{rails.env}.log" end you can then add custom variables to the event to be used in custom_options (available via the event.payload hash, which has to be processed in custom_options method to be included in log output, see above): # app/controllers/application_controller.rb class applicationcontroller < actioncontroller::base def append_info_to_payload(payload) super 
payload[:host] = request.host end end alternatively, you can add a hook for accessing controller methods directly (e.g. request and current_user). this hash is merged into the log data automatically. rails.application.configure do config.lograge.enabled = true config.lograge.custom_payload do |controller| { host: controller.request.host, user_id: controller.current_user.try(:id) } end end to further clean up your logging, you can also tell lograge to skip log messages meeting given criteria. you can skip log messages generated from certain controller actions, or you can write a custom handler to skip messages based on data in the log event: # config/environments/production.rb rails.application.configure do config.lograge.enabled = true config.lograge.ignore_actions = ['homecontroller#index', 'acontroller#an_action'] config.lograge.ignore_custom = lambda do |event| # return true here if you want to ignore based on the event end end lograge supports multiple output formats. the most common is the default lograge key-value format described above. alternatively, you can also generate json logs in the json_event format used by logstash. # config/environments/production.rb rails.application.configure do config.lograge.formatter = lograge::formatters::logstash.new end note: when using the logstash output, you need to add the additional gem logstash-event. you can simply add it to your gemfile like this gem "logstash-event" done. 
the available formatters are: lograge::formatters::lines.new lograge::formatters::cee.new lograge::formatters::graylog .new lograge::formatters::keyvalue.new # default lograge format lograge::formatters::json.new lograge::formatters::logstash.new lograge::formatters::ltsv.new lograge::formatters::raw.new # returns a ruby hash object in addition to the formatters, you can manipulate the data yourself by passing an object which responds to #call: # config/environments/production.rb rails.application.configure do config.lograge.formatter = ->(data) { "called #{data[:controller]}" } # data is a ruby hash end internals thanks to the notification system that was introduced in rails , replacing the logging is easy. lograge unhooks all subscriptions from actioncontroller::logsubscriber and actionview::logsubscriber, and hooks in its own log subscription, but only listening for two events: process_action and redirect_to (in case of standard controller logs). it makes sure that only subscriptions from those two classes are removed. if you happened to hook in your own, they'll be safe. unfortunately, when a redirect is triggered by your application's code, actioncontroller fires two events. one for the redirect itself, and another one when the request is finished. unfortunately the final event doesn't include the redirect, so lograge stores the redirect url as a thread-local attribute and refers to it in process_action. the event itself contains most of the relevant information to build up the log line, including view processing and database access times. while the logsubscribers encapsulate most logging pretty nicely, there are still two lines that show up no matter what. the first line that's output for every rails request, you know, this one: started get "/" for . . . at - - : : + and the verbose output coming from rack-cache: cache: [get /] miss both are independent of the logsubscribers, and both need to be shut up using different means. 
for the first one, the starting line of every rails request log, lograge replaces code in rails::rack::logger to remove that particular log line. it's not elegant, but that line is just unnecessary output that would otherwise clutter the log files. maybe a future version of rails will make this log line an event as well. to remove rack-cache's output (which is only enabled if caching in rails is enabled), lograge disables verbosity for rack-cache, which is unfortunately enabled by default. there, a single line per request. beautiful. action cable starting with version . . , lograge introduced support for action cable logs. this proved to be a particular challenge since the framework code is littered with multiple (and seemingly random) logger calls in a number of internal classes. in order to deal with it, the default action cable logger was silenced. as a consequence, calling logger e.g. in user-defined connection or channel classes has no effect - rails.logger (or any other logger instance) has to be used instead. additionally, while standard controller logs rely on process_action and redirect_to instrumentations only, action cable messages are generated from multiple events: perform_action, subscribe, unsubscribe, connect, and disconnect. perform_action is the only one included in the actual action cable code; the others have been added by monkey patching the actioncable::channel::base and actioncable::connection::base classes. what it doesn't do lograge is opinionated, very opinionated. if the stuff below doesn't suit your needs, it may not be for you. lograge removes actionview logging, which also includes rendering times for partials. if you're into those, lograge is probably not for you. in my honest opinion, those rendering times don't belong in the log file; they should be collected in a system like new relic, librato metrics or some other metrics service that allows graphing rendering percentiles. i assume this for everything that represents a moving target. 
that kind of data is better off being visualized in graphs than dumped (and ignored) in a log file. lograge doesn't yet log the request parameters. this is something i'm actively contemplating, mainly because i want to find a good way to include them, a way that fits in with the general spirit of the log output generated by lograge. however, the payload does already contain the params hash, so you can easily add it in manually using custom_options: # production.rb yourapp::application.configure do config.lograge.enabled = true config.lograge.custom_options = lambda do |event| exceptions = %w(controller action format id) { params: event.payload[:params].except(*exceptions) } end end faq logging errors / exceptions our first recommendation is that you use exception tracking services built for purpose ;) if you absolutely must log exceptions in the single-line format, you can do something similar to this example: # config/environments/production.rb yourapp::application.configure do config.lograge.enabled = true config.lograge.custom_options = lambda do |event| { exception: event.payload[:exception], # ["exceptionclass", "the message"] exception_object: event.payload[:exception_object] # the exception instance } end end the :exception is just the basic class and message whereas the :exception_object is the actual exception instance. you can use both / either. be mindful when including this, you will probably want to cherry-pick particular attributes and almost definitely want to join the backtrace into something without newline characters. handle actioncontroller::routingerror add a get '*unmatched_route', to: 'application#route_not_found' rule to the end of your routes.rb then add a new controller action in your application_controller.rb. def route_not_found render 'error_pages/ ', status: :not_found end # contributing see the contributing.md file for further information. license mit. code extracted from travis ci. (c) mathias meyer see license.txt for details. 
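as a closing illustration of the internals section above: lograge works by unhooking the default log subscribers and subscribing to a couple of named events. here is a toy publish/subscribe sketch of that mechanism in plain ruby. note this is a simplified stand-in written for this document, not the activesupport::notifications api that rails and lograge actually use:

```ruby
# toy notifier: fire named events to whoever subscribed to them,
# roughly analogous to how rails instruments controller actions
class ToyNotifier
  def initialize
    @subscribers = Hash.new { |hash, key| hash[key] = [] }
  end

  def subscribe(event_name, &handler)
    @subscribers[event_name] << handler
  end

  def instrument(event_name, payload)
    @subscribers[event_name].each { |handler| handler.call(payload) }
  end
end

notifier = ToyNotifier.new
log_lines = []

# like lograge, listen only for the event we care about; render events
# and other noise simply have no subscriber and produce no output
notifier.subscribe("process_action") do |payload|
  log_lines << "controller=#{payload[:controller]} action=#{payload[:action]} status=#{payload[:status]}"
end

notifier.instrument("render_partial", partial: "_top")  # ignored: no subscriber
notifier.instrument("process_action", controller: "HomeController", action: "index", status: 200)

puts log_lines.first
# controller=HomeController action=index status=200
```

the design point: because logging is event-driven, replacing it means swapping subscribers, not patching every call site that writes to the log.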
mxadm: a small cli matrix room admin tool
date: - - tags: [rust] [matrix] [open source] [communication]
i’ve enjoyed learning rust (the programming language) recently, but having only really used it for solving programming puzzles i’ve been looking for an excuse to use it for something more practical. at the same time, i’ve been using and learning about matrix (the chat/messaging platform), and running some small rooms there i’ve been a bit frustrated that some pretty common admin tasks don’t have a good user interface in any of the available clients. so… i decided to write a little command-line tool to do a few simple tasks, and it’s now released as mxadm! it’s on crates.io, so if you have rust and cargo available, installing it is as simple as running cargo install mxadm. i’ve only taught it to do a few things so far: list your joined rooms; add/delete a room alias; tombstone a room (i.e. redirect it to a new room). i’ll add more as i need them, and i’m open to suggestions too. it uses matrix-rust-sdk, the matrix client-server sdk for rust, which is built on the lower-level ruma library, along with anyhow for error handling. the folks in the #matrix-rust-sdk:matrix.org room have been particularly kind in helping me get started with it. 
more details from: source code on tildegit | mxadm on crates.io | mxadm on lib.rs. suggestions, code reviews, and pull requests are all welcome, though it will probably take me a while to act on them. enjoy!
webmentions: you can respond to this post, "mxadm: a small cli matrix room admin tool", by liking, boosting or replying to a tweet or toot that mentions it, or by sending a webmention from your own site to https://erambler.co.uk/blog/introducing-mxadm/
email archives: building capacity and community
"email archives: building capacity and community" is a four-year program seeking to build email archiving capacity in archives, libraries, and museums. the program will fund projects of $ , to no more than $ , . the goal of this project is to build a broad community of institutions that can preserve email as part of their research collections. we invite you to be a part of this important initiative by submitting a proposal! 
a short history of craap | hapgood
mike caulfield's latest web incarnation. networked learning, open education, and online digital literacy
a short history of craap
september , december , / mikecaulfield
update: i recently learned that this post has been selected for inclusion in a prestigious acrl yearly list. newcomers unfamiliar with our work may want to check out sift, our alternative to craap, after reading the article.
i reference the history of the so-called “checklist approaches” to online information literacy from time to time, but haven’t put the history down in any one place that’s easily linkable. so if you were waiting for a linkable history of craap and radcab, complete with supporting links, pop open the champagne (portland people, feel free to pop open your $ bottle of barrel-aged beer). today’s your lucky day.
background
in both undergraduate education and k- the most popular approach to online media literacy of the past years has been the acronym-based “checklist”. prominent examples include radcab and craap, both in use since the mid- s. the way that these approaches work is simple: students are asked to choose a text, and then reflect on it using the acronym/initialism as a prompt. as an example, a student may come across an interactive fact-check of the claim that reporters in russia were fired over a story they did that was critical of the russian government. 
it makes claims that a prominent critic of the kremlin, julia ioffe, has made grave errors in her reporting of a particular story on russian journalists, and goes further to detail what they claim is a pattern of controversy: we can use the following craap prompts to analyze the story. craap actually asks the students to ponder and answer separate questions before they can label a piece of content “good”, but we’ll spare you the pain of that and abbreviate here:
currency: is the article current? is it up-to-date? yes, in this case it is! it came out a couple of days ago!
relevance: is the article relevant to my need for information? it’s very relevant. this subject is in the news, and the question of whether russia is the authoritarian state that so many people claim it is, is vital to understanding what our policies should be toward russia, and to what it might mean to want to emulate russia in domestic policy toward journalists.
accuracy: is the article accurate? are there spelling errors, basic mistakes? nope, it’s well written, and very slickly presented, in a multimedia format.
authority: does it cite sources? extensively. it quotes the reporters, it references the articles it is rebutting.
purpose: what is the purpose? it’s a fact-check, so the purpose is to check facts, which is good.
having read the whole thing once and read it again thinking about these questions, maybe we find something to get uneasy about, minutes later. maybe. but none of these questions get to the real issue, which is that this fact check is written by fakecheck, the fact-checking unit of rt (formerly russia today), a news outfit believed by experts to be a kremlin-run “propaganda machine”. once you know that, the rest of this is beside the point, a waste of student time. you are possibly reading a kremlin-written attack on a kremlin critic. time to find another source. 
we brought a can opener to a gunfight having gone through this exercise, it probably won’t shock you that the checklist approach was not designed for the social web. in fact, it was not designed for the web at all.  the checklist approach was designed – initially – for a single purpose: selecting library resources on a limited budget. that’s why you’ll see terms like “coverage” in early checklist approaches — what gets the biggest bang for the taxpayer buck?  these criteria have a long history of slow evolution, but as an example of how they looked years ago, here’s a bulletin from the medical library association in . first it states the goal: in december , chin held a series of meetings of health care professionals for the purpose of examining how these providers assess health information in print and other formats. we hoped to extract from these discussions some principles and techniques which could be applied by the librarians of the network to the selection of health materials. and what criteria did they use? during these meetings eight major categories of selection criteria for printed materials were considered: accuracy, currency, point of view, audience level, scope of coverage, organization, style, and format. if you read this article’s expansions on those categories, you’ll see the striking similarities to what we teach students today, as a technique not to decide on how best to build a library collection, but for sorting through social media and web results. again, i’ll repeat: the criteria here are from , and other more limited versions pre-dated that conference significantly. when the web came along, librarians were faced with another collections challenge: if they were going to curate “web collections” what criteria should they use? the answer was to apply the old criteria. 
this announcement from information superhighway library cyberstacks was typical: although we recognize that the net offers a variety of resources of potential value to many clientele and communities for a variety of uses, we do not believe that one should suspend critical judgment in evaluating quality or significance of sources available from this new medium. in considering the general principles which would guide the selection of world wide web (www) and other internet resources for cyberstacks(sm), we decided to adopt the same philosophy and general criteria used by libraries in the selection of non-internet reference resources (american library association. reference collection development and evaluation committee ). these principles, noted below, offered an operational framework in which resources would be considered as candidate titles for the collection among the criteria mentioned? authority accuracy recency community needs (relevance) uniqueness/coverage look familiar? it wasn’t just cyberstacks of course. to most librarians it was just obvious that whether it was on the web or in the stacks the same methods would apply. so when the web came into being, library staff, tasked with teaching students web literacy, began to teach students how to use collection development criteria they had learned in library science programs. the first example of this i know of is tate & alexander’s paper which outlines a lesson plan using the “traditional evaluation criteria of accuracy, authority, objectivity, currency, and coverage.”  (an image from a circa slideshow from marsha tate and jan alexander on how to teach students to apply library collection development criteria to the web) it’s worth noting that even in the mid s, research showed the checklist approach did not work as a teaching tool. 
In her research on student evaluation of web resources, Ann Scholz-Crane observed how students used the following criteria to evaluate two web sites (both with major flaws as sources): she gave the students the two websites and asked them to evaluate them, one student group with the criteria and one without. She was clear to the students that they had the entire web at their disposal to answer the questions. The results were not so good. Students failed to gather even the most basic information about the larger organization producing one of the sites. In fact, only a few students even noted that a press release on an organization's website was produced by the organization, which should be considered its author. This oversight was all the more concerning because the press release outlined research the organization had done. The students? They saw the relevant author as the contact person listed at the bottom of the press release. That was what was on the page, after all. (If this sounds similar to the FactCheck problem above... oh heck, I don't even have snark left in me anymore. Yeah. It's the same issue.) What was going on? In noting a major difference in how the expert evaluators approached the site versus the way the students did, Scholz-Crane writes: "No instances were found where it could be determined that the students went outside the original document to locate identifying information. For example, the information about the author of Site A that appeared on the document itself was a short phrase listing the author as a regular contributor to the magazine... However a link from the document leads the reader to a fuller description of the author's qualifications and a caution to remember that the author is not a professional and serves only as a friend/mentor. None of the students mentioned any of the information contained in the fuller description as part of the author's qualifications."
This is in stark contrast, she notes, to the essay evaluations of the expert evaluators, where all four librarians consulted sources within the document's larger web site and sources found elsewhere on the web. Worse, although the checklist was meant to provide a holistic view of the document, most students in practice focused their attention on a single criterion, although which criterion that was varied from student to student. The supposed holistic evaluation was not holistic at all. Finally, the use of the control group showed that the students without the criteria were already using the same criteria in their responses: far from being a new way of looking at documents, it was in fact a set of questions students were already asking themselves about documents, to little effect. You know how this ends. The fact that the checklist didn't work didn't slow its growth. In fact, adoption accelerated. In 2004, Sarah Blakeslee at California State University noted the approach was already pervasive, even if the five terms most had settled on were not memorable: "Last spring while developing a workshop to train first-year experience instructors in teaching information literacy, I tried to remember off the top of my head the criteria for evaluating information resources. We all know the criteria I'm referring to. We've taught them a hundred times and have stumbled across them a million more. Maybe we've read them in our own library's carefully crafted evaluation handout or found one of the thousands of web documents that appear in a fraction of a second when we type 'evaluating information' into the Google search box."
Blakeslee saw the lack of memorability of the prompts as a stumbling block: "Did I trust them to hold up a piece of information, to ponder, to wonder, to question, and to remember or seek the criteria they had learned for evaluating their source that would instantly generate the twenty-seven questions they needed to ask before accepting the information in front of them as 'good'? Honestly, no, I didn't. So what could I do to make this information float to the tops of their heads when needed?" After trying some variations on the order of accuracy, authority, objectivity, currency, and coverage ("My first efforts were less than impressive. AAOCC? CCOAA? COACA?"), a little selective use of synonyms produced the final arrangement, in a handout that quickly made its way around the English-speaking world. But the criteria were essentially the same as in the earlier lists, as was the process. And so we taught this and its variations for almost twenty years even though it did not work, and most librarians I've talked to realized it didn't work many years back but didn't know what else to do. So let's keep that in mind as we consider what to do in the future: contrary to public belief, we did teach students online information literacy. It's just that we taught them methodologies that were developed to decide whether to purchase reference sets for libraries. It did not work out well.

Thoughts on "A Short History of CRAAP"

Scott Robison: "CACOA"

Mike Caulfield: Oh my gosh. Portlandia invented CRAAP.

blgriffin: Whatever the history of evaluation, the checklist elements are still relevant, so it makes sense that the approach has not changed; nor is it some travesty of teaching, or necessarily the reason people don't get it right.
Experienced people do it (internal checklists) intuitively, as well as look outside of a source (or website); "lateral" searching is no newer than the other elements like currency, and thinking critically about other long-standing elements of a source; it just adds another point to the checklist. The issue isn't how awful checklists are, it's that we've become lazy and don't think critically about what we are seeing, we expect instant gratification with the internet, and there is a whole heck of a lot more crap out there now than would have fit in an old-fashioned, curated print library.

Mike Caulfield: The issue is that the volume of decisions we must make on the web, combined with increased uncertainty around sources, does require different approaches than longer assessments under information scarcity. When teaching does not start by considering the actual environment in which skills will be practiced and knowledge applied, I would argue it *is* a travesty.

blgriffin: Of course one has to cater learning/teaching to the environment it will be used in and, as importantly, to the developmental level of the learner. I'm merely arguing that checklists, if adapted to the web, which anyone "teaching" this would do, can be a helpful starting point. They are a tool, like so many others, and the fact that they evolved from print standards does not make it a travesty to apply updated versions of them now. Most of the bullet points (currency, authority, etc.) remain relevant and worthy of discussion and, probably more importantly, practice. The bottom line is, there is no magic bullet, and it takes time to truly verify web sources, whether one begins with a checklist or something else.
In the old days one could pretty much rely on what the librarian had at hand with less critical analysis; now we are pretty much on our own, and many folks will not take the time, or do not realize that they have to, to get "good" info on the web.

Mike Caulfield: blgriffin, I'm thankful for your input here, and don't want to belabor the point much; I know you have some expertise in this area. But I have taught with a checklist approach (way back, for both web literacy and statistical literacy) and I have taught with a more heuristic-driven approach, and while the latter is not a magic bullet, there has been a world of difference in effect. One of the more interesting elements of the checklist problem is noted in the article and is also something that Gigerenzer has noted: when students are presented with many things to evaluate, they tend to quickly zero in on whatever property is easiest to assess, and simply apply that. (The study does not say this specifically, but hints at it with the description.) By trade I'm a faculty developer and instructional designer; I teach faculty how to teach and build courses. And I used to start workshops with a big slide reading "students economize," which is to say that students will take the simplest possible lesson from what you teach, and that because of this, what a method teaches and what a student learns from it are often opposite things. I have many problems with the checklist, but my biggest is how I see it get "economized" by students in practice, because they are overwhelmed by it, even when taught carefully. We do teach some elements of the checklist later in courses, but: * not as a checklist * always bound to a domain (e.g. currency in news vs.
currency in research, etc., something the earliest checklist approaches did that was lost) * only after students have developed the habits and heuristics of quick sorting and want to learn more about the harder calls.

blgriffin: Thank you for your more detailed elaboration/reiteration of why you've found checklists to be self-defeating, i.e. that students over-simplify (economize, maybe by necessity?), which I see over and over again as well. And I would by no means put all faith in teaching via checklist, nor object to calling them something else. When I teach this I try to get students (high school freshmen and sophomores) to come up with their own criteria (checklist?) based on what we are looking at: news, social media, or scholarly sources (the last is very tough for them to get at this level, and I have a separate module or two for it). I'd be curious to know if this quick-sorting is really a more effective way to get them to be more critical information users and to better sort the wheat from the chaff. It sounds like it has been for your students and teachers. Perhaps I'll try it in the future (and be sure to give you credit!). Thanks again!

fullertones: As a high school librarian, I've been using the CRAAP acronym for a while to get students thinking broadly about the quality of sources. I never was satisfied, however, with the approach some teacher-librarians use of having students assign points based on a CRAAP rubric. When I first read the four moves a couple of years ago, I initially thought it would be too difficult for students to remember and apply, but I'm trying. Along with encouraging use of fact-checking sites, I've been getting students to open another tab to check authority by reading laterally, searching to see what others say about the source. The practices I encourage students to use are still evolving, and I would love to learn more about what others are doing successfully. Thank you for this discussion.
Everybody's Libraries
Libraries for everyone, by everyone, shared with everyone, about everything

Public Domain Day: Honoring a Lost Generation
It's Public Domain Day again. In much of Europe, and other countries with "life+70 years" copyright terms, works by authors who died in 1950, such as George Orwell, Karin Michaelis, George Bernard Shaw, and Edna St.
Vincent Millay, have joined… Continue reading

Counting Down to the New Year in the Public Domain
We're rapidly approaching another Public Domain Day, the day at the start of the year when a year's worth of creative work joins the public domain. This will be the third year in a row that the US will have… Continue reading

From Our Subjects to Yours (and Vice Versa)
(tl;dr: I'm starting to implement services and publish data to support searching across library collections that use customized subject headings, such as the increasingly adopted substitutes for LCSH terms like "illegal aliens." Read on for what I'm doing, why, and where… Continue reading

Everybody's Library Questions: Finding Films in the Public Domain
Welcome to another installment of Everybody's Library Questions, where I give answers to questions people ask me (in comments or email) that seem to be useful for general consumption. Before I start, though, I want to put in a plug… Continue reading

Build a Better Registry: My Intended Comments to the Library of Congress on the Next Register of Copyrights
The Library of Congress is seeking public input on abilities and priorities desired for the next Register of Copyrights, who heads the Copyright Office, a department within the Library of Congress. The deadline for comments as I write this is… Continue reading

Welcome to Everybody's Online Libraries
As coronavirus infections spread throughout the world, lots of people are staying home to slow down the spread and save lives. In the US, many universities, schools, and libraries have closed their doors. (Here's what's happening at the library where… Continue reading

Public Domain Day 2020: Coming Around Again
I'm very happy for 2020 to be arriving.
As the start of the 2020s, it represents a new decade in which we can have a fresh start, and hope to make better decisions and have better outcomes than some of… Continue reading

Vision: Rhapsody in Blue by George Gershwin
It's only a few hours from the new year as I write this, but before I ring in the new year, and a new year's worth of public domain material, I'd like to put in a request for what music… Continue reading

Vision: Ding Dong Merrily on High by George Ratcliffe Woodward and Others
It's beginning to sound a lot like Christmas everywhere I go. The library where I work had its holiday party earlier this week, where I joined librarian colleagues singing Christmas, Hanukkah, and winter-themed songs in a pick-up chorus. Radio stations… Continue reading

Vision: The Most Dangerous Game by Richard Connell
"Be a realist. The world is made up of two classes: the hunters and the huntees. Luckily, you and I are hunters." Sanger Rainsford speaks these words at the start of "The Most Dangerous Game," one of the most famous short… Continue reading

Supposedly Green Cryptocurrency Chia Is Just Another Way of Wasting Resources (Foreign Policy)
Argument: Chia Is a New Way to Waste Resources for Cryptocurrency
What Bitcoin does for electricity and Ethereum for video cards, Chia does for hard disks.
By David Gerard, the author of the book Attack of the 50 Foot Blockchain and the cryptocurrency and blockchain news blog of the same name. May 2021.
(Photo: an employee displays a physically destroyed hard disk drive at the Tokyo Eco Recycle company. Toshifumi Kitamura/AFP via Getty Images)

Bitcoin, the first cryptocurrency, has a problem: it uses ghastly quantities of electricity and thus generates as much carbon emissions as a medium-sized country. This is by design. A new cryptocurrency, Chia, avoids this problem, in favor of creating huge amounts of a different kind of waste. Bitcoin was meant to be decentralized so as to stay out of any central control. The "proof-of-work mining" process allocates fresh coins by a lottery. You enter this lottery by guessing numbers and running calculations on them as fast as possible; that is, you waste electricity to show your commitment. There is one winner every 10 minutes; as more people join the lottery, the guessing gets harder so that there stays one winner every 10 minutes.
As long as people can make money wasting electricity, they'll add more computing resources to win more bitcoins in an ever-escalating arms race. Bitcoin thus uses as much electricity as the Netherlands. Proof of work has economies of scale: the bigger you are, the more efficiently you can create lottery tickets. Despite the grandiose claims of putting financial power in the public's hands, bitcoin mining became functionally centralized years ago. The majority of bitcoin mining is done by three large pools. An electricity outage in one small area of Xinjiang in April took a quarter of all bitcoin mining offline. Bitcoin mining also uses specialized computers that just calculate cryptographic hashes as fast as possible; once the mining computers are obsolete, they're just e-waste. Other cryptocurrencies are similarly wasteful. Ethereum uses as much electricity as Peru. There are smaller cryptocurrencies that don't use this process, but bitcoin and ethereum are the two cryptos that are widely exchangeable for actual money. Cryptos failed as usable currencies, so their only remaining use case is to be traded in the hope of actual money. Bram Cohen is famed as the creator of the hugely popular BitTorrent file distribution protocol.
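The lottery mechanics described above can be sketched in a few lines of Python. This is a toy illustration, not Bitcoin's actual protocol or block format: the `mine` function and the `difficulty_bits` parameter are invented for the example. The point it demonstrates is that expected work doubles with every extra bit of difficulty, which is the mechanism that turns more miners into more wasted electricity rather than more winners.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int, max_tries: int = 10_000_000):
    """Try nonces until SHA-256(block_data + nonce) falls below the target.

    Expected work is about 2**difficulty_bits hash evaluations, so each
    added difficulty bit roughly doubles the electricity spent per winner.
    """
    target = 1 << (256 - difficulty_bits)  # smaller target = harder lottery
    for nonce in range(max_tries):
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None  # gave up; vanishingly unlikely at this toy difficulty

# A toy difficulty of 16 bits takes ~65,000 hashes on average; Bitcoin's
# real network difficulty is dozens of bits higher, hence the arms race.
print(mine(b"example block header", 16))
```

Anyone can verify a winning ticket with a single hash, which is what makes the scheme work as a lottery: producing a ticket is expensive, checking one is nearly free.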
Cohen turned his attention to the proof-of-work problem. He explicitly wanted a "green bitcoin," so Chia, founded by Cohen, works very much like bitcoin apart from proof of work. Chia's business white paper advocates the same conspiracy-theory economics embraced by the bitcoin subculture: it assumes that governments fundamentally cannot be trusted to issue money and that wasting a country's worth of electricity is a better alternative. The resource Cohen chose for his so-called green cryptocurrency was computer hard disk space. This is a generic, reusable form of computer hardware, it's widely available, and he thought it would use less electricity than proof of work. Cohen anticipated that casual Chia users could use "the unused storage of your laptop, desktop, or corporate network." To "farm" Chia, the software writes a "plot," a large chunk of cryptographic data, to the disk. The Chia blockchain software broadcasts a "challenge" every few seconds, thousands of times a day; if you have a close enough answer to the challenge, you win two fresh Chia tokens. As more disk space is added to the network, the challenges get harder. Cohen's company, Chia Network, secured venture capital funding and developed the Chia software. The network was launched in March 2021, with the promise that users could run it in a "normal apartment." Chia's business white paper assumes that hard disk space is "over-provisioned." However, aspiring Chia farmers bought hard disks in vast quantities, thousands of terabytes at a time, as they only had to spend less money than they expected to make back. During the COVID-19 pandemic, manufacturing supply chains were already disrupted in multiple industries, leading to shortages of many basic components. By April, just a month after launch, Chia farmers were straining the hard disk market, with reports from Hong Kong of large disks having tripled in price.
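The plot-and-challenge scheme above can be caricatured in a short sketch. This is a deliberately simplified model, not Chia's real proof-of-space construction: the `make_plot` and `best_response` helpers and the "closest stored value wins" rule are invented stand-ins. What it shows is the economics: plotting is the expensive one-time step, answering challenges is a cheap lookup, and a farmer with more disk holds the nearest answer proportionally more often.

```python
import bisect
import hashlib

def make_plot(seed: bytes, size: int) -> list[int]:
    """The expensive one-time 'plotting' step: precompute pseudo-random
    values and keep them sorted (on a real farm, written out to disk)."""
    values = (
        hashlib.sha256(seed + i.to_bytes(8, "big")).digest() for i in range(size)
    )
    return sorted(int.from_bytes(v, "big") for v in values)

def best_response(plot: list[int], challenge: int) -> int:
    """The cheap farming step: look up the stored value nearest the challenge."""
    i = bisect.bisect_left(plot, challenge)
    candidates = plot[max(0, i - 1):i + 1] or [plot[-1]]
    return min(candidates, key=lambda v: abs(v - challenge))

# A farmer with 16x the space holds the nearest answer ~16x as often,
# which is why farmers bought disks by the thousands of terabytes.
big, small = make_plot(b"farmer-A", 4096), make_plot(b"farmer-B", 256)
wins = sum(
    abs(best_response(big, c) - c) < abs(best_response(small, c) - c)
    for c in (
        int.from_bytes(hashlib.sha256(n.to_bytes(8, "big")).digest(), "big")
        for n in range(200)
    )
)
print(wins)  # the big farmer wins the large majority of the 200 rounds
```

Note that the farming step burns almost no electricity; the waste moves into the plotting step and the hardware itself, which is exactly the dynamic the article goes on to describe.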
Hard disk shortages and price rises were reported across Southeast Asia and in the United States. Chia's initial plotting process is usually done on a solid-state drive (SSD), such as you'd find in a desktop or laptop. In normal usage, a modern SSD will last over a decade; an SSD that's plotting Chia may burn out in less than six weeks. SSD manufacturers are now refusing to honor warranties on SSDs used for crypto mining. Secondhand SSDs and recently manufactured hard disks can no longer be trusted not to be burnt-out wrecks. In Germany, the popular cloud service Hetzner has banned Chia farming. Instead of carbon dioxide, Chia produces vast quantities of e-waste: rare metals, assembled into expensive computing components, turned into toxic, near-unrecyclable landfill within weeks. Cohen has tweeted that the claim that Chia destroys disks is mostly "just plain wrong," though he ends the tweet thread by effectively admitting that it's true but blaming users for using "consumer SSD," even though Chia's own FAQ states that it can be run on mobile phones or laptops. Chia plotting is heavy on electricity, too: plotting requires arbitrary calculations by a computing device's central processing unit (CPU), an intensive task. Chia's business white paper anticipates farming on "one Raspberry Pi" (a small computer about as powerful as a phone), but in practice Chia plotting requires multiple CPU threads running continuously at close to 100 percent. Chia failed at decentralization for the same reason that bitcoin did: centralization is more efficient. The largest Chia pool, HPool, is winning a large and growing share of Chia farming rewards. Smaller Chia farmers have complained that HPool was given a head start by Chia Network. A large block of Chia coins was created ahead of time and is held by Chia Network, in anticipation of being distributed in the event that Chia Network holds an initial public offering.
Chia ran headlong into the known psychology of cryptocurrency mining: people will do anything that generates a net profit, and damn the externalities. Cryptocurrency mining has also trashed the market for computer video cards. Bitcoin mining uses specialized chips that can only mine bitcoin, but Ethereum and many other "altcoins" that use proof of work are still mined on video cards, as these are well suited to complex numerical computation. With the price of bitcoin in an economic asset bubble, the other coins have gone up as well; so high-end Nvidia video cards are all but unavailable, with prices going through the roof and the cards being snapped up as quickly as possible. The latest Nvidia cards have resorted to drivers (the software that runs the hardware) that detect and block cryptocurrency mining. And, just as with hard disks, secondhand video cards can't be trusted not to be burnt-out wrecks. Almost any service that can do general computation is immediately swarmed by parasitical crypto miners. Continuous integration (CI) systems take computer program source code and build it afresh after every change, to allow quick testing of all changes. Some public CI services used to offer a free tier for small projects, but crypto miners started spamming these services with CPU-based crypto mining. One CI service engineer said: "If we, for example, had a team working on our CI offering, we would have re-allocated at least [a large share] of them to work full-time on combating the miners. And this trend is not slowing, it is only accelerating." Cryptocurrency decentralization is a performative waste of resources in order to avoid having to trust a government to issue currency. But since cryptocurrencies don't actually function as currencies, it just generates new types of otherwise worthless magic beans to sell for real money.
Your system will waste unlimited amounts of whatever resource you're throwing away, and it incentivizes the theft of whatever resources other people can waste to turn into money. Cryptocurrency spews out a country's worth of carbon dioxide and mountains of toxic e-waste, makes basic computing hardware that could be used for productive purposes unavailable, and destroys any sort of commons that someone might want to offer the world if general computation could be done on it. Decentralized cryptocurrencies are a cyberpunk parody of unregulated capitalism. They are a disastrous resource drain on the world, by design. The designers look only for fresh resources to abuse. The only functional purpose of decentralized cryptocurrencies is to further idiosyncratic bitcoin economic ideas that don't work, in the hope of making money from speculation. Every cryptocurrency is a new form of waste, and the only way to stop that is to stop cryptocurrencies.

David Gerard is the author of the book Attack of the 50 Foot Blockchain and the cryptocurrency and blockchain news blog of the same name. His new book is Libra Shrugged: How Facebook Tried to Take Over the Money.
The WELL (from Wikipedia, the free encyclopedia)

Whole Earth 'Lectronic Link
Type of site: virtual community
Available in: English
Owner: The Well Group Inc.
URL: well.com
Launched: February 1985

The Whole Earth 'Lectronic Link, normally shortened to The WELL, is one of the oldest virtual communities in continuous operation. It is best known for its internet forums, but also provides email, shell accounts, and web pages. The discussion and topics on The WELL range from deeply serious to trivial, depending on the nature and interests of the participants.

History

The WELL was started by Stewart Brand and Larry Brilliant in 1985, and the name (an acronym for Whole Earth 'Lectronic Link) is partially a reference to some of Brand's earlier projects, including the Whole Earth Catalog.
initially the well was owned % by the point foundation (publishers of the whole earth catalog and whole earth review) and % by neti technologies inc., a vancouver-based company of which larry brilliant was at that time the chairman. the well began as a dial-up bulletin board system (bbs) influenced by eies,[ ] became one of the original dial-up isps in the early s when commercial traffic was first allowed, and changed into its current form as the internet and web technology evolved. its original management team—matthew mcclure, soon joined by cliff figallo and john coate—collaborated with its early users to foster a sense of virtual community.[citation needed] gail ann williams was hired by figallo in as community manager, and has continued in management roles into the current era. from to the well was owned by bruce r. katz, founder of rockport, a manufacturer of walking shoes.[ ] in april it was acquired by salon, several of whose founders, such as scott rosenberg, had previously been regular participants there. in august salon announced that it was looking for a buyer for the well, in order to concentrate on other business lines. in november , a press release from the well said, "as salon has not found a suitable purchaser, it has determined that it is currently in the best interest of the company to retain this business and has therefore suspended all efforts to sell the well."[ ] in june salon once again announced that it was looking for a buyer for the well, as its subscriber base "did not bear financial promise". additionally, it announced that it had entered into discussions with various parties interested in buying the well.com domain name, and that the remaining well staff had been laid off at the end of may.[ ] the community pledged money to take over the well itself and rehire important staff.[ ] in september , salon sold the well to a new corporation, the well group inc., owned by a group of eleven investors, all of them long-time members.
the ceo was earl crabb, who died on february , . the sale price was reported to be $ , . members have no official role in the management, but "can ... go back to what they do best: conversation. and complaining about the management."[ ][ ] notable items in well history include being the forum through which john perry barlow, john gilmore, and mitch kapor, the founders of the electronic frontier foundation, first met. howard rheingold, an early and very active member, was inspired to write his book the virtual community by his experience on the well. according to rheingold's book, the well's usenet feed was for years provided by apple computer over uucp. the well was a major online meeting place for fans of the grateful dead, especially those who followed the band from concert to concert, in the late s and early s. the well also played a role in the book takedown, about the pursuit and capture of kevin mitnick. founded in sausalito, california, the service is now based in emeryville.

topics of discussion

the well is organized into subject areas known as "conferences". these conferences reflect member interests, and include arts, health, business, regions, hobbies, spirituality, music, politics, games, software and many more. within conferences, members open separate conversational threads called "topics" for specific items of interest. for example, the media conference has (or had) topics devoted to the new york times, media ethics, and the luann comic strip. an example of a local conference is the one on san francisco, which has topics on restaurants, the city government, and neighborhood news. "public" conferences are open to all members, while "private" conferences are restricted to a list of users controlled by the conference hosts, called the "ulist".
some "featured private" or "private independent" conferences (such as "women on the well" and "recovery") are listed in the well's directory, but are access-restricted for privacy or membership-restriction reasons. members may request admission to such conferences. there are also a large number of unlisted "secret private" conferences. the names of these conferences are public, but the contents, hosts, and members are restricted to members of a particular conference. membership in private conferences is by invitation. well members may open their own new public or private independent conferences.

policy and governance

the directors of the well have included matthew mcclure and cliff figallo, both veterans of the s commune called the farm, and gail williams, previously known as one of the principals in the political satire group the plutonium players. in , the well hired christian ruzich and daryl lynn johnson, who have over years of combined experience on the well, to be the general managers. the couple, who met on the well, will draw on their years of marketing and online community experience to help the well become the prime destination for premium online conversation and discussion. the community forums, known as "conferences", are supervised by "conference hosts" who guide conversations and may enforce conference rules on civility and/or appropriateness. initially all hosts were selected by staff members. in , gail williams changed the policies to enable user-created forums. participants can create their own "independent" personal conferences—either viewable by any well member or privately viewable by those members on a restricted membership list—on any subject they please, with any rules they like. overall support and supervision of the conferencing services is handled by several staff members, often referred to collectively as "confteam", the name of the unix user account used by staff for conference maintenance.
they have more system operational powers than conference hosts, along with the additional social authority of selecting "featured conference" hosts and closing accounts for abuse. well members use a consistent login name when posting messages, and a non-fixed pseudonym field alongside it. the "pseud" (in well parlance) defaults to the user's real name, but can be changed at will, and so often reflects a quotation from another user, is an in-joke, or may be left blank. the user's real name can be easily looked up using their login name. well members are not anonymous. there is a time-honored double meaning to the well slogan coined by stewart brand, "you own your own words" ("yoyow"): members have both the rights to their posted words and the responsibility for those words, too. (members can also delete their posts at any time, but a placeholder indicates the former location and author of a deleted or "scribbled" post, as well as who deleted it.)

journalists

the well was frequently mentioned in the media in the s and s, probably disproportionately to the number of users it had relative to other online systems. this has diminished but not disappeared in recent years, with other online communities becoming commonplace. this early visibility was largely the result of the early policy of providing free accounts for interested journalists and other select members of the media. as a result, for many journalists it was their first experience of online systems and, later, the internet, even though other systems existed. although accounts are now seldom provided for free to journalists, there are still a sizable number on the well; for example, columnist jon carroll of the san francisco chronicle, wendy m. grossman of the inquirer, and critic andy klein of los angeles citybeat. the well also received numerous awards in the s and s, including a webby award for online community in , and an eff pioneer award in .
in the news

in march , the well was noted for refusing membership to kevin mitnick, and for refunding his membership fee.[ ]

virtual community and social network difference

there is often confusion between a virtual community and a social network. they are similar in some respects, because both can be used for personal and professional interests. a social network offers an opportunity to connect with people one already knows or is acquainted with. facebook and twitter are social networks. platforms such as linkedin and yammer open up communication channels among coworkers and peers with similar professions in a more relaxed setting. often, social media guidelines are in place for professional usage so that everyone understands what is suitable online behavior.[ ][ ] using a social network is an extension of an offline social community. it is helpful in keeping connections among friends and associates as locations change. each user has their own spider-web structure, which is their social network.[ ][ ] virtual communities differ in that users aren't connected through a mutual friend or similar backgrounds. these groups are formed by people who may be complete strangers but have a common interest or ideology.[ ][ ] virtual communities connect people who normally wouldn't consider themselves to be in the same group.[ ] these groups stay relevant and maintained in the online world because users feel a need to contribute to the community, and in return feel empowered when receiving new information from other members. virtual communities have an elaborate nested structure because they overlap. yelp, youtube, and wikipedia are all examples of virtual communities.
companies like kaiser permanente launched virtual communities for members. the community gave members the ability to control their health care decisions and improve their overall experience.[ ] members of a virtual community are able to offer opinions and contribute helpful advice. again, the difference between virtual communities and social networks is the emergence of the relationship. the well distinguished itself from the technology of the time by creating a networked community for everyone. users were responsible for and owned the content they posted, a rule created to protect the information from being copyrighted and commoditized.[ ] women particularly were able to find community and voice on the well. while largely bound to household work at the time, women of the well could be participants and contributors on message boards by sharing experiences and information.[ ]

publications about the well

howard rheingold, the virtual community ( ), perennial, isbn - - - (hardcover); isbn - - - ( revised paperback edition)
john seabrook, deeper: my two-year odyssey in cyberspace ( ), simon & schuster, isbn - - - (hardcover); isbn - - - (paperback)
katie hafner, the well: a story of love, death and real life in the seminal online community ( ), carroll & graf publishers, isbn - - -. katie hafner's book, expanded from a wired magazine article, chronicles the odd birth, growing pains, and interpersonal dynamics that make the well the unusual, perhaps unique online community that it is.
fred turner, from counterculture to cyberculture: stewart brand, the whole earth network, and the rise of digital utopianism ( ), university of chicago press, isbn - - -
"where the counterculture met the new economy: the well and the origins of virtual community", technology and culture, vol. , no. (july ), pp. – .
tierney, john. "stewart brand - an early environmentalist, embracing new 'heresies'". the new york times, february , , sec. environment.
https://www.nytimes.com/ / / /science/earth/ tier.html.
kirk, andrew. "appropriating technology: the whole earth catalog and counterculture environmental politics". environmental history, – , .
kirk, andrew. counterculture green: the whole earth catalog and american environmentalism. lawrence, kan: university press of kansas, .

see also

william h. calvin
cix
cyberia (book)
hugh daniel
digerati
brian eno
global business network
michael gruber (author)
peter ludlow
tom mandel
declan mccullagh
douglas rushkoff
john seabrook
gail williams

references

^ pernick, ron ( ). "a timeline of the first ten years of the well". retrieved - - .
^ "the well, a pioneering online community, is for sale again". the new york times, june , .
^ "well, the". encyclopædia britannica ultimate reference suite. encyclopædia britannica. isbn .
^ "learn about the well". well.com.
^ "irc history -- electronic information exchange system (eies)".
^ markoff, john (january , ). "company news; influential computer service sold". the new york times. retrieved july .
^ "the well to stay with salon" (press release). the well. november , . retrieved - - .
^ "salon k filing, june ". retrieved - - .
^ "will the well survive? members pledge $ k+ to buy influential virtual community from corporate owners". retrieved - - .
^ "salon media group sells the well to the well group". archived - - at the wayback machine.
^ grossman, wendy. "salon sells the well to its members". retrieved july .
^ "kevin mitnick is unforgiven". wired, march , .
^ mahlberg, t. ( ). alter-identity work via social media in professional service contexts. in proceedings of the th australasian conference on information systems (acis ).
^ "what is social network? - definition from whatis.com". searchcio. retrieved - - .
^ clauset, a. ( ). finding local community structure in networks. physical review e, ( ), .
^ a b "social network vs. online community: what is the difference?". social media today. retrieved - - .
^ wellman, b., & gulia, m.
( ). virtual communities as communities. communities in cyberspace, - .
^ a b "examples of virtual communities". encyclopedia.jrank.org. retrieved - - .
^ a b turner, fred. from counterculture to cyberculture. university of chicago press. isbn - . oclc .

external links

the well
"the well gopher". archived from the original on - - . retained in as a text museum and served via http till around .
wired news: "salon buys the well"
wired magazine: "the epic saga of the well" by katie hafner
"the well: small town on the internet highway system" by cliff figallo
c|net news.com: "the well celebrates th birthday" at archive.today (archived - - )
net wars at the inquirer: "you own your own th anniversary"
c|net news.com: "salon places the well up for sale" at archive.today (archived - - )

what i learned today…

taking a break

i'm sure those of you who are still reading have noticed that i haven't been updating this site much in the past few years. i was sharing my links with you all, but now delicious has started adding ads to that. i'm going to rethink how i can use this site effectively going forward. for […]

bookmarks for may ,

today i found the following resources and bookmarked them on delicious.
start a fire: grow and expand your audience by recommending your content within any link you share
digest powered by rss digest

bookmarks for april ,

today i found the following resources and bookmarked them on delicious.
mattermost: mattermost is an open source, self-hosted slack-alternative
mblock: program your app, arduino projects and robots by dragging & dropping
fidus writer: fidus writer is an online collaborative editor especially made for academics who need to use citations and/or formulas.
beek: social network for […]

bookmarks for february ,

today i found the following resources and bookmarked them on delicious.
connfa: open source ios & android app for conferences & events
paperless: scan, index, and archive all of your paper documents
foss serve: foss serve promotes student learning via participation in humanitarian free and open source software (foss) projects.
disk inventory x: disk inventory x is […]

bookmarks for january ,

today i found the following resources and bookmarked them on delicious.
superpowers: the open source, extensible, collaborative html d+ d game maker
sequel pro: sequel pro is a fast, easy-to-use mac database management application for working with mysql databases.
digest powered by rss digest

bookmarks for december ,

today i found the following resources and bookmarked them on delicious.
open broadcaster software: free, open source software for live streaming and recording
digest powered by rss digest

bookmarks for november ,

today i found the following resources and bookmarked them on delicious.
numfocus foundation: numfocus promotes and supports the ongoing research and development of open-source computing tools through educational, community, and public channels.
digest powered by rss digest

bookmarks for november ,

today i found the following resources and bookmarked them on delicious.
smore: smore makes it easy to design beautiful and effective online flyers and newsletters.
ninite: install and update all your programs at once
digest powered by rss digest

bookmarks for november ,

today i found the following resources and bookmarked them on delicious.
vim adventures: learning vim while playing a game
digest powered by rss digest

bookmarks for november ,

today i found the following resources and bookmarked them on delicious.
star wars: building a galaxy with code
digest powered by rss digest

bookmarks for october ,

today i found the following resources and bookmarked them on delicious.
open food facts: open food facts gathers information and data on food products from around the world.
digest powered by rss digest

bookmarks for october ,

today i found the following resources and bookmarked them on delicious.
versionpress: wordpress meets git, properly. undo anything (including database changes), clone & merge your sites, maintain efficient backups, all with unmatched simplicity.
digest powered by rss digest

bookmarks for october ,

today i found the following resources and bookmarked them on delicious.
sogo: share your calendars, address books and mails in your community with a completely free and open source solution. let your mozilla thunderbird/lightning, microsoft outlook, android, apple ical/iphone and blackberry users collaborate using a modern platform.
gitbook: gitbook is a modern publishing toolchain. making […]

bookmarks for october ,

today i found the following resources and bookmarked them on delicious.
discourse: discourse is the % open source discussion platform built for the next decade of the internet. it works as a mailing list, a discussion forum, and a long-form chat room
digest powered by rss digest

bookmarks for september ,

today i found the following resources and bookmarked them on delicious.
zulip: a group chat application optimized for software development teams
digest powered by rss digest

bookmarks for september ,

today i found the following resources and bookmarked them on delicious.
idonethis: reply to an evening email reminder with what you did that day. the next day, get a digest with what everyone on the team got done.
digest powered by rss digest

bookmarks for september ,

today i found the following resources and bookmarked them on delicious.
vector: vector is a new, fully open source communication and collaboration tool we’ve developed that’s open, secure and interoperable. based on the concept of rooms and participants, it combines a great user interface with all core functions we need (chat, file transfer, voip and […]

bookmarks for september ,

today i found the following resources and bookmarked them on delicious.
roundcube: free and open source webmail software
bolt: bolt is an open source content management tool, which strives to be as simple and straightforward as possible. it is quick to set up, easy to configure, uses elegant templates, and above all: it's a joy […]

bookmarks for september ,

today i found the following resources and bookmarked them on delicious.
madeye: madeye is a collaborative web editor backed by your filesystem.
digest powered by rss digest

bookmarks for september ,

today i found the following resources and bookmarked them on delicious.
gimlet: your library's questions and answers put to their best use. know when your desk will be busy. everyone on your staff can find answers to difficult questions.
digest powered by rss digest

bookmarks for september ,

today i found the following resources and bookmarked them on delicious.
thimble by mozilla: thimble is an online code editor that makes it easy to create and publish your own web pages while learning html, css & javascript.
google coder: a simple way to make web stuff on raspberry pi
digest powered by rss digest

bookmarks for august ,

today i found the following resources and bookmarked them on delicious.
mediagoblin: mediagoblin is a free software media publishing platform that anyone can run. you can think of it as a decentralized alternative to flickr, youtube, soundcloud, etc.
the architecture of open source applications
a web whiteboard: a web whiteboard is a touch-friendly online whiteboard app […]

bookmarks for august ,

today i found the following resources and bookmarked them on delicious.
computer science learning opportunities: we have developed a range of resources, programs, scholarships, and grant opportunities to engage students and educators around the world interested in computer science.
digest powered by rss digest

bookmarks for august ,

today i found the following resources and bookmarked them on delicious.
pydio: the mature open source alternative to dropbox and box.net
digest powered by rss digest

bookmarks for july ,

today i found the following resources and bookmarked them on delicious.
hylafax: the world's most advanced open source fax server
digest powered by rss digest

winkle: foiling long-range attacks in proof-of-stake systems

sarah azouvi, university college london, protocol labs
george danezis, university college london, facebook novi
valeria nikolaenko, facebook novi

abstract

winkle protects any validator-based byzantine fault tolerant consensus mechanism, such as those used in modern proof-of-stake blockchains, against long-range attacks in which old validators' signature keys get compromised. winkle is a decentralized secondary layer of client-based validation, where a client includes a single additional field in each transaction that they sign: a hash of the previously sequenced block. the block that gets a threshold of signatures (confirmations), weighted by clients' coins, is called a "confirmed" checkpoint. we show that under plausible and flexible security assumptions about clients, confirmed checkpoints cannot be equivocated. we discuss how client key rotation increases security, how to accommodate the minting of coins, and how delegation allows for faster checkpoints. we evaluate checkpoint latency experimentally using bitcoin and ethereum transaction graphs, with and without delegation of stake.

acm reference format: sarah azouvi, george danezis, and valeria nikolaenko. winkle: foiling long-range attacks in proof-of-stake systems. in proceedings of acm advances in financial technologies (aft ' ), october – , new-york ' , pages. https://doi.org/ . /nnnnnnn.nnnnnnn

introduction

a number of blockchains are considering proof-of-stake mechanisms in place of proof-of-work, attracted by faster and deterministic finality as well as lower energy costs. in proof-of-stake blockchains, a set of validators run a consensus protocol among themselves and agree on the next block of ordered transactions by collectively signing the block. such protocols rely on the long-term security of validators' signature keys, and a compromise of validators' past keys threatens full auditability through long-range attacks [ ]. a long-range attack is considered successful if an adversary is able to create an alternative chain of transactions, starting from the same genesis block, that cannot be distinguished from the real chain. a number of solutions have been proposed to foil long-range attacks, such as publishing checkpoints off-chain in software updates or in a reputable archive (such as the front page of a national newspaper). those solutions are difficult to deploy without introducing a highly unsatisfactory vector of centralization.
validator key rotations help alleviate the problem, assuming secure destruction of older keys. however, validators might have auxiliary incentives to sell their old keys to an adversary, especially when the real-world identities of validators are unknown in a permissionless system and reputation is not at risk. when dishonest behaviour of a validator becomes rational, the real-world security of the whole system is at great risk. we notice that corrupting a significant number of coin holders, even after they have no more stake in the system, is far more challenging, as they are much more numerous than validators (we justify this assumption in section ). this observation brings us to winkle, a novel mechanism that leverages votes from clients, creating a decentralized secondary layer of client-based validation to confirm checkpoints (snapshots of the blockchain) and to prevent long-range attacks on proof-of-stake protocols. the voting mechanism is very simple: each client augments their transaction with a single additional field, a hash of a previously sequenced block. once this transaction is signed by the client and submitted to the chain, it serves as a vote or a confirmation for that block, weighted by the number of coins the client holds under their account. the block that gets a threshold of confirmations is called a "confirmed" checkpoint. we show that under plausible and flexible security assumptions, confirmed checkpoints cannot be equivocated and serve as irreversible snapshots of the blockchain.

our contributions. we design winkle to strengthen consensus protocols with dynamically changing validators against long-range attacks (sec. - ).
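the confirmation rule described above, where each transaction carries a hash of an earlier block and confirmations are weighted by the sender's coins, can be sketched as a toy tally. this is a minimal illustration under assumed simplifications, not the paper's algorithm: the quorum fraction is a made-up parameter, and the counting of votes toward ancestor blocks on the same chain is ignored.

```python
from collections import defaultdict

QUORUM_FRACTION = 2 / 3  # assumed threshold; the paper parameterizes this bound

def tally_votes(votes: dict, stakes: dict, total_stake: int):
    """votes: client -> hash of the block that client embedded in its
    latest signed transaction.
    stakes: client -> coins held (the client's voting weight).
    Returns a block hash whose weighted confirmations reach the quorum,
    or None if no checkpoint is confirmed yet."""
    weight = defaultdict(int)
    for client, block_hash in votes.items():
        weight[block_hash] += stakes[client]
    for block_hash, w in weight.items():
        if w >= QUORUM_FRACTION * total_stake:
            return block_hash  # a "confirmed" checkpoint
    return None
```

in the real protocol a vote for a block also supports that block's ancestors; a faithful implementation would accumulate weight along chain prefixes rather than per exact hash.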
though winkle can secure other systems as well, it is mainly applicable to blockchains based on byzantine fault tolerant (bft) consensus such as pbft [ ], librabft [ ], tendermint [ ], hotstuff [ ], or sbft [ ]. our solution does not require unincentivized validators to remain honest. instead, winkle allows each coin holder to augment a transaction with a vote for a previous block. we prove that after a critical mass of coins has voted for a block, the block becomes a "confirmed checkpoint" and cannot be equivocated even if the validators who worked on constructing this block have leaked their keys to the adversary. furthermore, in systems protected by winkle, an adversary enacting a long-range attack and building a forking chain cannot freely replicate transactions from honest users, since those commit to checkpoints on the real chain. in systems with probabilistic finality (not typical proof-of-stake systems) this mechanism also gives protection against replay attacks, as was previously observed in [ ]. we give plausible and flexible security assumptions (sec. ), showing trade-offs between the fraction of byzantine accounts and the quorum size. we notice that the assumptions are far more flexible than the byzantine bounds in the bft protocols, e.g. if the disconnected users constitute a negligible fraction, the quorum size in winkle can be just slightly above the bound on the byzantine accounts. our assumptions are also tuned to key rotations, allowing accounts to recover following a compromise. we put forward a definition for the long-term security of the validator-based consensus protocol (sec. ) and prove that winkle satisfies the definition, overcoming the challenges of weighted-by-stake voting in the presence of constantly moving weights.
we introduce a delegation mechanism, where less active accounts can delegate their stake to more active accounts, in order to facilitate faster confirmation of checkpoints (e.g. a cold wallet could delegate to a hot wallet). we discuss how to safeguard the minting of coins. finally, we simulate winkle on real-world datasets of bitcoin and ethereum and evaluate checkpointing delays with and without delegates (sec. ), showing that a block can be confirmed within several hours to a few days, allowing validators to safely leak their old keys after a few days of use. we discuss related work, other applications and future research directions (sec. - ).

background and research question

an account-based blockchain model. a blockchain maintains an evolving decentralized database that keeps track of the ownership of assets, and allows their transfer according to rules encoded in transactions. we represent the database, at any consistent point, as a key-value store, which maps account addresses to account states: db = {(a_j, state_j) | j = 1, 2, ..., n}, where a_j is the account address (a binary string), state_j ∈ s is the value under the account, and n is the number of accounts. the account state holds the following values:
- pk: a public key for signature verification,
- seq: an incrementing sequence number that prevents replay attacks,
- value: a number of coins, which maps to the "voting power".
the account's state may also hold other meta-data or auxiliary information. to simplify notation, we write db[a] to denote account a's state and use field notation to represent its values: db[a].pk, db[a].seq, db[a].value. we denote by db.n the number of accounts in the database and by db.s_tot the total number of coins; both numbers may change with modifications to the database, i.e. when accounts and coins get created or destroyed.

transactions. the database evolves, from one consistent state to the next, via processing transactions.
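the db[a].pk / db[a].seq / db[a].value notation above can be mirrored in a small sketch. this is a minimal illustration under assumed types; the paper does not prescribe a concrete encoding, and names such as AccountState are mine.

```python
from dataclasses import dataclass

@dataclass
class AccountState:
    """State stored under an account address: (pk, seq, value)."""
    pk: bytes   # public key for signature verification
    seq: int    # incrementing sequence number (replay protection)
    value: int  # number of coins, i.e. the account's voting power

# The database DB is a key-value store from addresses to states.
db = {
    b"alice": AccountState(pk=b"pk_a", seq=0, value=50),
    b"bob":   AccountState(pk=b"pk_b", seq=0, value=30),
}

def total_stake(db: dict) -> int:
    """DB.s_tot: the total number of coins across all accounts."""
    return sum(state.value for state in db.values())

def num_accounts(db: dict) -> int:
    """DB.n: the number of accounts in the database."""
    return len(db)
```

both db.n and db.s_tot are derived quantities here, consistent with the text's remark that they change as accounts and coins are created or destroyed.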
a transaction (tx) comprises the following data:
- sender: the address of the transaction’s sender,
- seq: a sequence number that should match the sender’s account seq plus one,
- program: an efficiently computable deterministic function that mutates the state of the database, i.e. program(db) → db′,
- σ: a cryptographic signature over the previous fields.
to clarify exposition we represent those values as tx.sender, tx.seq, tx.program, tx.σ. furthermore, we assume that we can derive from tx.program a list of accounts that receive coins, denoted tx.raddrs, and the corresponding amounts. in this work we principally consider programs associated with asset transfers implementing a monetary system, relevant to cryptocurrencies, but we also allow transactions mutating other meta-data of user accounts (e.g. for key rotation and delegation, introduced later).
validator based consensus with reconfiguration. we abstract a validator based consensus with reconfiguration (vbcr) as a mechanism that provides a chain of collectively signed blocks on which consensus has been reached. in this work a block refers to a long sequence of transactions (e.g. a day of activity, or an epoch of a different duration), rather than a short-term block within the low-level consensus protocol. the first block (the genesis block) determines the initial state of the database: b0 = db0. a chain of blocks (b0, b1, . . .) modifies the state of the database. each block bi for i > 0 is of the form bi = [hi−1, ti, σi], where hi−1 is the hash of the previous block, i.e. hi−1 = h(bi−1); ti is an ordered list of transactions and σi is a signature over (hi−1, ti). given a genesis state db0 and the chain of blocks, we define recursively dbi = ti(dbi−1); here dbi is the result of applying the programs in the transaction list ti one by one, in the given order, to dbi−1. the system is governed by the consensus protocol run between the validators. the validator set is determined by the set of verification keys: v = {pkk | k = 1, . . . , |v|}.
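the hash-chained block structure bi = [hi−1, ti, σi] can be illustrated with a toy sketch; hashing is real (sha-256 over a canonical encoding) but signatures are placeholder strings, and all names are ours:

```python
# a toy sketch of blocks b_i = [h_{i-1}, t_i, sigma_i] and the chaining check.
import hashlib
import json

def h(block):
    # hash over a canonical JSON encoding of the block
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

genesis = {"prev": None, "txs": [], "sig": "genesis"}
b1 = {"prev": h(genesis), "txs": ["tx1", "tx2"], "sig": "sig1"}
b2 = {"prev": h(b1), "txs": ["tx3"], "sig": "sig2"}

def well_chained(chain):
    # each block must commit to the hash of its predecessor
    return all(chain[i]["prev"] == h(chain[i - 1]) for i in range(1, len(chain)))

print(well_chained([genesis, b1, b2]))   # True
b1["txs"].append("forged")               # tamper with history...
print(well_chained([genesis, b1, b2]))   # False: b2.prev no longer matches h(b1)
```

the tampering example shows why, absent leaked validator keys, history cannot be rewritten without breaking the hash chain; the long-range attacks discussed below arise precisely because an adversary with old keys can re-sign a *different* well-chained history.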
each signature key skk is private to the k-th validator. we assume there is a function gov(·) on the previous block (or on the genesis block initially) that returns the current set of validators: vi = gov(bi−1). typically, we consider the signature σi in the block bi = [hi−1, ti, σi] as valid if a certain threshold of the validators vi = gov(bi−1) have contributed to the aggregate signature σi. for each block bi we define a function seq(·) which returns the height of the block, namely the number of blocks preceding it up to the genesis block b0; the function implicitly takes the chain of blocks as input. by convention, we use a subscript of the block to denote the height, e.g. i = seq(bi).
chain validation. a public verification function validate(·, ·) defines a predicate that takes a chain of blocks as a first argument and a new block as a second argument: validate(b, bi+1). the output indicates whether the block bi+1 can be successfully chained with the previous blocks b = (b0, . . . , bi). the predicate encodes the business logic of the state machine describing the blockchain. a chain of blocks b = (b0, b1, . . . , bi) is valid if either b = (b0) or the recursively computed predicate is true, i.e. ∀i > 0, validate((b0, . . . , bi−1), bi) = true. auditing any chain of blocks b would start at block b0, append the blocks one by one and test the validity of the chain incrementally. the predicate validate checks the validators’ signature on the new block, checks the transactions’ signatures within the new block and applies the transactions one by one to the database state, checking the sequence numbers to prevent replay attacks. in more detail, taking as input b = (b0, . . . , bi) and bi+1, the predicate assumes that ∀j ≤ i, validate((b0, . . . , bj−1), bj) = true and that h(bi) = hi. let the state of the database after applying all the transactions from the chain of blocks b in sequence be dbi. let bi+1 = [hi, t, σ].
winkle: foiling long-range attacks in proof-of-stake systems · new-york ’ , october – ,

first, the validators’ public keys are retrieved from the previous block: pki+1 = gov(bi), and the signature σ is verified under that key: verify(pki+1, (hi, t), σ). if the verification fails, verify outputs false and halts; otherwise it continues. transactions are processed from the list t = (t1, . . . , tk) in order. let db^(0)_i = dbi, and for j = 1, . . . , k we verify that: (1) the sequence number is equal to the value under the account plus one: tj.seq = db^(j−1)_i[tj.sender].seq + 1, (2) the signature tj.σ verifies under the public key stored under the sender’s account (db^(j−1)_i[tj.sender].pk), (3) additional checks can be applied to validate the transaction depending on the business logic of the blockchain, encoded in the transaction program. if any of the transactions fails at least one check then the procedure returns false and halts. if transaction tj passes the checks successfully, it gets applied to the database to advance its state: db^(j)_i = tj.program(db^(j−1)_i), and the sequence number under the account is incremented by one. if the transaction was not applied successfully, the procedure outputs false and halts. (note that in the vbcr abstraction, block boundaries capture events of validator-set change, as those are relevant to long-range attacks.)
safety of vbcrs. we say that the chain validation audit is safe if and only if any two parties that successfully completed the chain validation procedure have a consistent view of the chain of blocks (i.e., the chains are equivalent or the chain of one party is a subchain of the other party’s chain).
definition (perpetually honest validator). a perpetually honest validator follows the protocol, maintains the secrecy of their signing keys in perpetuity (the adversary may never have access to them), and only signs a single block at each height.
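the per-transaction checks inside validate can be sketched in a few lines of python. this is an illustrative fragment, not the paper's code: signature verification is abstracted into a callback, and the dictionary layout is ours.

```python
# a minimal sketch of the per-transaction checks inside validate(B, b_{i+1}).
def apply_block(db, txs, verify_sig):
    """Apply transactions t_1..t_k in order; return False on any failed check."""
    for tx in txs:
        sender = db.get(tx["sender"])
        if sender is None:
            return False
        # (1) sequence number must equal the account's seq plus one
        if tx["seq"] != sender["seq"] + 1:
            return False
        # (2) signature must verify under the key stored in the sender's account
        if not verify_sig(sender["pk"], tx):
            return False
        # (3) business-logic checks / state mutation via the transaction program
        tx["program"](db)
        sender["seq"] += 1
    return True

db = {"a1": {"pk": "pk1", "seq": 0, "value": 5},
      "a2": {"pk": "pk2", "seq": 0, "value": 0}}

def transfer(db):  # a toy program moving 2 coins from a1 to a2
    db["a1"]["value"] -= 2
    db["a2"]["value"] += 2

ok = apply_block(db, [{"sender": "a1", "seq": 1, "program": transfer}],
                 verify_sig=lambda pk, tx: True)  # signatures stubbed out
print(ok, db["a1"]["value"], db["a2"]["value"])  # True 3 2
```

the sequence-number check at step (1) is what gives the replay protection mentioned above: re-submitting the same transaction fails because the account's seq has already advanced.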
if a sufficient number of validators are perpetually honest in keeping all their keys (gov(bi)) unknown to the adversary in perpetuity, then the verification procedure ensures audit safety.
long-range attacks on vbcrs. the safety of the chain validation audit described above depends on validators being perpetually honest. this imposes a heavy burden on the security of the validators’ signature keys: they need to remain secret forever, otherwise the validity of the sequence cannot be safely audited. this motivates us to define eventually compromised validators as follows:
definition (eventually compromised validator). an eventually compromised validator, after a block bi, leaks all its previous signature keys {ski′ | 0 < i′ < i} to the adversary.
any vbcr protocol becomes unauditable under eventually compromised validators. for a block bi and any block bi′ in the history of bi (i′ < i), eventually the adversary will compromise a sufficient number of validators and sign an alternative block to follow bi′ with a different set of transactions, defeating the audit safety property. this problem is referred to in the literature as a long-range attack, and winkle strengthens validator based consensus with reconfiguration systems against this type of attack. we note that the assumption that eventually compromised validators provide all signing keys before height i represents a strong adversary, and subsumes any adversary that may have access to only a subset of past keys. the compromise of old keys in bft or proof-of-stake consensus protocols is particularly devastating, as creating an alternative chain is computationally inexpensive.
the winkle mechanism
definitions. without loss of generality we assume that the set of validator keys may change at each block. each block defines a checkpoint that coin holders can vote to confirm in the future, as illustrated (see fig. ).
a checkpoint is a binding commitment to a chain of blocks sequenced by the vbcr: ckpti = h(b), where b is the last block the checkpoint points to. we abuse notation and define the height, written as a subscript of a checkpoint, as being equal to the height of the block it commits to. in turn this height defines a global order on all checkpoints referring to a sequence of blocks produced by a secure vbcr. we call two checkpoints ckpti, ckptj consistent if they commit to blocks bi, bj such that bi is an ancestor of bj (we write ckpti < ckptj) or vice versa. checkpoints that are not consistent commit to blocks that are on different sides of a fork and therefore cannot be part of a sequence produced by a safe vbcr. we assume everybody agrees on the genesis checkpoint: ckpt0 = h(b0).
basic winkle scheme
checkpoint voting. each account may vote for a checkpoint by including it in a signed transaction, and every coin carries the vote of its sender (we call this the propagation rule). the vote is also associated with the remaining value in the account. for example, if an account a1 holds x coins and sends one coin to an account a2 with a vote for checkpoint ckpt, then there is a system-wide vote with weight x for checkpoint ckpt: x − 1 of which is associated with a1 and 1 with a2. this also means that an account could have different votes for different coins held, e.g., if a received one coin from account a1 with a vote for ckpt and one coin from another account a2 that voted for a previous checkpoint ckpt′, then a carries one vote for ckpt and one vote for ckpt′. whenever a sends a new vote, all of the coins in the account are counted towards that new vote, and the previous votes are over-written. an account sending a transaction has to vote on the latest available checkpoint, otherwise its transaction will not get sequenced. as we show, there cannot be any two inconsistent checkpoints within one account due to the new chain validation rules.
more formally, we define a weighted vote as a tuple wvote = (ckpt, w), where ckpt is a checkpoint and w is a positive real number. we augment transactions and accounts with additional information: (1) we augment each transaction tx with a parameter tx.vote that we call a checkpoint vote: tx.vote = ckpt; (2) we change the structure of the database: for each account a, in place of the value db[a].value we now store a set of weighted votes db[a].votes. accounts managed by honest and active users follow some constraints: (1) all their votes are for pairwise consistent checkpoints; and (2) each of their transactions always contains a vote for the latest available checkpoint. condition (2) introduces a synchrony assumption, but this should not be a bottleneck: first, we assume blocks are abstractions of long periods of consensus under the same set of validators (e.g. many hours or days); and, second, clients need to be aware of the latest checkpoint since they anyway need to know the latest validator set (from the previous block). the set is required in order to know which entities to direct transactions to, and also to be able to authenticate reads from the latest state of the database.
chain validation. we augment the predicate validate, executed on input b = (b0, . . . , bi) and bi+1, with an additional check on each of the transactions in block bi+1 (see the ‘chain validation’ rules in section ). each transaction txj in bi+1 for j = 1, . . . must contain a vote on ckpti, where ckpti = h(bi). if the transaction passes the

figure: each block commits the full sequence of previous blocks, and forms a potential checkpoint that clients vote to confirm. validator membership and keys may change across blocks (but not within them).
block bi is signed by the key pki−1 = gov(bi−1), and the key pki = gov(bi) signs bi+1.

checks, given the previous state of the database db^(j−1)_i (with db^(0)_i = dbi), the next state of the database is the same except for the following changes. (1) for each of the receiving accounts a ∈ txj.raddrs that obtains some amount v in transaction txj: if the vote is already in the set, i.e. (txj.vote, w) ∈ db^(j−1)_i[a].votes for some w, then v is added to the weight w of the existing element; otherwise a new tuple (txj.vote, v) is added to the set db^(j−1)_i[a].votes. (2) for the sending account a (txj.sender = a), the set db^(j−1)_i[a].votes is squashed into a single element (txj.vote, v), where v is the sum of all the values in the set db^(j−1)_i[a].votes minus the value sent in the transaction txj. we define the highest confirmed checkpoint as a function highestckpt(db) computed over all accounts in a database db. it represents the highest checkpoint ckptk (k ≤ i) which precedes the (weighted) checkpoint votes of a fraction q of accounts, i.e. Σ_{a ∈ db} Σ_{wvote ∈ db[a].votes} i(wvote.ckpt ≥ ckptk) · wvote.w ≥ q × db.stot, where the indicator function i(wvote.ckpt ≥ ckptk) takes value 1 if the vote is for a checkpoint at least as high as ckptk and consistent with it, and takes value 0 otherwise. the highest confirmed checkpoint of the database, highestckpt(db), is the one with the largest height that is supported by a fraction q of the total stake.
validator-free full audit. to validate a confirmed checkpoint, associated with a sequence of transactions t leading to it, a client performs the following steps: the client starts the validation process at the genesis state db0, which needs to be known and authentic. the client then applies each transaction in the sequence in order, recomputes the state of the database and recomputes the highestckpt(·) function, according to the votes in each account, to determine the confirmed checkpoint. the client accepts the highest confirmed checkpoint.
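the vote bookkeeping above (receivers accumulate weighted votes, the sender's votes get squashed, and a checkpoint is confirmed once votes at or above it reach a fraction q of stake) can be sketched as follows. this is our own illustration, not the paper's code: checkpoints are modeled as integer heights and assumed pairwise consistent.

```python
# a sketch of winkle's vote bookkeeping; votes is a dict {checkpoint: weight}.
def send(db, sender, receiver, amount, vote):
    votes = db[sender]["votes"]
    total = sum(w for _, w in votes.items())
    # sender: squash everything into a single (vote, total - amount) entry
    db[sender]["votes"] = {vote: total - amount}
    # receiver: add amount to an existing entry for this vote, or create one
    rv = db[receiver]["votes"]
    rv[vote] = rv.get(vote, 0) + amount

def highest_ckpt(db, q):
    # highest checkpoint supported (at or above it) by >= q of total stake
    s_tot = sum(w for acc in db.values() for w in acc["votes"].values())
    best = 0
    for k in sorted({c for acc in db.values() for c in acc["votes"]}):
        support = sum(w for acc in db.values()
                      for c, w in acc["votes"].items() if c >= k)
        if support >= q * s_tot:
            best = k
    return best

db = {"a1": {"votes": {0: 10}}, "a2": {"votes": {0: 5}}}
send(db, "a1", "a2", 1, vote=3)  # a1 now carries (3, 9); a2 carries (0,5),(3,1)
print(db["a1"]["votes"], db["a2"]["votes"])
print(highest_ckpt(db, q=0.5))   # 10 of 15 coins vote for ckpt >= 3, so 3
```

note how the propagation rule shows up: a2 never voted itself, yet one coin of its stake now supports checkpoint 3 because the coin it received carried the sender's vote.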
we show that this audit process ensures safety, in the sense that the confirmed checkpoint returned is guaranteed to be consistent with the state of the honest chain (i.e. the chain of transactions built from the genesis state up to the confirmed checkpoint is guaranteed to be exactly the same as in the honest chain). however, it does not guarantee freshness, in the sense that the checkpoint returned may not be the highest one in existence. clients can query multiple sources and pick the highest confirmed checkpoint. determining the latest confirmed checkpoint is especially important for determining the latest set of validators currently maintaining the consensus protocol. once the current set of validators is securely determined, their (valid for some period) signatures can be used to track updates of the database state starting from the highest known confirmed checkpoint. validation of the chain built prior to the known confirmed checkpoint does not involve any checks of validators’ signatures.
delegation
the basic winkle mechanism requires a mass of account holders to vote for a checkpoint for it to become the highest confirmed checkpoint. our experiments (see section ) suggest that many accounts within existing blockchains are dormant, and would therefore seldom vote. in turn this leads to checkpoints being confirmed in months (up to three years for bitcoin), during which the system is vulnerable to validator old-key compromise. to reduce this latency we introduce a simple delegation model. an account may be created with a ‘delegate’ field db[a].delegate referencing the address of another account ad with no delegate field. this indicates that the voting power of account a is delegated, and therefore contributes, to the weight of account ad. this mechanism only allows for a single level of delegation. accounts that delegate still need to include a vote for the latest block when they transact.
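the delegate field and single-level delegation can be sketched as below, together with the counting rule detailed next in the text (only the fresher of the delegator's or delegate's vote counts, so stake is never counted twice). this is our own illustration: votes are modeled as hypothetical (checkpoint, time) pairs, and all names are ours.

```python
# a sketch of delegated vote counting with single-level delegation.
def effective_vote(db, addr):
    acc = db[addr]
    d = acc.get("delegate")
    if d is None:
        return acc["vote"]
    own, deleg = acc["vote"], db[d]["vote"]
    # count the stake once, with whichever vote is more recent
    return own if own[1] > deleg[1] else deleg

db = {
    "pool":  {"vote": (8, 100), "delegate": None},   # an active voting account
    "cold1": {"vote": (2, 10),  "delegate": "pool"}, # dormant cold wallet
    "cold2": {"vote": (9, 120), "delegate": "pool"}, # transacted after the pool
}
# cold1's stake follows the pool's fresher vote; cold2's own vote is newer
print(effective_vote(db, "cold1")[0], effective_vote(db, "cold2")[0])  # 8 9
```

this is why delegation speeds up confirmation: the pool's frequent votes carry the weight of its dormant delegators, while an account that does transact still speaks for itself.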
when computing the highest confirmed checkpoint, only the most recent vote of either the account that delegates or its delegate is counted (i.e., the stake of accounts that delegate is counted only once: either with the delegate, or independently if their own transaction is more recent than the delegate’s vote). accounts that delegate may change their choice of delegate through a transaction. when an account changes its delegate, it also includes a vote, and thus our propagation rule stays the same. for security, we assume that honest users delegate to honest delegates only. note that winkle requires weaker trust assumptions from clients than delegated proof-of-stake systems. clients need to trust the delegates, which simply confirm checkpoints and do not run a consensus protocol between themselves; any client can act as a delegate in winkle and is not required to set up complex infrastructure, in contrast to nodes running consensus. moreover, the number of delegates can be much higher than the number of validators, and each user is free to delegate to themselves if they do not trust other delegates. voting accounts act as pools for accounts that delegate. when pools vote often on behalf of dormant accounts, we expect checkpoints to be confirmed faster. to further speed up checkpointing, a system of economic incentives may be set up to encourage pools to vote often, as well as for other accounts to delegate to those that do, all while preventing over-concentration of stake.
economics of pools & decentralization. a key issue with delegation is the tendency towards centralization: delegating votes to well-operated pools is simpler for a user than voting themselves (which requires sending transactions). at a logical extreme, one pool may emerge with a large amount of voting power, which may later have its key compromised and used to perform long-range attacks.
to disincentivize such concentration of voting power, we use crypto-economic ideas from [ ] and design an incentive scheme that should maintain a close-to-constant number of pools in the system. we define a parameter u > 0 that sets the target number of pools we wish to incentivize in the system. each pool incurs some fixed cost for operating, which we denote ei for pool i. ei represents the operational pool cost, since it must be online to execute transactions and vote in an epoch, as well as an additional fee each pool must provide the system per epoch to vote. we denote by si the weighted vote delegated to pool i, and by s the total vote weight in winkle. for each pool i we define its incentive weight as wi = min(si, q·s/u). the value q is the fraction of the vote necessary to confirm a checkpoint. we assume that during an epoch of length t time units some monetary value is set aside to reward users and pools that vote to confirm this checkpoint. we define the rate of reward per unit time for each coin in an epoch as d. when a checkpoint is confirmed we observe which fraction of the q votes contributed to its confirmation. we split and assign the total reward, of total value d·t·s, to each pool i proportionately to its incentive weight wi. taking into account the costs advertised by each pool, its ‘profit’ for an epoch it contributed to confirming is: ri = (wi/(q·s))·d·t·s − ei = (wi/q)·d·t − ei. however, if the pool did not contribute a vote to confirm the epoch, then the reward is zero, and for simplicity we assume that the cost ei for this epoch is also zero. assuming that a pool manages to participate in a fraction a′ of confirmations, its expected reward per epoch is: r̄i = (a′/q)·wi·d·t − a′·ei. during each epoch any reward that is not distributed to pools is kept to reward future epochs (increasing future d values).
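a small numeric sketch of the reward rule, with parameter values chosen by us purely for illustration, makes the cap at q·s/u concrete:

```python
# a numeric sketch of the pool reward rule: incentive weight is capped at
# q*s/u, and the per-epoch profit is r_i = (w_i/q)*d*t - e_i.
q, u = 0.5, 10          # quorum fraction and target number of pools
s = 1000.0              # total vote weight in the system
d, t = 0.01, 1.0        # reward rate per coin per time unit, epoch length

def profit(s_i, e_i):
    w_i = min(s_i, q * s / u)        # cap at q*s/u = 50 in this example
    return (w_i / q) * d * t - e_i   # r_i = (w_i/q)*d*t - e_i

print(round(profit(40, 0.2), 6))   # under the cap: (40/0.5)*0.01 - 0.2 = 0.6
print(round(profit(80, 0.2), 6))   # capped at 50: (50/0.5)*0.01 - 0.2 = 0.8
print(round(profit(200, 0.2), 6))  # still 0.8: extra weight earns nothing
```

the last two lines show the mechanism at work: beyond the cap, additional delegated weight yields zero marginal return for the pool, which is the lever used in the equilibrium argument that follows.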
we assume that each pool keeps a fee fi for itself, and then distributes the remaining ri − fi to the pool participants, according to their contribution in terms of weighted votes, as an incentive to remain in the pool. if a pool does not contribute to the vote we assume it does not keep any fee or distribute any incentive to participants. we now determine a number of properties of the incentive scheme above. we assume that both pool operators and users are honest, in that they follow the winkle protocol correctly, but are rational in their choice of pools. we therefore stress (in line with recent thinking [ , ]) that the incentive mechanism is there to protect honest users against perverse incentives creating a more brittle system, rather than to argue that the incentive mechanism prevents malicious users from participating (such an argument is impossible without considering the external incentives they may have to do so, with potentially unbounded rewards).
equilibrium implies si ≤ q·s/u. our first argument is that a user has no incentive to participate in a pool with voting weight larger than q·s/u, and in fact a pool operator also has no incentive to operate such a pool. this is a straightforward implication of the incentive weight wi being capped at q·s/u: any additional votes contributed to the pool have a zero marginal rate of return for the pool, and a negative rate of return for users (since distribution of the user incentives ri − fi is done per contribution to the pool, with no cap). a rational user will always have an incentive to defect from a pool with voting weight larger than q·s/u to a pool with equivalent characteristics ei, fi and a′ but a lower voting weight. therefore in an equilibrium allocation of user voting weights to pools it must be that si ≤ q·s/u. as a result, for the remainder of our analysis we consider that wi = si ≤ q·s/u.
the lack of incentive to operate larger pools is a key part of our argument that there will be about u/a pools in the system (the second part involves bounding the number from below).
delay to confirm an epoch. for a pool to participate and vote it should expect at the very least a positive return. since a vote for a later block also counts as a vote for earlier ones, a pool may choose to vote seldom, to reap high rewards per action. we would instead like pools to vote on each block, to ensure blocks are quickly confirmed. to generate a positive return on a single block it should hold that (a′/q)·wi·d·t − a′·ei ≥ 0, therefore t ≥ (q/d)·(ei/si). as the cost ei of participating as a pool increases, so does the minimum time a pool will wait before it votes in order to guarantee a positive return. on the other hand, the larger the voting power si (subject to the upper bound above), the lower the delay in voting needed to generate a return. this dynamic establishes a lower bound on the pool delay in voting. competitive pressures also bound the delay in voting from above. pools observe the concentration of voting power amongst pools (or their delay in voting). they have to compete to be within the fraction q of voting power that confirms a block in order to generate any returns. therefore no pool has an incentive to vote any later than the time at which a fraction q of voting power would have incentives to vote, since it would risk not being included in the reward for the confirmed blocks.
characteristics of pools. let us call tm the maximum time at which the fastest set of pools (in terms of ei and si above) comprising a fraction q of the voting power have incentives to vote. any pool that finds it unprofitable to participate before this time will be excluded from the rewards systematically, and therefore will receive a zero return (its value a′ will tend to zero).
this establishes a couple of constraints on the size of the pools and their costs to operate: assuming a pool expects to make a rate of return superior to the fee fi, it must hold that (a′/q)·si·d·tm − a′·ei ≥ fi. this constrains the fee pools can charge in relation to their voting power and costs. obviously, the higher the costs they incur, the lower the fee; and the larger their voting power, the higher the fees they can charge. this dynamic creates an upwards pressure on the size of pools. in fact small pools are simply not viable, subject to the constraint (obtained by rewriting the bound above): si ≥ (q/a′)·(fi + a′·ei)/(d·tm). even by setting a fee of fi = 0, and making no profit, small enough pools are not viable due to the fixed costs ei to participate. further, all other things being equal, users always have incentives to delegate to larger pools, since those will make more profit and can therefore provide a potentially higher rate of user incentive. this competitive pressure ensures that we expect pool sizes not only to be capped from above by si ≤ q·s/u but also bounded from below, leading to a system with s/q pools of roughly equal size at equilibrium.
competition for user voting power. pools have endogenous incentives to keep their costs ei down, since any surplus can be turned into a larger profit fi; therefore if a pool has the option to lower its cost it will. we assume that the system should require a minimal payment of e to participate as a pool per epoch, to ensure those costs do not become zero. however, pools also compete for users, since it is user voting power that ultimately determines their voting power si. users can change their delegates, and we assume that they will move to pools offering larger user incentives in terms of better rates of return per vote weight delegated.
the rate of return for a user delegating one vote to pool i is: ui = (r̄i − a′·fi)/si = (a′/q)·d·t − a′·(ei + fi)/si. a pool can maintain an equal return to its users either by decreasing its fee fi or by attracting more users’ voting power to the pool. in a competitive setting users will change their delegation to pools offering higher returns. therefore at equilibrium, for two pools the rate of return per vote must be equal (ui = uj) for users not to switch. assume two pools within the fraction that confirms blocks, with likelihood a′ = q. equality of the rates of return reduces to the constraint sj/si = (ej + fj)/(ei + fi), where each pool controls only its own profit f, and can try to attract more user votes s. attracting more user votes allows a pool to increase f, but the only tool available for doing so is increasing the rate of return per user vote, which involves decreasing f (subject to the fixed costs e). this puts a downward pressure on the fee f. finally, for two pools with equal user vote share s = si = sj, the difference in fees that they are able to charge satisfies fi − fj = ej − ei: any fee difference reflects the difference in operational and pool-participation costs. therefore if one pool defects from a cartel and charges lower fees, all others will have to charge low fees, subject to their respective costs, to ensure they provide a comparable rate of return to users and to avoid them defecting. this dynamic supports a competitive ecosystem of pools. in order to propose a pool, the leader must deposit some stake, which will be the basis of the voting power in that pool. given our assumption that the distribution of stake among stakeholders is more decentralized than among validators (which we justify in section ), there cannot be one entity that controls a significant number of pools (even though one stakeholder could potentially control more than one pool).
minting and stake bleeding attacks
another type of long-range attack is the stake bleeding attack [ ]: an adversary, in a forking chain, may accumulate the rewards associated with the creation of new blocks in order to inflate its stake, until it accumulates enough to confirm an inconsistent checkpoint. to protect against such attacks we require every minting event to take effect only after the block containing it gets confirmed as a checkpoint. different proof-of-stake blockchains use different reward and minting mechanisms, and some also contract the monetary supply. for example, there can be a minting key capable of creating new coins; this key does not have to stay honest and secure in perpetuity. since our mechanism guarantees that, without minting, an alternative forking block cannot be accepted as a checkpoint, even if the old minting key is leaked to the adversary the alternative minting transaction will never take effect, as it would have to be checkpointed by the old money supply. the amount of money allowed to be minted or destroyed at a time does not have to be limited, as long as the fraction of stake in the hands of the adversary remains bounded as per our assumptions (see section ).
key rotation and account healing
accounts are controlled by signature keys. key rotation operations shorten the lifetime of keys and prevent obsolete keys from issuing new transactions. however, this does not prevent a variant of long-range attacks on winkle: an adversary that gains access to an account’s old signature key, one that has already been rotated to a new key, may still use the stolen key to create past transactions, interfering with the safety of winkle’s audit. it is prudent to expect that any long-term active account may at some point fail to protect a historic key.
in a purely static compromise model this account would have to be considered under the control of the adversary forever after; it is then likely that eventually the volume of stake under the control of such accounts would exceed any fractional threshold, threatening the security of winkle. we therefore adopt a more appropriate model where an adversary may compromise an account’s key, but after the key is rotated may lose control over the account. such an adversary may also compromise some old key that is not currently active. an account holder a may include a special key rotation transaction t within block bi, updating the public key associated with the account to pk′. the transaction is signed with the key currently associated with the account (namely dbi[a].pk). after the transaction t is applied, the database associates the new public key with account a. we show, in our proof, that winkle benefits from key rotations. in cases when some historic account key is compromised, but subsequent keys included in a confirmed checkpoint are not, the account does not have to be counted towards the voting weight that the adversary commands to mount a long-range attack thereafter. this provides a forward security property, which we call account healing.
security assumptions
eventually compromised validators. in our model, we assume that all validators are eventually compromised, and share their keys with the adversary after a confirmed checkpoint has been generated by winkle. (in section we evaluate how long each block waits to become a confirmed checkpoint.) the honest chain or database is created by validators while they are honest, and an adversarial chain is created by the adversary after the keys have been leaked.
without loss of generality we aggregate all validators into a single validator, and let the concrete instantiation of the consensus protocol determine the number of validators, the exact form of keys and signatures, and the conditions on their validity.
honesty of accounts. winkle leverages honest account holders for security. we assume account holders are harder to compromise than validators because they are more numerous. for example in bitcoin, according to our experiments, as of september there are accounts that hold one third of the total stake (see fig. ), compared to four miners that control more than half of the hashrate [ ]. accounts are of four types, and their type may change over time:
- active and honest (fa) accounts are connected to the true chain; they receive the latest checkpoint before the next checkpoint becomes available,
- byzantine (fb) accounts share keys with the adversary and deviate arbitrarily from the protocol,
- eclipsed (fe) accounts are eclipsed from the honest chain during the long-range attack and may transact on the adversarial chain, even though the adversary does not control their keys,
- dead (fd) accounts do not transact during the attack and stay idle.
each account belongs to only one category, therefore the fractions representing each category add up to one: fa + fb + fd + fe = 1. those fractions are calculated at the boundaries of blocks, i.e. for any checkpoint ckpt we consider the corresponding state of the database db and look at the fraction of accounts of each type in db. as explained later, the fractions are weighted by the amount of stake the accounts hold in database state db. winkle operates by accounts voting for checkpoints, which are commitments to blocks in the vbcr. denote by q the fractional voting power (the quorum) required to create a checkpoint. for safety of the system, the eclipsed accounts and the adversarial accounts should not constitute enough voting power to create a forking chain: fe + fb < q.
For liveness, Byzantine and active users together should constitute a quorum: f_A + f_B ≥ q (we assume Byzantine nodes participate in the honest chain but also equivocate to participate in an adversarial fork). The three equations above can be satisfied in multiple ways, giving a trade-off between liveness and safety of the system. At a high level, we want to minimize the voting power needed to advance the checkpoint (q → min) and maximize the number of Byzantine and/or eclipsable nodes that the system can tolerate (f_B, f_E → max). We now discuss different solutions to this system.
Typical BFT. In BFT systems the trade-offs are typically chosen as follows: q = 2/3, f_A = 1/3 + δ, f_B = 1/3 − δ, f_E = 1/3 (where δ represents a single user's fraction). More precisely, with n = 3f + 1: q = (2f + 1)/n; f_A = (f + 1)/n; f_B = f_E = f/n; f_D = 0. Note that this solution also leads to quorum intersection, where any two quorums of size q are guaranteed to intersect at some honest node. The solution also maximizes the portion of eclipsable users.
Generic trade-off. The BFT assumption is very strong: the adversary needs to compromise one third of all the coin holders and eclipse another third of the coin holders from the network, making them believe the adversarial fork. It is based on the assumption that the network is adversarial and may separate honest users. In practice, it is quite unlikely that a user will connect to the chain through a fully adversarial network, and account holders are far more likely to query multiple validators about the state of the chain. It is therefore far more difficult to eclipse accounts from the network [ ], so we can lower the bound on the eclipsed accounts and get a generic trade-off between the number of Byzantine users and the quorum size, as illustrated by the yellow-filled area in Fig. . The selected points illustrate: the typical BFT solution (point "BFT", Fig. ); a relaxed requirement on the quorum size: f_B = 1/3 − δ, q = 1/2, f_A = 1/3 + δ, f_E = 1/6 (point #1, Fig.
); and increased resilience to Byzantine users (assuming those are not incentivised to break liveness): f_B = 1/2 − δ, q = 1/2, f_A = 1/2 + δ, f_E ≈ negl (point #2, Fig. ).
Weighting accounts by value. To prevent Sybil attacks, where an adversary creates many empty accounts in order to gain control over more than an f_B fraction of accounts at no cost, we weight accounts by the amount of coins that they hold, which we call stake. For the rest of the paper, when we talk about account fractions (e.g. f_B, q), we implicitly assume that those accounts are stake-weighted. Fig. illustrates the number of keys protecting a given fraction of stake for both the Bitcoin and Ethereum blockchains: one third of the stake in Bitcoin is protected by , keys, and the same fraction of stake for Ethereum is protected by keys (snapshot taken on - - ). Note that the flexibility of the assumption may be used to enlarge the bound on Byzantine stake f_B, thus increasing the number of keys that must be compromised in order to mount the long-range attack.
Limiting transfers. Because transactions are continuously processed, the set of users and their stake is in constant flux. The adversary can potentially manipulate this flow in its fork and accumulate more than f_B stake in its chain, for example by including all transactions where it is the recipient but no transactions where it is the sender of coins. To mitigate this risk we additionally assume that at any state of the honest database the adversary controls at most f_Y of accounts, and that the amount of stake moving during any given block is at most f_X of the total amount of stake in the system. We then choose f_Y + f_X/2 = f_B to make sure that the adversary cannot accumulate more than f_B coins in its fork (see Lemma in Section ). For example, we could choose f_Y = / and f_X = / , or f_Y = / and f_X = / , depending on the power we wish to give to the adversary.
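The constraints above lend themselves to a quick feasibility check. The following sketch is not from the paper; the function names and the example δ are illustrative. It tests the partition, safety, and liveness conditions for candidate parameter points, and computes the Byzantine bound f_B = f_Y + f_X/2 used for limiting transfers.

```python
# Illustrative sketch (not the paper's code): check Winkle's parameter
# constraints using exact rational arithmetic.
from fractions import Fraction as F

def feasible(q, f_a, f_b, f_e, f_d):
    """True iff a point satisfies all three conditions:
    f_A + f_B + f_E + f_D = 1, f_E + f_B < q, f_A + f_B >= q."""
    partition = f_a + f_b + f_e + f_d == 1
    safety = f_e + f_b < q        # eclipsed + Byzantine stay below quorum
    liveness = f_a + f_b >= q     # active + Byzantine can reach quorum
    return partition and safety and liveness

def byzantine_bound(f_y, f_x):
    """Byzantine stake the adversary can amass in a fork: f_Y + f_X / 2."""
    return f_y + f_x / 2

d = F(1, 100)  # a single user's fraction (the paper's delta)
# Typical BFT point: q = 2/3, f_A = 1/3 + d, f_B = 1/3 - d, f_E = 1/3.
print(feasible(F(2, 3), F(1, 3) + d, F(1, 3) - d, F(1, 3), 0))  # True
# Without the -d margin, f_E + f_B equals q exactly and safety fails.
print(feasible(F(2, 3), F(1, 3), F(1, 3), F(1, 3), 0))          # False
print(byzantine_bound(F(1, 4), F(1, 6)))                        # 1/3
```

Using `Fraction` keeps the comparisons exact, so boundary cases such as f_E + f_B = q are classified correctly rather than being subject to floating-point rounding.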
For reference, based on measurements on Ethereum, one sixth of the money moves in roughly a week ( days in Bitcoin) and one third in two weeks (one week for Bitcoin). This is consistent with the block time frames we consider.
New-York ’ , October – , , Sarah Azouvi, George Danezis, and Valeria Nikolaenko
Figure : Generic trade-off between the fraction of Byzantine users and the quorum size.
Figure : The number of keys holding a given fraction of stake, in logarithmic scale.
Minting. Note that the adversary is only allowed to control f_Y·s stake, where s is the total amount of stake in the system. But once a minting transaction takes effect and increases the amount of stake to s′ = s + m, the adversary is allowed to control more stake: f_Y·s′ > f_Y·s. In our mechanism we require that minting happens only after the minting intent gets checkpointed: suppose that the minting intent was issued in block b_i and block b_i got checkpointed in block b_j, where j > i; then the total stake increases from s to s′ starting in block b_{j+1}. This means that by the end of block b_j, in state db_j, the adversary is still only allowed to hold at most f_Y·s compromised stake.
Delegation. Lastly, our protocol allows for the delegation of stake. In this case, we assume that honest users always delegate to honest pools. This strong assumption is necessary to ensure that no adversarial pool leader can control more than an f_Y fraction of stake in a block. Additionally, recall that since Winkle depends on the difficulty of compromising coin-holders, we need to ensure that enough pools are created (this is done using an appropriate delegation scheme, as shown in our analysis in Section ).
Security definition and argument. We provide a game-based definition for the security of Winkle that closely mimics the interactive nature of the attacks. Without loss of generality we assume that there is a single validator, and that it uses a fresh key to sign each new block.
The adversary may adaptively compromise the users' accounts, even retrospectively, but cannot hold more than f_Y of accounts' keys (and stake) at any given block; we define this requirement formally below. The adversary can submit transactions to be included in the next block, and if the next block advances the confirmed checkpoint, then the adversary gets all the validator's keys prior to this newly confirmed checkpoint. The adversary wins if it outputs a forking chain of blocks whose tip, b∗, generates a confirmed checkpoint inconsistent with the honest chain. We now put the game into more formal terms.
Definition . A validator based consensus with reconfiguration (VBCR) is secure against eventually compromised validators if, for any genesis state b and for any polynomially bounded challenger who holds all the secret keys in the system and outputs valid answers to adversarial queries, for any probabilistic polynomial-time adversary A there is a negligible function ν(λ) such that Pr(Exp_{VBCR,A}(λ) = 1) ≤ ν(λ), for λ ∈ N. The experiment Exp_{VBCR,A}(λ) is defined as the interaction with a challenger, where the challenger comprises two algorithms: key retrieval GetKey and next-block creation NextBlock; these are stateful algorithms that implicitly take the chain built up to this point as an input. GetKey(a, i, b) outputs the account key: let db_i be the state of the database built using the first i blocks of b, sk ← db_i[a].sk, Q = Q ∪ {(i, a, sk)}, output sk. NextBlock(t∗_i, b) creates and outputs the next block and the secret keys of the validators used prior to the highest confirmed checkpoint: let the next block include adversarial transactions, b_i ← [h_{i−1}, t_i || t∗_i, σ], output b_i and the validator's keys {sk_t … Let us write r = ∆ + δ for some δ > 0. In db the adversary cannot hold more than f, and thus we have (f − ∆) + (∆ + δ) − s ≤ f, which in turn implies δ − s ≤ 0; adding (∆ + δ) to both sides we equivalently get (*) ∆ + 2δ ≤ s + ∆ + δ.
Since no more than p stake can move in t, we have r + s ≤ p, therefore (**) ∆ + δ + s ≤ p. Inequalities (*) and (**) give us ∆ + 2δ ≤ p, or equivalently δ ≤ (p − ∆)/2 (***). Now, turning to maxamount, we have: maxamount = f − ∆ + ∆ + δ = f + δ ≤ f + (p − ∆)/2 ≤ f + p/2 by (***). Hence maxamount ≤ f + p/2; therefore the adversary cannot obtain more than f + p/2 in its sub-sequence of transactions t∗. □
Proof of Theorem . Suppose that the adversary has won the game and successfully produced a forgery b∗, s.t. ckpt∗ := HighestCkpt(b∗) is inconsistent with b_i. Denote the latest common ancestor block of b∗ and b_i as b_parent. The adversary has the validator's key for b_parent to sign the conflicting descendant block creating a fork. By definition of the checkpoints, at least q of weighted accounts in the adversarial chain have voted for ckpt∗ or some later one. We prove that this event contradicts at least one of our assumptions. We do so by first considering a very simple variant of our system (Case 1) and incrementally adding additional features (i.e. key rotations, minting, delegation) (Cases 2-4). Recall some of the important details of Winkle's design described in Section :
- Assumption 1: more than 1 − f_Y of keys are safe at the beginning of the block (a key is safe if all the keys under that account are not compromised within the block),
- Assumption 2: at most f_X stake moves within a block,
- Assumption 3: every transaction is included in some honest block only if it votes for the latest checkpoint (the previous most recent block).
We also assume that f_Y + f_X/2 = f_B.
Case 1. We first consider a simplified variant of our system with no key rotations (accounts never rotate keys and new accounts are not created), no delegation (no accounts have delegates), and no minting (new coins are never minted; the only coins in the system are those created in b_0). Denote the first pair of diverging blocks b_{parent+1} and b∗_{parent+1}.
By design of the system (Assumption 3), transactions can be sequenced in a block only if they vote on the previous immediate block; thus only a subset of transactions from non-adversarial accounts in b_{parent+1} can be replayed in the fork, as only those sign b_parent as a checkpoint, and moreover these transactions can only be placed in block b∗_{parent+1}. No other transactions from the honest chain can be replayed in the adversarial fork. Given that under Assumption 1 the adversary can control at most f_Y keys and eclipse at most f_E keys, and that by Assumption 2 at most f_X of stake moves in block b_{parent+1}, applying Lemma we get that there is no subset of transactions of the honest block b_{parent+1} such that, if applied to db_parent, more than f_E + f_Y + f_X/2 = f_E + f_B of weight gets concentrated jointly under the compromised and eclipsed accounts in the resulting state of the database. Thus, since f_E + f_B < q, the adversary will not be able to gather a q fraction of stake and hence will not be able to create a new checkpoint on a forking chain. Therefore the forgery is not possible; moreover, the eclipsed users can distinguish an adversarial chain from the true chain by the fact that new blocks do not become confirmed checkpoints on an adversarial chain.
Case 2: Case 1 + key rotation. We now prove the theorem in the presence of key rotations. Recall that our assumption in this case is that 1 − f_Y of the accounts are safe between two checkpoints, meaning that none of the keys from those accounts are ever leaked to the adversary during any given block. Assume that the adversary has corrupted f_Y accounts within block b_{parent+1}, which we denote (a_1, · · · , a_y). To corrupt new accounts, (a′_1, · · · , a′_{y′}), each account a′_i has to perform a key rotation operation in block b_{parent+2}; otherwise this account would be considered unsafe in block b_{parent+1} as well, making the number of unsafe accounts exceed the allowed threshold f_Y.
Due to Assumption 3, the key rotation for account a′_i will include a vote for b_{parent+1}, making it impossible to replay this key rotation transaction in the adversarial fork.
Figure : Illustration of an adversarial fork in an attempt to perform a long-range attack.
With a similar argument, accounts (a_1, · · · , a_y) have to perform a key rotation in block b_{parent+2} (otherwise these accounts would not be safe in block b_{parent+2}). This ensures the adversary cannot control more than f_Y in any block, since it cannot accumulate corrupted accounts across multiple blocks (see Figure for a visual illustration). In block b_{parent+2} the adversary has the choice of either using the newly corrupted keys (a′_1, · · · , a′_{y′}), which requires including the key rotations of accounts (a_1, · · · , a_y), or omitting all the key rotations and keeping accounts (a_1, · · · , a_y) corrupted in the adversarial chain but none of the (a′_1, · · · , a′_{y′}) accounts. In both cases the same argument as in Lemma can be made, and the maximum amount that the adversary can now hold is still f_Y + p/2, as the conditions on money spent or received from either set of accounts are similar. The proof from Case 1 is thus still valid.
Figure : The adversary that knows pk_a and pk_b cannot use both keys in its forking chain, because the second key rotation includes a commitment to the block which includes the first key rotation, healing the compromised key.
Case 3: Case 2 + delegation. We now prove the result in the case where delegation is enabled. As a reminder, in the case of delegation we assume that honest users delegate to honest delegates (Assumption 4). Additionally, we require that each delegate is active (as they are incentivized to be).
The adversary cannot control more than f_Y in the honest chain by Assumptions 1 and 4, and cannot receive more than f_X/2 of the coins by a similar argument as in Lemma . The main difference from the previous case is that now eclipsed users may vote on the adversarial chain while their delegate votes on the honest chain, and thus equivocate. Even in that case, under the assumption that no more than f_E users are eclipsed, we still have that the adversary can get at most f_B + f_E < q of the vote in its chain. Similarly to Case 2, if the adversary were to corrupt a previously honest delegate, then the delegate would have to make a key rotation that includes a vote for the latest checkpoint, preventing the adversary, as before, from accumulating stake over many blocks.
Case 4: Case 3 + minting. We now prove that an adversary cannot obtain more than f_Y of the stake in its chain even when money can be minted. As explained in Section , a minting transaction is only effective once the block that contains it has been checkpointed. We thus refer to the mint transaction as a mint-intent transaction. If the minting key has been leaked to the adversary, the adversary can include in its chain a mint intent that creates money for itself. However, as per Cases 1-3 of this proof, the adversary cannot checkpoint the alternative minting intent using the old stake. Though the adversary cannot create a minting transaction of its own, it can potentially take advantage of omitting the minting transaction, which we explore next. Note that the adversary is only allowed to control f_Y·s stake, where s is the total amount of stake in the system. But once the minting transaction takes effect and increases the amount of stake to s′ = s + m, the adversary is allowed to control more stake: f_Y·s′ > f_Y·s. We now show that the adversary cannot leverage this fact, omit the minting transaction, and hold more than f_Y·s of compromised stake in its forking chain.
Indeed, suppose that the minting intent was issued in block b_i and block b_i got checkpointed in block b_j, where j > i. If the adversary tries to omit the checkpointing of the minting intent, it has to modify block b_j or an earlier one, and therefore cannot include any of the transactions from the subsequent blocks b_{>j} of the honest chain; but only those later transactions (e.g. key rotations) actually allow the adversary to compromise more stake, so if the adversary cannot leverage them, it cannot accumulate more than f_Y·s stake. In more detail, after the minting intent gets checkpointed, the adversary can either keep the same accounts compromised (and receive more money on them) or compromise new accounts. In the first case, the additional stake that the adversary receives in order to own more than f_Y·s of the stake must arrive after b_j and thus, by Assumption 3, this stake includes a vote for checkpoint b_j; this binds the adversary to the minted coins, so if the adversary wants to increase its stake it has to include the minting transactions and cannot inflate its relative stake. Similarly, if the adversary compromises new accounts after b_j, then by Assumption 1 these new accounts need to be safe at state b_j and thus must have performed a key rotation after checkpoint b_j; otherwise these accounts would be unsafe in the previous block, contradicting our assumption. Thus the new accounts that the adversary compromises also have to include a vote for b_j, by a similar argument as before. Therefore, all the new stake that the adversary gets past block b_j has to include a vote for b_j, and therefore the adversary has to include the minting intent and the minting transaction to keep a valid chain. □
Evaluation of checkpointing delay. The key barrier to reducing Winkle's checkpoint confirmation delay is users' idleness, which is encouraged within some cryptocurrency communities (a practice known as "hodling": https://en.wikipedia.org/wiki/hodl).
Winkle: Foiling Long-Range Attacks in Proof-of-Stake Systems. New-York ’ , October – , .
We estimate the checkpoint confirmation delay (the finality time) of Winkle in the "real world", using workloads from the most popular cryptocurrency projects, namely Ethereum and Bitcoin. We pre-process the transaction graphs for these projects by assuming that each transaction in a block votes for the block that precedes it; i.e., if a transaction appears in block b_i, we consider that it votes for block b_{i−1}. For each block we then retroactively determine the confirmed checkpoint as the block that has just received a q fraction of the votes. For our simulations we choose q = 2/3, in line with the usual BFT assumptions. The finality time shows how long a block waits to get checkpointed by a q fraction of stake, i.e. the time validators need to stay honest. We use Google BigQuery [ ] on the Ethereum and Bitcoin blockchains to perform the simulations. We build a database with all addresses (or accounts) and the date of their last transaction, which corresponds to their latest vote, vote_sent. The value under the account right after the last sent transaction is counted towards the vote vote_sent. We also list the transactions received per account after the last sent transaction, apply the propagation rule (see Sec. ), and append them to the list of the account's votes. This gives us a global list of weighted votes. We rank the weighted votes by decreasing checkpoint vote (i.e., most recent first) and retroactively determine the highest confirmed checkpoint. Fig. plots the confirmed checkpoint delays for databases at different times. For Ethereum a checkpoint takes between days and up to a year to be confirmed (with mean days), and for Bitcoin a checkpoint takes between months and up to three years (with mean days).
Delegation. As shown in Fig.
, Winkle without delegation would lead to finality times of months or years. We simulate simple delegation to estimate the improvement in finality time. In our delegation scheme, we incentivize the creation of k pools [ ] of roughly equal weight. To average out monthly fluctuations, we take measurements, plot the mean as a line, and show the standard deviation as error bars in Fig. . For each month between Sep’ and Jan’ we choose delegates as the top k accounts that transacted most frequently during that month. More precisely, these are the accounts whose maximum delay between two consecutive transactions in the given month is smallest. We take those as our delegates, with equal voting power. We retroactively compute the latest checkpoint confirmed on the last day of the month by taking the vote from the two thirds of delegates who sent the newest votes. The results, for a number of delegates k ranging between a few and one million, are presented in Fig. . To put the x-axis scale in context, note that the number of transacting Ethereum accounts between Sep’ and Aug’ is million, and the number of transacting distinct Bitcoin addresses is million. As expected, the delegation scheme drastically reduces finality time: for one thousand delegates the finality time is about hours, for thousand delegates it is hours, and for thousand delegates it is . days.
Figure : Checkpoint confirmation delay without delegates.
Figure : Checkpoint confirmation delay with different numbers of delegates.
Related work. An abstract VBCR can be instantiated through a Byzantine agreement protocol such as PBFT [ ] or HotStuff [ ]. Systems such as Tendermint [ ], SBFT [ ], and Libra [ ] implement such mechanisms, and we have designed Winkle to be relevant to those designs. Long-range attacks do not occur in Nakamoto-consensus-based protocols such as Bitcoin [ ] or Ethereum [ ] that rely on proof-of-work instead of validator signatures.
However, those only offer probabilistic finality, and Winkle may act as a finality layer for them.
Previous approaches to defeating long-range attacks. Two approaches have been proposed in the literature to foil long-range attacks, besides using secure hardware [ ]. Software checkpointing [ ] periodically includes a 'checkpoint' in client software; clients never accept any past contradicting blocks. This relies on centrally trusting the software distribution process and can be modeled as a VBCR system secured by the software developers' keys; these keys leaking to the adversary allows for long-range attacks. The second approach is based on validators locking deposits [ , ] that are not returned in case of equivocation. It is difficult to determine how long the deposits should be locked after a validator becomes inactive, and locking deposits for any fixed time cannot penalize validators who lose their key after that period. Winkle can be interpreted as a refinement of those two techniques: it allows everyone to determine appropriate checkpoints in a highly decentralized manner, and allows validators that were honest to recover their deposits after their blocks become confirmed through a checkpoint.
Background and related techniques. The core of Winkle is based on confirming checkpoints using a simplified Byzantine consistent broadcast [ ] based on stake. This primitive is always safe but may not lead to a checkpoint if the initiator (the VBCR in our case) provides two equivocating checkpoints; hence we assume that validators are only eventually compromised. Winkle borrows ideas from proof-of-stake systems proposed as early as in the context of PPCoin [ ], and then Snow White [ ] and Ouroboros [ ]. Winkle makes similar security assumptions, namely that a fraction of the stake is controlled by honest clients.
However, Winkle is not a consensus mechanism but rather a finality layer, similar to Casper [ ]. Fantômette [ ] and Afgjort [ ] are finality layers that resemble Winkle; they rely, however, on validators rather than general users to provide finality. Ouroboros Genesis [ ] has a finality layer that allows new parties to bootstrap the correct blockchain without checkpointing. A number of blockchains employ client-led validation. In IOTA [ ] clients include references to multiple past transactions; consensus is secure if honest clients make more transactions than dishonest ones. EOS consensus [ ] is based on clients delegating stake to one of a set of validators, and uses 'transactions as proof of stake (TaPoS)', similar to Winkle. Winkle differs in that it is a finality layer immune to long-range attacks, rather than a full consensus protocol.
Discussion. Winkle provides a finality layer based on account stake. Other proof-of-stake protocols [ , , ] allow for more validators than VBCR protocols. A natural question to ask is how using Winkle on top of a VBCR protocol compares to these. First, even though these protocols are open to every coin-holder for participation, they require parties to run sophisticated interactive computations. Some protocols may require participants to lock some of their funds for the duration of the protocol (or longer), and in some cases a substantial amount of stake must be acquired in order to participate (e.g., the Ethereum proof-of-stake protocol would require 32 ether; see https://www.exodus.io/blog/ethereum-proof-of-stake-date/). These protocols thus still suffer from a relatively small committee size compared to their user base and would benefit from additional security against long-range attacks. Furthermore, using Winkle on top of a VBCR protocol provides instant confirmation under the assumption that validators stay honest for at least the finality time (as defined in this paper), with a smaller committee and hence better efficiency.
Snow White [ ] and Ouroboros [ ] confirmation times are quite long (on the order of hundreds of blocks), hence a VBCR protocol with Winkle provides much better latency. Winkle was inspired by the proof-of-stake literature, but it is very different from running another proof-of-stake protocol as a finality layer to confirm checkpoints. Winkle does not require additional participants to run a second-layer protocol; instead, Winkle relies on existing non-interactive users that submit transactions in an ordinary (one-way) fashion, and only requires minimal changes to the VBCR protocol itself. It is therefore much simpler than the proof-of-stake protocols: it requires no leader elections, no complicated cryptography such as cryptographic sortition [ ], no multiple rounds of voting, and no separate special committee.
Limitations. One limitation of Winkle is that a client needs to be aware of the latest state of the database, although this could be addressed by using a lighter client such as SPV (https://en.bitcoinwiki.org/wiki/simplified_payment_verification). Furthermore, although Winkle's security relies on a flexible set of assumptions, it remains an open problem to decide how to adjust the thresholds dynamically. Finally, when discussing our security assumptions in Section , we relied on the number of accounts used in Bitcoin and Ethereum. We recognize that one user may hold multiple accounts to achieve more privacy [ ], and the number of users might be a better parameter for our assumptions; but since this number is unknown and can only be speculated about, we rely on the number of keys.
Extension. Winkle may also be adapted to provide finality to proof-of-work based blockchains, which otherwise can only achieve probabilistic finality. In such systems blocks are generated through proof-of-work and a fork choice rule that privileges forks with the most work.
Clients vote for the fork they consider authoritative after sufficient time has passed to constitute an epoch and to ensure they are likely to be correct. This means clients will be voting for a block that has been confirmed in the proof-of-work sense (i.e., after x blocks have been mined on top of it). Once q of the stake confirms a checkpoint, all clients accept it, and no matter how much work an adversary commits in the future it may never be reverted. It is worth noting that in this case the set of assumptions must change, as proof-of-work requires a majority of the computational power to be honest (as opposed to validators), and Winkle would still require a majority of the coin-holders to be honest. Other protocols have been proposed to achieve finality in proof-of-work systems. Karakostas et al. [ ], for example, proposed checkpointing mechanisms on top of proof-of-work, but they require a set of additional trusted parties, and mitigating Byzantine faults among those parties brings significant complexity and interaction to the protocol. Bissias and Levine [ ] proposed an approach to stabilize the block interval time; Keller and Böhme [ ] built on their work and designed a proof-of-work blockchain with a novel puzzle and a quorum mechanism that brings finality. The solution looks promising but requires a different protocol backbone, which makes it hard to apply to existing blockchains. Lastly, Duong et al. [ ], followed by Chepurnoy et al. [ ], proposed a blockchain that combines the proof-of-work and proof-of-stake mechanisms, where, in case more than half of the compute power falls to the adversary, the honest stake may still be able to protect the system. They claim to improve the scalability of proof-of-stake systems; however, their proof-of-stake layer requires active participation from the players, who need to stay alert while registered to participate. This is very different from Winkle, which does not require stake holders to commit to staying online.
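The voting rule for this proof-of-work extension can be sketched as follows. This is an illustrative toy model, not part of the paper's design: the function name, depth parameter x, and all values are assumptions made for the example; votes count only for blocks already buried x deep, and a block is final once a fraction q of the stake has confirmed it.

```python
# Toy sketch of stake-based finality over proof-of-work (the extension
# discussed above): clients only vote for blocks at least x blocks deep,
# and a block becomes final once votes from a fraction q of the stake
# confirm it, regardless of any later, heavier adversarial fork.

def finalized(chain_height, votes, total_stake, x=6, q=2/3):
    """votes: {block_height: stake voting for that block}.
    Returns the highest finalized block, or None."""
    final = None
    for height, stake in sorted(votes.items()):
        deep_enough = chain_height - height >= x  # PoW-confirmed first
        if deep_enough and stake >= q * total_stake:
            final = height
    return final

# Block 90 is 10 deep with 80% of stake voting: final. Block 95 is only
# 5 deep, so its votes do not yet count (illustrative numbers).
votes = {90: 80, 95: 50}
print(finalized(chain_height=100, votes=votes, total_stake=100))  # 90
```

The depth requirement mirrors the text's "after x blocks have been mined on top of it": stake votes ratify blocks that proof-of-work has already probabilistically confirmed, turning that probabilistic confirmation into irreversible finality.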
In addition to proof-of-work, Winkle could be used as a finality layer for other proof-of-X-type protocols, for example proof-of-space systems such as Filecoin (https://filecoin.io/) or Chia (https://www.chia.net/), which, unlike proof-of-work cryptocurrencies, are potentially vulnerable to long-range attacks (e.g., some forms of stake-bleeding). A full proof-of-stake system may be bootstrapped through Winkle. Clients indicate on-chain whether they want to act as validators. At each block, some of them are selected that represent the most stake in the system. Their stake at this point is locked, and they act as validators to produce a block. Once the stake has shifted significantly, or after some number of blocks, they are rotated and a new set is selected. Stake is released once the last checkpoint they facilitated is confirmed by a fraction q of the stake, as per Winkle.
Conclusions. Proof-of-stake systems based on validators are a promising way to scale up blockchains, but they are based on fundamentally different security assumptions from proof-of-work. They are susceptible to validators eventually becoming compromised, and to long-range attacks. Winkle provides a decentralized approach for clients to determine and validate checkpoints that rests only on the short-term honesty of validators and the longer-term honesty of a more numerous set of all the stake holders. We show that, using delegation and assuming usage similar to Bitcoin or Ethereum, Winkle checkpoints with a delay of hours or a few days. Thus validators need only keep their signature keys safe for this short window of time, assuming validators frequently rotate the keys.
A stake locking mechanism incentivizing validators to stay honest can unlock stake after a confirmed checkpoint, giving tolerable delays. Finally, Winkle as presented has a key shortcoming: we assume that an honest account votes for the latest checkpoint. However, this creates a boundary race condition between blocks; it represents a challenge for cold wallets; and it makes transactions in mempools invalid between blocks. We took a pragmatic approach and assumed those issues are alleviated by blocks representing periods that are relatively long, on the order of a day or more. However, a key open research question relates to relaxing this condition while preserving the security and simplicity of the Winkle scheme.
Acknowledgements. The authors would like to thank Ben Maurer (Facebook Novi) for suggesting naming the system after the popular story of Rip Van Winkle: in contemporary times he would have fallen asleep and missed the cryptocurrency boom, and upon waking up would have had to do a full chain validation, subject to long-range attacks, to catch up with the world. The authors would also like to thank Kostas Chalkias for useful comments on the idea behind the project. Sarah Azouvi worked on Winkle while being a research intern with Facebook Novi (previously Calibra).
References
[ ] S. Azouvi, P. McCorry, and S. Meiklejohn. Betting on blockchain consensus with Fantomette. arXiv preprint arXiv: . , .
[ ] Sarah Azouvi and Alexander Hicks. SoK: tools for game theoretic models of security for cryptocurrencies. arXiv preprint arXiv: . , .
[ ] C. Badertscher, P. Gaži, A. Kiayias, A. Russell, and V. Zikas. Ouroboros Genesis: composable proof-of-stake blockchains with dynamic availability. In ACM CCS, .
[ ] George Bissias and Brian N. Levine. Bobtail: improved blockchain security with low-variance mining. In ISOC NDSS, .
[ ] Bitcoin pools. https://www.blockchain.com/pools. Accessed: - - .
[ ] L. Brünjes, A. Kiayias, E.
Koutsoupias, and A. Stouka. Reward sharing schemes for stake pools. arXiv preprint arXiv: . , .
[ ] E. Buchman. Tendermint: Byzantine fault tolerance in the age of blockchains. PhD thesis, .
[ ] V. Buterin. Slasher: A punitive proof-of-stake algorithm. Ethereum blog, https://blog.ethereum.org/ / / /slasher-a-punitive-proof-of-stake-algorithm/, .
[ ] V. Buterin and V. Griffith. Casper the friendly finality gadget. arXiv preprint arXiv: . , .
[ ] P. Daian, R. Pass, and E. Shi. Snow White: Robustly reconfigurable consensus and applications to provably secure proof of stake. In International Conference on Financial Cryptography and Data Security. Springer, .
[ ] Evangelos Deirmentzoglou, Georgios Papakyriakopoulos, and Constantinos Patsakis. A survey on long-range attacks for proof of stake protocols. IEEE Access, : – , .
[ ] Tuyet Duong, Alexander Chepurnoy, Lei Fan, and Hong-Sheng Zhou. TwinsCoin: A cryptocurrency via proof-of-work and proof-of-stake. In Proceedings of the nd ACM Workshop on Blockchains, Cryptocurrencies, and Contracts, pages – , .
[ ] Tuyet Duong, Lei Fan, and Hong-Sheng Zhou. -hop blockchain: Combining proof-of-work and proof-of-stake securely. Cryptology ePrint Archive, Report / , .
[ ] EOS.IO technical white paper v . https://github.com/eosio/documentation/blob/master/technicalwhitepaper.md.
[ ] Replay attack protection: Include blockLimit and blockHash in each transaction. https://github.com/ethereum/eips/issues/ .
[ ] Bryan Ford and Rainer Böhme. Rationality is self-defeating in permissionless systems. arXiv preprint arXiv: . , .
[ ] P. Gaži, A. Kiayias, and A. Russell. Stake-bleeding attacks on proof-of-stake blockchains. In Crypto Valley Conference on Blockchain Technology (CVCBT). IEEE, .
[ ] Yossi Gilad, Rotem Hemo, Silvio Micali, Georgios Vlachos, and Nickolai Zeldovich. Algorand: Scaling Byzantine agreements for cryptocurrencies. In Proceedings of the th Symposium on Operating Systems Principles, pages – , .
[ ] Google BigQuery.
https://console.cloud.google.com/bigquery.
[ ] G. Gueta, I. Abraham, S. Grossman, D. Malkhi, B. Pinkas, M. Reiter, D. Seredinschi, O. Tamir, and A. Tomescu. SBFT: A scalable and decentralized trust infrastructure. In th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). IEEE, .
[ ] D. Imbs and M. Raynal. Simple and efficient reliable broadcast in the presence of Byzantine processes. arXiv preprint arXiv: . , .
[ ] Dimitris Karakostas and Aggelos Kiayias. Securing proof-of-work ledgers via checkpointing. Cryptology ePrint Archive, Report / . https://eprint.iacr.org/ / .
[ ] Patrik Keller and Rainer Böhme. HotPoW: Finality from proof-of-work quorums. arXiv preprint arXiv: . , .
[ ] A. Kiayias, A. Russell, B. David, and R. Oliynykov. Ouroboros: A provably secure proof-of-stake blockchain protocol. In CRYPTO' . Springer, .
[ ] S. King and S. Nadal. PPCoin: Peer-to-peer crypto-currency with proof-of-stake. https://www.peercoin.net/whitepapers/peercoin-paper.pdf, .
[ ] W. Li, S. Andreina, J. Bohli, and G. Karame. Securing proof-of-stake blockchain protocols. In Data Privacy Management, Cryptocurrencies and Blockchain Technology. Springer, .
[ ] B. Magri, C. Matt, J. Nielsen, and D. Tschudi. Afgjort: A semi-synchronous finality layer for blockchains. .
[ ] Sarah Meiklejohn, Marjori Pomarole, Grant Jordan, Kirill Levchenko, Damon McCoy, Geoffrey M. Voelker, and Stefan Savage. A fistful of bitcoins: Characterizing payments among men with no names. In Proceedings of the Conference on Internet Measurement Conference, pages – , .
[ ] C. Miguel and L. Barbara. Practical Byzantine fault tolerance. In OSDI, .
[ ] Satoshi Nakamoto. Bitcoin: A peer-to-peer electronic cash system. https://bitcoin.org/bitcoin.pdf, .
[ ] S. Popov. The Tangle. https://iota.org/iota_whitepaper.pdf.
[ ] A. Singh, T. Ngan, P. Druschel, and D. S. Wallach. Eclipse attacks on overlay networks: Threats and defenses. In IEEE INFOCOM' , .
[ ] The LibraBFT Team.
State machine replication in the Libra blockchain. https://developers.libra.org/, .
[ ] G. Wood. Ethereum: A secure decentralised generalised transaction ledger, EIP- revision. http://gavwood.com/paper.pdf, .
[ ] M. Yin, D. Malkhi, M. K. Reiter, G. G. Golan, and A. Ittai. HotStuff: BFT consensus in the lens of blockchain. arXiv preprint arXiv: . , .
[ ] V. Zamfir. Introducing Casper "the friendly ghost". Ethereum blog, https://blog.ethereum.org/ / / /introducing-casper-friendly-ghost, .

Contents: Abstract; Introduction; Background and research question; The Winkle mechanism (basic Winkle scheme; delegation; minting and stake-bleeding attacks; key rotation and account healing); Security assumptions; Security definition and argument; Evaluation of checkpointing delay; Related work; Discussion; Conclusions; References.

GitHub - mjordan/islandora_workbench: A command-line tool for managing content in an Islandora repository
README.md

Islandora Workbench

A command-line tool that allows creation, updating, and deletion of Islandora content from CSV data. Islandora Workbench is an alternative to using Drupal's built-in Migrate tools for ingesting Islandora content from CSV files. Unlike the Migrate tools, Islandora Workbench can be run anywhere: it does not need to run on the Islandora server. The Migrate tools, however, are much more flexible than Islandora Workbench, and can be extended using plugins in ways that Workbench cannot. Note that this tool is not related in any way to the Drupal contrib module called Workbench.
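The CSV-over-HTTP workflow described above can be sketched roughly as follows. This is an illustrative sketch, not Workbench's actual code: the endpoint path and field names are hypothetical placeholders, and the real tool adds extensive validation, authentication, and media handling driven by a YAML configuration file.

```python
import csv
import io
import json
import urllib.request

# Hypothetical endpoint; real Workbench reads the host from a YAML config file.
DRUPAL_URL = "http://localhost:8000"

def build_node_payload(row):
    """Map one CSV row to a Drupal-style node payload (illustrative field names)."""
    return {
        "type": [{"target_id": "islandora_object"}],
        "title": [{"value": row["title"]}],
    }

def ingest(csv_text, post=None):
    """Create one node payload per CSV row; `post` is injectable for testing."""
    created = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        payload = build_node_payload(row)
        if post is not None:
            # In a real run this would POST to the live Drupal site.
            post(DRUPAL_URL + "/node?_format=json", payload)
        created.append(payload)
    return created

def http_post(url, payload):
    """Send one node-creation request; a real run would also send auth headers
    and check the HTTP response code."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

Separating row-to-payload mapping from the HTTP call is what lets a tool like this validate an entire CSV before touching the server.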
Features

- Allows creation of Islandora nodes and media, updating of nodes, and deletion of nodes and media from CSV files
- Allows creation of paged/compound content
- Can run from anywhere: it communicates with Drupal via HTTP interfaces
- Provides robust data validation functionality
- Supports a variety of Drupal entity field types (text, integer, term reference, typed relation, geolocation)
- Can provide a CSV file template based on Drupal content type
- Can use a Google Sheet or an Excel file instead of a local CSV file as input
- Allows assignment of Drupal vocabulary terms using term IDs, term names, or term URIs
- Allows creation of new taxonomy terms from CSV field data
- Allows the assignment of URL aliases
- Allows adding alt text to images
- Supports transmission fixity auditing for media files
- Cross-platform (written in Python; tested on Linux, Mac, and Windows)
- Well tested
- Well documented
- Provides both sensible default configuration values and rich configuration options for power users

A companion project under development, Islandora Workbench Desktop, will add a graphical user interface that enables users not familiar or comfortable with the command line to use Workbench.

Documentation

Complete documentation is available.

Post-merge hook script

Islandora Workbench requires the Islandora Workbench Integration Drupal module, and it is important to keep Workbench and the Integration module in sync. When you pull in updates to this Git repo, the following script checks the repo's log, and if it finds the word "module" in the commit message of the last three commits, it prints the message "Note: make sure you are running the latest version of the Islandora Workbench Integration module." The script will also tell you if you need to run Python's setup.py script to install newly added libraries.

```shell
#!/bin/sh
#
# Git hook script that notifies you to update the Islandora Workbench Integration
# module if the last commit messages contain the word 'module'. Also notifies
# you if you need to run setup.py to install newly added libraries.
#
# To enable this hook, create a file in your .git/hooks directory named 'post-merge'.

if git log -n 3 --format=format:"%s" | grep -qi module; then
    echo "Note: make sure you are running the latest version of the Islandora Workbench Integration module."
fi

if git log -n 3 --format=format:"%s" | grep -qi setup; then
    echo "Note: you need to run 'python setup.py install' to install some newly added Python libraries."
fi
```

To use this reminder, place the script above at islandora_workbench/.git/hooks/post-merge and make it executable (i.e., chmod +x post-merge).

Current maintainer

Mark Jordan

Contributing

Bug reports, improvements, feature requests, and PRs welcome. Before you open a pull request, please open an issue. If you open a PR, please check your code with pycodestyle:

pycodestyle --show-source --show-pep --ignore=e ,w --max-line-length= .

Also provide tests where applicable. Tests in Workbench fall into two categories:

- Unit tests (that do not require Islandora), which are all in tests/unit_tests.py and can be run with `python tests/unit_tests.py`. Unit tests on Workbench's Drupal fields handlers (these also do not require Islandora) are in tests/field_tests.py and can be run with `python tests/field_tests.py`.
- Integration tests that require a live Islandora instance running at http://localhost: , which are all in tests/islandora_tests.py and can be run with `python tests/islandora_tests.py`. The Islandora Playbook is the recommended way to deploy the Islandora used in these tests. Note that if an Islandora integration test fails, nodes and taxonomy terms created by the test before it fails may not be removed from Islandora.

Some integration and field tests output text that begins with "Error:". This is normal; it's the text that Workbench outputs when it finds something wrong (which is probably what the test is testing).
Successful test runs (whether they test for success or failure) will exit with "OK". If you can figure out how to suppress this output, please visit this issue. If you want to run the tests within a specific class in one of these files, include the class name like this: python tests/unit_tests.py testcomparestings

Contributing to documentation

Contributions to Islandora Workbench's documentation are welcome. If you have a suggestion for improving the documentation, please open an issue on this repository's queue and tag your issue "documentation".

License

Unlicense

Serious Money - Wikipedia

Serious Money
From Wikipedia, the free encyclopedia

Satirical play by Caryl Churchill

Written by: Caryl Churchill
Music by: Ian Dury
Place premiered: Royal Court Theatre, London, England
Original language: English
Subject: The world of arbitrageurs, junk bonds and greenmail, white knights and corporate raiders
Genre: Comedy, satire
Setting: s, London and New York

Serious Money is a satirical play written by Caryl Churchill, first staged in London in . Its subject is the British stock market, specifically the London International Financial Futures and Options Exchange (LIFFE). Often considered one of Churchill's finest plays along with Cloud ( ) and Top Girls ( ), it is notable for being largely written in rhyming couplets.
Plot summary

The plot follows Scilla and Jake, who are enjoying the pleasures and comforts of the upper class. The story takes a turn when Jake Todd turns up murdered in the first few scenes because of his underground trading. Scilla takes it upon herself to find her brother's killer and the money he was dealing. She later finds out that he was being investigated by the Department of Trade and Industry. Though she does not find the killer, she finds the American businesswoman Marylou Banes, with whom Jake was dealing, and Marylou Banes offers her a fresh start. The story takes place around the stock market troubles in Britain. A second storyline follows Billy Corman's and Zac Zackerman's attempt to take over the Albion company from Duckett. During this takeover, Corman attempts to get Jacinta Condor and Nigel Ajibala (foreigners with an interest in his takeover) to buy shares in his company. They support Corman but decide to give their bid to Duckett in the end. The plot ends with Greville Todd in jail, Corman appointed as a lord, and Scilla happily working for Marylou Banes.

Productions

Serious Money was developed at the Royal Court Theatre in London, directed by Max Stafford-Clark. It opened in March and was an immediate hit. After its initial engagement it transferred to Wyndham's Theatre in the West End, where it enjoyed an extended run. Serious Money was produced on Broadway, opening on February , at the Royale Theatre. Some changes were made for the Broadway run, including a reference to the stock market crash of . The show closed after previews and performances. The play has fared better at American regional companies, such as the Berkeley Repertory Theatre in California. A successful revival was given at the UK's Birmingham Repertory Theatre in .
On July , Serious Money opened at the Shaw Festival in Niagara-on-the-Lake, ON, directed by Eda Holmes.[ ] A run of Serious Money was put on by students of Bristol Old Vic Theatre School at Circomedia, Portland Square, from November until November .

Awards and nominations

- Laurence Olivier Award for Best New Play
- Obie Award for Best New American Play

Notes

^ "Archived copy". Archived from the original on - - . Retrieved - - .

References

- Gussow, Mel ( February ). "The Stage: Serious Money". New York Times. Retrieved - - .
- Rich, Frank ( December ). "The Stage: Serious Money". New York Times. Retrieved - - .

Further reading

- Churchill, Caryl ( ). Serious Money: A City Comedy (first ed.). London: Methuen. ISBN - - - .

External links

- Serious Money at the Internet Broadway Database
- Serious Money at the Internet Off-Broadway Database
Categories: English plays; Broadway plays; Plays by Caryl Churchill; West End plays; Off-Broadway plays; Obie Award-winning plays; Laurence Olivier Award-winning plays

This page was last edited on December , at : (UTC). Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply.
GitHub - reidmorrison/semantic_logger: Semantic Logger is a feature rich logging framework, and replacement for existing Ruby & Rails loggers.
README.md

Semantic Logger

Semantic Logger is a feature rich logging framework, and replacement for existing Ruby & Rails loggers. https://logger.rocketjob.io/

Documentation

Semantic Logger Guide

Upgrading to Semantic Logger v .

With some forking frameworks it is necessary to call reopen after the fork. With v . the workaround for Ruby . crashes is no longer needed; please remove the following line if it is being called anywhere:

SemanticLogger::Processor.instance.instance_variable_set(:@queue, Queue.new)

Logging destinations

Logging to the following destinations is supported "out-of-the-box": file; screen; Elasticsearch (use with Kibana for dashboards and visualizations); Graylog; Bugsnag; NewRelic; Splunk; MongoDB; Honeybadger; Sentry; HTTP; TCP; UDP; syslog; any existing Ruby logger as another destination; or roll your own.

Semantic Logger is capable of logging thousands of lines per second without slowing down the application. Traditional logging systems make the application wait while the log information is being saved. Semantic Logger avoids this slowdown by pushing log events to an in-memory queue that is serviced by a separate thread that only handles saving log information to multiple destinations / appenders.
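The queue-and-worker design described above can be illustrated with a minimal sketch (shown in Python for brevity; Semantic Logger itself is Ruby, and its real appender, formatter, and batching machinery is far richer than this):

```python
import queue
import threading

class AsyncLogger:
    """Minimal illustration of queue-based logging: callers only enqueue,
    and a background thread performs the (potentially slow) writes."""

    def __init__(self, appenders):
        self.appenders = appenders            # list of callables, e.g. file writers
        self.queue = queue.Queue()
        self.worker = threading.Thread(target=self._drain, daemon=True)
        self.worker.start()

    def log(self, level, message):
        # Fast path: never blocks on I/O, just enqueues the event.
        self.queue.put((level, message))

    def _drain(self):
        # Background thread: pop events and fan them out to every appender.
        while True:
            event = self.queue.get()
            if event is None:                 # sentinel pushed by close()
                break
            for appender in self.appenders:
                appender(event)

    def close(self):
        # Flush remaining events, then stop the worker.
        self.queue.put(None)
        self.worker.join()
```

For example, `AsyncLogger([records.append])` with a plain list as the "appender" collects events in order without the caller ever waiting on a write.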
Rails

When running Rails, use rails_semantic_logger instead of Semantic Logger directly, since it will automatically replace the Rails default logger with Semantic Logger.

Rocket Job

Check out the sister project Rocket Job: Ruby's missing batch system. It fully supports Semantic Logger when running jobs in the background, with complete support for job metrics sent via Semantic Logger to your favorite dashboards.

Optional dependencies

The following gems are only required when their corresponding appenders are being used, and are therefore not automatically included by this gem:

- Bugsnag appender: gem 'bugsnag'
- MongoDB appender: gem 'mongo' . . or above
- NewRelic appender: gem 'newrelic_rpm'
- Syslog appender: gem 'syslog_protocol' . . or above
- Syslog appender to a remote syslog-ng server over TCP or UDP: gem 'net_tcp_client'
- Splunk appender: gem 'splunk-sdk-ruby'
- Elasticsearch appender: gem 'elasticsearch'
- Kafka appender: gem 'ruby-kafka'

V Upgrade Notes

The following changes need to be made when upgrading to v :

- Ruby v . / JRuby v . is now the minimum runtime version.
- Replace calls to Logger#with_payload with SemanticLogger.named_tagged.
- Replace calls to Logger#payload with SemanticLogger.named_tags.
- MongoDB appender requires Mongo Ruby client v or greater.
- Appenders now write payload data in a separate :payload tag instead of mixing it directly into the root elements, to avoid name clashes.
As a result, any calls like the following:

logger.debug foo: 'foo', bar: 'bar'

must be replaced with the following in v :

logger.debug payload: {foo: 'foo', bar: 'bar'}

Similarly, for measure blocks:

logger.measure_info('how long is the sleep', foo: 'foo', bar: 'bar') { sleep }

must be replaced with the following in v :

logger.measure_info('how long is the sleep', payload: {foo: 'foo', bar: 'bar'}) { sleep }

The common log call has not changed, and the payload is still logged directly:

logger.debug('log this', foo: 'foo', bar: 'bar')

Install

gem install semantic_logger

To configure a stand-alone application for Semantic Logger:

require 'semantic_logger'

# Set the global default log level
SemanticLogger.default_level = :trace

# Log to a file, and use the colorized formatter
SemanticLogger.add_appender(file_name: 'development.log', formatter: :color)

If running Rails, see: Semantic Logger Rails.

Author

Reid Morrison

Versioning

This project uses Semantic Versioning.
Vision # : Opportunity | Everybody's Libraries

Everybody's Libraries: libraries for everyone, by everyone, shared with everyone, about everything

Vision # : Opportunity

Posted on December , by John Mark Ockerbloom

In just thirty days from when this post appears, a new crop of works will join the public domain. Exactly what will come out of copyright will vary by country. In Europe and other places with "life+ years" copyright terms, works by authors who died in will join the public domain on January , . In countries that still have "life+ years" terms, works by authors who died in will. And in the United States, copyrights that were secured in that are still in force will expire.

As an American, I'm especially excited about the works in that last set. For most of the st century to date, almost nothing entered the public domain in the US, after a law extended copyright terms by years. Then last year, all copyrights still active from expired, and we finally had a Public Domain Day here with lots of new published works that many people noted and celebrated. And it looks like we're going to get another big set of works from in the public domain next month.

Last year, I was so excited about the coming of the first substantial Public Domain Day here in a long time that I wrote advent calendar posts every day in December, discussing works from that would (and did!) join the public domain in . It was a lot of fun, but also a lot of work. I thought it worth the effort, though, to note such a big change in the copyright environment we'd grown accustomed to. But I wasn't planning to do all that work again this year.

A few things, though, have made me reconsider, at least in part.
One of them was an article I saw today about a new collection of stories by Zora Neale Hurston being published in early , Hitting a Straight Lick with a Crooked Stick. Now recognized as a major th century American writer, Hurston published works in a variety of genres and forums from the s through the s. However, she was not well known outside of African American and literary scholarship circles until the s, when Alice Walker wrote an article in Ms. magazine in appreciation of her work, and her novel Their Eyes Were Watching God was reprinted and became a best-seller.

Hurston's new collection brings back into print a number of her early short stories, which the publisher's blurb describes as "lost" and "in forgotten periodicals and archives". My first thought on reading the blurb was to be annoyed about the erasure of the librarians and archivists who collected, cataloged, and preserved those publications and thereby ensured that they were not, in fact, lost or forgotten. But then, on further reflection, I realized that for much of the general public, they might as well have been lost, since many people do not have easy access to the libraries and archives that hold them.

One of Hurston's early stories, "Drenched in Light", appeared in the December issue of Opportunity: Journal of Negro Life, which published a variety of articles, stories, poems, studies, and art by African Americans. The journal began in , and HathiTrust opened access to its first volume on Public Domain Day at the start of this year. My listing for the journal also includes some later volumes of the magazine, since as it turns out, the publishers did not renew copyrights for issues prior to the s. (Most of its authors didn't renew their contributions either, as you can see in the full set of renewals we've found for Opportunity.) My listings do not yet, however, include the volume.
HathiTrust has a scan of it, but neither they nor anyone else has yet opened access to it, presumably because no one with a scan feels confident enough about its rights status to do so yet. I expect it to become visible in days, when 's remaining copyrights expire in the US and HathiTrust opens its volumes from .

Those without access to Opportunity in print might be able to read "Drenched in Light" before then in Hurston's previously published complete stories collection. But they won't be able to view the rich context in which it first appeared, from all the other writers and artists who had work published in Opportunity in – even though, as I noted in one of last year's advent calendar entries, many early African-American publications, including many of Hurston's stories, did not get renewed copyrights.

Between now and Public Domain Day , I'll be posting on works published in , both the famous and the obscure, that I look forward to coming into clearer view in the new year. Some will be joining the public domain on January . Some, like the Opportunity issues, are already in the public domain, but are not as widely accessible as they could be. (Though many of them can be found in my library, and perhaps in yours.) I won't write a post every day, but I hope to publish a fair number on a variety of works by the new year. You're welcome to participate, either directly, such as by suggesting works or contributing comments, or indirectly, such as by contributing further information about what's in the public domain or soon will be. (Our copyright information for Opportunity, for instance, is part of Penn's serials copyright knowledge base that you can add to.)

I hope Public Domain Day will be an annual cause for celebration in the United States and elsewhere. I want new arrivals to the public domain to become routine, but not taken for granted, lest the public domain be frozen again as it was for far too many years.
I hope this series of posts, and other work being done by libraries, readers, and fans of the public domain worldwide, help us recognize the treasures of the public domain and bring more of them to light.

About John Mark Ockerbloom: I'm a digital library strategist at the University of Pennsylvania, in Philadelphia. This entry was posted in publicdomain, serials.
Everybody's Libraries. Blog at WordPress.com.

Open Data Day – Google Groups

Hi there! Really excited you could join this mailing list! People here are helping organize Open Data Day events in their local cities - please feel free to ask for help, ideas and suggestions from your peers located around the world.
Dave

Recent conversations in the group:

- Promoting some keynote events/lectures during Open Data Day (James Hamilton, Jul): "Hello, this week we published a blog post inviting organisations and individuals from around the world…"
- What we learned about Open Data Day (James Hamilton, Jun): "Hello everyone, I created a Google Doc here with the list of ideas from the blogpost. Please feel free…"
- Farewell Stephen, many thanks (James Hamilton, … David Eaves, Jun): "Huge thank you to Stephen - am so grateful for all the work you put into making Open Data the success…"
- Open Data Day on Saturday th March? (Stephen Abbott Pugh, … Nbani Friday, Jun): "Hi Kate, thanks so much for your mail, however Lekeh Development Foundation (LEDEF) Nigeria is…"
- Survey: tell us what you think of Open Data Day (Stephen Abbott Pugh, May): "Thank you for taking part in Open Data Day and coming together to celebrate open data wherever…"
- Open Data Day - it's a wrap! (Stephen Abbott Pugh, Contenidos Eduna, Apr): "Thanks Stephen for everything. Greetings to the work team. Att. Mónica…"
- Take part in EU Open Data Days - apply by st May (Stephen Abbott Pugh, Apr): "Dear all, Open Knowledge Foundation is happy to announce that we have partnered with the Publications…"
- Results of Open Data Day - Santos/SP-Brazil (Thiago Ferauche, Stephen Abbott Pugh, Mar): "Thanks for sharing the results of your Open Data Day event, Thiago. It was great to see so many…"
- Invitation to BOIL's "Bangladesh Open Data Day Celebration" (Paromita Basak, Mar): "Dear everyone, greetings from Bangladesh Open Innovation Lab (BOIL)! Hope you are doing well. I am…"
- New Open Data Day events listings and search (Stephen Abbott Pugh, … Nbani Friday, Feb): "Dear colleagues, thanks so much for your email. I'm just a young lady with passion to defend and…"
- Enter the Open Data Day photo and video competition (Stephen Abbott Pugh, Feb): "Hi everyone, every year, hundreds of Open Data Day events take place to celebrate open data in…"
- Meet the organisations receiving Open Data Day mini-grants (Stephen Abbott Pugh, Feb): "The Open Knowledge Foundation is happy to announce the list of organisations from all over the world…"
- Help me raise funds for Art + Feminism on my birthday (February) (Sadik Shahadu, Feb): "Dear colleagues, apologies for crossposting. This year for my birthday, I would…"
- Announcing a new partner for Open Data Day mini-grants (Stephen Abbott Pugh, … Oksana Izakova, Feb): "Hello everybody! Cool news at the beginning of the working week, we are already in the process of…"
- How to run your Open Data Day event online (Stephen Abbott Pugh, Moses Kwereba Gathua, Jan): "Hello. I intend to have an Open Data Day that will bring a small number of participants together in a…"
- Launching the Open Data Day mini-grant scheme; apply by pm GMT on Friday th February (Stephen Abbott Pugh, Jan): "Hi everyone, I am thrilled to announce that once again the Open Knowledge Foundation is giving out…"
- Deadline to submit a session proposal for MozFest is fast approaching! (Sadik Shahadu): "Hello, apologies for cross-posting. The deadline to submit a session proposal for MozFest is…"
- Thank you for your feedback about Open Data Day. Here's what we learned. (James Hamilton): "Hello, many thanks to those who replied to our survey on Open Data Day. We really appreciate you…"
- Survey - tell us how you think we can better support Open Data Day (James Hamilton): "Greetings everyone. While Stephen is away, I'm helping out with Open Data Day. Today OKF…"
- Open Data Day will take place on Saturday th March (Stephen Abbott Pugh): "Dear Open Data Day members, following consultation with the community here, I am pleased to announce…"
- Open Data Day? (Thiago Ferauche, … Stephen Abbott Pugh): "Dear all, thanks to everyone for confirming that Saturday th March looks like a great date for Open…"
- (Paulina Wilner, Ralf Janser): "Guess an Android for Outlook would be a bit more helpful. Besides, this post is spam, plz stop this…"
- Open Data Day: it's a wrap! (Stephen Abbott Pugh, … Khumbo Bangala Chirembo): "Hello, we remain grateful for the grants we received from the Open Knowledge Foundation, which enabled…"
- Water crises and the coronavirus - what to do? (Markos): "Take part in the 'Encontro das Águas' (Meeting of the Waters) and learn more about the theme of water. Information…"
- Coronavirus tech Facebook group (Steven Clift): "Coronavirus tech Facebook group, for all of those in civic tech land hoping to share tech across…"
- Windhoek OpenDataDay was not on the map after registering the event twice (Benjamin Akinmoyeje, Stephen Abbott Pugh): "Thank you, Stephen. I appreciate your support and timely assistance. Kind regards, Benjamin…"
- Open Data Day - datahub.io (Michael Polidori): "Hi, I'm Michael, I work at Datopian with Rufus Pollock and others. We're really happy to…"
- Celebrating the tenth Open Data Day on Saturday th March (blogpost from OKFN CEO) (Stephen Abbott Pugh): "Hi everyone, as things gear up for the tenth Open Data Day tomorrow (Saturday th March),…"
- Who can help with changing info about organizers at map? (Oksana Izakova, Stephen Abbott Pugh): "Dear Stephen, many thanks! You're the best for such promptness! 😊 Oksana Izakova | Project…"
- Open Data Day in #Cambodia (Chenda Kun): "Hello friends and network, I would like to share the information about Open Data Day in Cambodia…"

Stake-Bleeding Attacks on Proof-of-Stake Blockchains

Peter Gaži, IOHK. Email: peter.gazi@iohk.io
Aggelos Kiayias, University of Edinburgh & IOHK. Email: aggelos.kiayias@ed.ac.uk
Alexander Russell, University of Connecticut. Email: acr@cse.uconn.edu

Abstract—We describe a general attack on proof-of-stake (PoS) blockchains without checkpointing. Our attack leverages transaction fees, the ability to treat transactions "out of context," and the standard longest-chain rule to completely dominate a blockchain. The attack grows in power with the number of honest transactions and the stake held by the adversary, and can be launched by an adversary controlling any constant fraction of the stake. With the present statistical profile of blockchain protocols, the attack can be launched given a few years of prior blockchain operation; hence it is within the realm of feasibility for PoS protocols.
Most importantly, it demonstrates how closely transaction fees and rewards are coupled with the security properties of PoS protocols. More broadly, our attack must be reflected and countered in any future PoS design that avoids checkpointing, as well as in any effort to remove checkpointing from existing protocols. We describe several mechanisms for protecting against the attack that include context-sensitivity of transactions and chain density statistics.

I. Introduction

Proof-of-stake (PoS) blockchain protocols were envisioned as a solution to the immense energy demands of miner nodes in proof-of-work (PoW) based blockchain systems. PoS was proposed in discussions in the Bitcoin forum and adopts the principle that the right to produce a new blockchain block should be awarded to a stakeholder with probability proportional to their current stake, as documented by the blockchain itself. Conceivably, such a blockchain discipline could yield desirable ledger properties without consuming significant real-world resources: no substantial energy expenditure would have to be invested to run the protocol. Such protocols would naturally replace the assumption of an honest majority of hashing power with the assumption of an honest majority of stake in the system.

While the potential virtues of such PoS protocols are substantial, it was argued early on that the design of such schemes could be particularly challenging (see, e.g., [BGM]) or perhaps even infeasible (see, e.g., [Poe]). One particularly critical threat in the PoS setting was documented by Buterin [But], who referred to it as the problem of "long-range attacks" (also related to the concept of "costless simulation" in, e.g., [Poe]). This refers to the ability of a minority set of stakeholders to execute the blockchain protocol starting from the genesis block (or any sufficiently old state) and produce a valid alternative history of the system.
Confronted with such an alternative history and no other outside information beyond the genesis block (see, e.g., the Bitcoin forum post by user QuantumMechanic at https://bitcointalk.org/index.php?topic= and the ensuing discussion), a freshly joining node would have no ability to reliably distinguish between this alternate history and the actual history. It follows that with such an attack a minority set of stakeholders could double-spend or erase past transactions, violating the fundamental persistence property of the resulting ledger.

In the same blog post [But], however, a glimmer of hope was also provided: it was observed that the blockchains produced by such a minority set of stakeholders may have characteristics that could be used to distinguish them from the actual blockchain maintained by the honest majority. In particular, if timestamps are included in each block, a simple simulation of the protocol by a minority set of stakeholders would result in a blockchain that is more sparse in the time domain; as a result, a longest-chain rule at any particular moment would favor the blockchain produced by the honest parties.

A number of PoS protocols were proposed and implemented, e.g., the PPCoin [KN] and NXT [Com] cryptocurrencies. Recent efforts have additionally begun to rigorously analyze security in the PoS setting, leading to protocols with formal guarantees such as Algorand [Mic], Ouroboros [KRDO], Snow White [BPS], and Ouroboros Praos [DGKR]. For the sake of the upcoming exposition, it will be useful to split these protocols into two classes:

1) Eventual-consensus protocols, which apply some form of a longest-chain rule to the blockchain. In this setting the immutability of a block increases gradually with the number of blocks created on top of it.
2) Blockwise-BA protocols, which achieve the immutability of every single block via a full execution of a Byzantine agreement (BA) protocol before moving on to the production of any subsequent block.

Of the above-listed PoS protocols, Algorand is a blockwise-BA protocol, while all the other protocols aim for eventual consensus. (Note that we include only PoS protocols for which a sufficiently detailed whitepaper exists; cf. Fig. .) Looking ahead, our investigation proves relevant for the design of eventual-consensus PoS protocols; we mention Algorand here for the sake of comparison.

All of these protocols had to confront the problem of long-range attacks, which was eventually understood to be even more serious than originally thought. The additional complication, aptly named "posterior corruption" in [BPS], is the observation that simply examining timestamps will not be sufficient for dealing with long-range attacks. In fact, an attacker can attempt to corrupt the secret keys corresponding to accounts that possessed substantial stake at some past moment in the history of the system. Assuming that such accounts have small (or even zero) stake at the present time, they are highly susceptible to bribery (or simple carelessness) which would expose their secret keys to an attacker. Armed with such a set of (currently low-stake) keys, the attacker can mount the long-range attack, and in this case the density of the resulting blockchain in the time domain could be indistinguishable from the honestly generated public blockchain.

To address posterior corruption and other long-range attacks, a number of mitigating approaches have been employed (sometimes in conjunction); they can be organised into three types:

(i) Introduce some type of frequent checkpointing mechanism that enables nodes to be introduced to the system by providing them a relatively recent block.
(ii) Employ key-evolving cryptography [Fra] that calls for users to evolve their secret keys so that past signatures cannot be forged, even when a complete exposure of their current secret state takes place.
(iii) Enforce strict chain density statistics, where the expected number of participating players at any step of the protocol is known; thus alternative protocol execution histories that exhibit significantly smaller participation can be immediately dismissed as adversarial.

Out of the above-mentioned PoS schemes, all eventual-consensus protocols (i.e., NXT, PPCoin, Ouroboros, Snow White, and Ouroboros Praos) employ the first mitigation strategy and assume some form of checkpointing. Ouroboros Praos employs the first and the second approach (key-evolving signatures) to additionally handle adaptive corruptions, while Algorand adopts the second and the third approach (strict chain density statistics) to the same end.

It is worth appreciating the distinction between these methods of addressing posterior corruption and long-range attacks. Checkpointing neutralizes the problem entirely by enabling nodes to ignore alternative chains that are not consistent with the most recent checkpoint known to the node. However, this comes with a significant model restriction: for any type of checkpointing to work, nodes must either be frequently online (so that they adopt a recent checkpoint block they have received from the network as active participants) or receive reliable (trusted) information when (re)introduced to the system after a long period of being offline (or when they first join). This amounts to an additional trust assumption necessary for secure operation of the system, and as such is clearly undesirable in a decentralized, permissionless setting.
Similarly, enforcing strict chain density statistics requires reliably estimating the number of participants at any stage of the protocol and is also model-restricting: the protocol will not be able to operate in an environment permitting an arbitrary number of parties to be invoked for execution. On the other hand, key-evolving cryptography is a more algorithmic mitigation that comes with a minimal requirement on the model: nodes should merely have the ability to erase private state. Algorithmic mitigations seem clearly preferable to model-restricting ones whenever available.

It is important to observe that key-evolving cryptography, the only algorithmic mitigation listed above, focuses specifically on the issue of posterior corruption; in particular, it is unclear if key evolution can thwart all possible long-range attacks. Thus, our work is motivated by the following question: is key-evolving cryptography sufficient to prevent all possible long-range attacks, and in this way achieve PoS that does not need to rely on any model-restricting mitigations?

A. Our Results

We answer the above question in the negative by introducing a new class of long-range attacks against eventual-consensus PoS protocols, called stake-bleeding attacks. Stake-bleeding is an effective strategy for mounting a long-range attack that does not rely on posterior corruption; thus it cannot be prevented by key-evolving cryptographic techniques. The only requirement for the attack is that the underlying blockchain protocol allows transaction fees to be used as rewards for running the protocol, a standard feature in blockchain protocols to incentivize participation in ledger maintenance.

The idea of the attack is as follows: an attacking stakeholder minority coalition launches a long-range attack that at the same time includes all transactions that have been posted in the honestly maintained public blockchain.
Given that the fees from the transactions will be used in some way to reward those who produce the blocks, a large portion of the transaction fees in the private attacker blockchain will be collected by the malicious coalition (fees originating from accounts that do not exist in the private chain would have to be forfeited). Assuming the blockchain system has run for a substantial period of time, it is conceivable that the accrued transaction fees will turn the attacking minority coalition into a majority that will be able to advance the private blockchain at a speed faster than the honestly maintained public blockchain. Due to the costless-simulation nature of the long-range attack, it would be possible to mount a stake-bleeding attack from an arbitrary point in the past (assuming checkpointing is either not used or extends sufficiently far back into the past), and thus the attacking coalition could rewrite the history of transactions.

We prove that the theoretical bound on how far back the attacker would have to go in the history of the PoS system to launch the attack is ≈ (1 − αA)/f, where αA denotes the relative stake of the minority coalition and f is the relative fee volume made available per unit of time.

Using the Bitcoin blockchain as a basis for a feasibility evaluation, on November th the -day average of transaction fees per block was . BTC. The BTC in circulation on this same day were about . million, giving a relative fee rate of . · 10−. (Sources: https://www.smartbit.com.au/charts/transaction-fees-per-block and https://blockchain.info/charts/total-bitcoins.) It follows that, at the current rate, a hypothetical PoS blockchain with the same fee-currency profile as Bitcoin would be of theoretical interest only. (Note that this is just for the sake of example, as the Bitcoin blockchain itself is immune to long-range attacks; the point to consider is a hypothetical PoS-based blockchain that has the same statistical characteristics as the Bitcoin blockchain.) Nevertheless, with a -fold increase in total transaction fees per unit of time, a stake-bleeding attack would be feasible, requiring less than years' worth of history for a % attacker; cf. Figure . In particular, this indicates that stake-bleeding attacks must be a design consideration in the general threat model for long-lived PoS blockchain systems.

Fig. (attacker relative stake vs. years of operation): Years of blockchain history needed to launch a stake-bleeding attack assuming a minimum relative transaction fee volume of . · 10− per minute (a -fold increase based on recent values, from the rd of November, drawn from the Bitcoin blockchain) in a hypothetical PoS blockchain.

We then consider possible mitigation strategies for stake-bleeding attacks. First, one can observe that stake-bleeding attacks would result in a private blockchain that initially exhibits a sparse block density in the time domain that gradually increases. This may be atypical for honestly maintained blockchains and could be used as part of the chain selection rule. Nevertheless, a different mitigation that is much simpler to implement is to introduce context in each transaction: a context-sensitive transaction is a transaction that includes the hash of the blockchain at some recent prior point. It is easy to see that such transactions cannot be transferred to an alternative blockchain that is privately maintained by a malicious set of stakeholders. We note that this mitigation has been considered before for a different purpose; see [Lar], where it was employed to prevent an attacker from transferring "coin-age-destroyed" to a secretly maintained blockchain.

To conclude, we illustrate a systematized presentation of long-range attacks, their requirements, and the ways they can be mitigated in Figure . We observe that stake-bleeding attacks would adversely affect all currently proposed eventual-consensus PoS protocols if the checkpointing mechanism were removed.
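As a rough illustration of the ≈ (1 − αA)/f bound above, the sketch below computes how much prior history a hypothetical attacker would need. The specific numbers used (a 30% attacker, a relative fee volume of 1e-7 of total stake per one-minute slot) are hypothetical inputs chosen for illustration only, not figures taken from the paper.

```python
# Back-of-the-envelope estimate of the stake-bleeding bound t ~ (1 - alpha) / f.
# All numeric inputs here are hypothetical; they only illustrate the bound's shape.

def slots_needed(alpha: float, fee_rate: float) -> float:
    """Slots of prior blockchain history needed by an attacker holding relative
    stake `alpha`, when relative transaction fees amounting to `fee_rate`
    (expressed as a fraction of total stake) become available per slot."""
    if not 0.0 < alpha < 1.0:
        raise ValueError("alpha must lie strictly between 0 and 1")
    if fee_rate <= 0.0:
        raise ValueError("fee_rate must be positive")
    return (1.0 - alpha) / fee_rate

# Hypothetical example: a 30% attacker against a chain whose fee volume is
# 1e-7 of total stake per one-minute slot.
minutes = slots_needed(alpha=0.30, fee_rate=1e-7)
years = minutes / (60 * 24 * 365)  # roughly 13 years of history
```

Note how the requirement shrinks as either the attacker's relative stake or the fee volume grows, matching the qualitative discussion above.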
Therefore, it has to be taken into account in any future effort to remove the undesirable checkpointing mechanism from these protocols, as well as when designing new eventual-consensus PoS protocols that do not rely on checkpointing. Introducing context-sensitivity in transactions is a simple "algorithmic" mitigation mechanism that can thus be added to the design arsenal of PoS blockchain protocols in order to relax model assumptions such as negligible transaction fees or frequent checkpointing. (Note that an increased total fee volume does not necessarily mean that the fee per transaction needs to increase; it would be sufficient for the blockchain system to process a larger number of transactions per unit of time. Indeed, a -fold increase in block size (from MB to MB) was among various proposals that were vigorously debated in the period –, ultimately leading to a hard fork of the Bitcoin blockchain; for the original rationale behind the -fold increase, see http://gavintech.blogspot.co.uk/ / /twenty-megabytes-testing-results.html.)

II. Preliminaries

A. The Computational Model

The stake-bleeding attack can be launched even in a generous computational model that affords many advantages to the blockchain protocol:

• The adversary requires no control over message delivery: the attack can be launched in a fully synchronous communication and computation environment, with all messages, including those generated by the adversary, delivered by reliable broadcast.
• The adversary requires no dynamic corruptions: the attack can be launched by a fixed collection of adversarial parties determined at the beginning of the execution.
• The adversary requires no introduction of new parties or deactivation of honest parties: the attack can be launched with a static population of fully participating parties.

Below, we outline a simple, strong computational model reflecting the features mentioned above. The model is obtained by suitably strengthening the framework from [KRDO], and is sufficient to support our attack. We emphasize that adopting such a strong model only broadens the applicability and strength of the attack, which can be launched in typical blockchain models that provide the adversary significantly more power [GKL], [PSS], [BPS], [DGKR].

a) Time, slots, and synchrony: We consider a setting where time is unambiguously divided into discrete units called slots; participating parties are equipped with synchronized clocks that indicate the current slot. The model additionally permits reliable, synchronous broadcast: each party may broadcast, at the beginning of each time slot, a message which is then reliably delivered to all other parties by the end of the slot.

b) Adversarial corruption: The model involves a fixed collection of participating parties U. An adversary A in our model is associated with a fixed subset of adversarial parties. We overload the symbol A to denote the subset of adversarial parties; the set of honest parties is denoted H. Honest parties are active at all times, receive all messages sent by the other parties, and follow the protocol under consideration. The adversary is activated in each slot, and may arbitrarily direct the behavior of adversarial parties. Note that messages sent by adversarial parties are subject to the broadcast constraint: they are synchronously delivered to all honest parties.

c) The Init functionality; initial stake and transactions; the environment: The model is associated with an (idealized) initialization functionality Init. The Init functionality is parameterized by an initial stake distribution; this is an assignment of nonnegative numbers to the players, which we write as S = ((u1, s1), . . . , (un, sn)). The functionality InitS operates as follows:

• Prior to any computation of the parties, the functionality determines, for each party u ∈ U, a pair of public and private keys (pku, sku).
• during the protocol, the functionality responds to a message from the user u of the form key with sku , the secret key sku of the user u. • during the protocol, the functionality responds to any message of the form genesis block with the “genesis block” b consisting of the initial stake distribution and the public keys associated with the users. the model introduces a further entity: the environment z. in our setting, the environment is merely responsible for generating http://gavintech.blogspot.co.uk/ / /twenty-megabytes-testing-results.html http://gavintech.blogspot.co.uk/ / /twenty-megabytes-testing-results.html fig. : overview of long-range attacks, the associated attack requirements, possible mitigations and our results. the term “pure longest chain rule” refers to a chain selection rule that considers the length of the blockchain as the sole criterion. a mitigation is classified as algorithmic if it prevents the attack by hardening the protocol without weakening the model; it is model-restricting if it strengthens the model assumptions so as to put the attack outside of the model or to otherwise restrict the execution environment in a significant way that is incongruent with the intended operational setting of decentralised blockchain protocols such as bitcoin. note that algorand is a blockwise-ba protocol, the other depicted protocols are of the eventual-consensus type. we include all pos protocols for which a sufficiently detailed whitepaper exists (specifically, ppcoin [kn ], nxt [com ], algorand [mic ], ouroboros [krdo ], snow white [bps ], and ouroboros praos [dgkr ]). 
Others, such as Casper and BitShares, are not sufficiently well documented to include in the comparison.*

* Specifically, the description of Casper [BG17] merely provides a "finality" layer on top of a non-specified PoS system; regarding BitShares [SL15], the whitepaper for distributed consensus supposed to be found at http://docs.bitshares.org/bitshares/papers/index.html is not available (checked at the time of writing).

transactions, which are provided as inputs to the parties. In particular, in each round of the protocol, the environment may provide each party with a collection of transactions; these have the form (u, u′, s), which calls for the transfer of s stake from party u to party u′. (For our attack, it suffices to consider an environment that is simply a fixed schedule of transactions delivered to the parties. Note that a typical blockchain security model would imbue the environment with significant further powers: an information channel to the adversary, adaptive choice of transactions, scheduling of message deliveries, etc.)

Finally, given an initialization functionality Init_S and an environment Z, an execution of a protocol consists of the genesis block B₀, the secret keys of the parties, the sequence of transactions delivered to the players by the environment, and the entire sequence of messages broadcast by the players.

B. Blockchains, ledgers and proof-of-stake protocols

a) Transaction ledger properties: A blockchain is a data structure which associates with each time slot (at most) one block. Individual blocks consist of a collection of transactions, in addition to protocol-specific metadata. In the context of the model described above, we assume that the genesis block B₀ appears as a default initial block in any blockchain, associated with time 0. A chain also immediately induces a stake distribution, S_C, given by applying the transactions in the chain to the stake distribution of the genesis block.
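The initialization functionality and the induced stake distribution S_C lend themselves to a compact sketch. Everything below is illustrative: hash-derived strings stand in for real signature keypairs, a chain is a plain list of blocks, a block is a list of (u, u′, s) transfers, and transaction-validity checking is elided.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Keypair:
    pk: str
    sk: str

class Init:
    """Idealized Init_S, parameterized by an initial stake distribution
    S = ((u1, s1), ..., (un, sn))."""

    def __init__(self, stake):
        self.stake = dict(stake)
        # Fix one keypair per party before any computation takes place.
        self.keys = {
            u: Keypair(
                pk=hashlib.sha256(f"pk|{u}".encode()).hexdigest(),
                sk=hashlib.sha256(f"sk|{u}".encode()).hexdigest(),
            )
            for u in stake
        }

    def key(self, u):
        """Respond to a KEY message from party u with its secret key sk_u."""
        return self.keys[u].sk

    def genesis_block(self):
        """Respond to a GENESIS BLOCK message with B0: the initial stake
        distribution together with the parties' public keys."""
        return {u: (self.stake[u], self.keys[u].pk) for u in self.stake}

def induced_stake(genesis_stake, chain):
    """Apply the transactions recorded in `chain` to the genesis stake
    distribution, yielding the induced distribution S_C."""
    stake = dict(genesis_stake)
    for block in chain:
        for sender, receiver, amount in block:
            stake[sender] -= amount
            stake[receiver] = stake.get(receiver, 0) + amount
    return stake

init = Init({"u1": 60, "u2": 40})
s0 = {"alice": 50, "bob": 50}
c = [[("alice", "bob", 10)], [("bob", "carol", 5)]]
```

Note that `induced_stake` preserves the total stake, matching the zero-rewards setting assumed later in the analysis.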
For a blockchain C, we let C⌈ℓ denote the prefix of C obtained by removing the last ℓ blocks.

Intuitively, a blockchain protocol Π permits a collection of parties to collectively maintain a common ledger. We will focus on protocols that, in fact, maintain an individual ledger for each party (at each point in time); the notion of a common ledger is guaranteed by appropriate persistence and liveness properties of the protocol Π:

Persistence. Once a node of the system proclaims a certain transaction tx as stable, the remaining honest nodes, if queried, will either report tx in the same position in the ledger or will not report as stable any transaction in conflict with tx. Here the notion of stability is a predicate parameterized by a security parameter k; specifically, a transaction is declared stable (by a party with chain C) if and only if it appears in C⌈k.

Liveness. If all honest nodes in the system attempt to include a certain transaction, then after the passing of time corresponding to u slots (called the transaction confirmation time), all nodes, if queried and responding honestly, will report the transaction as stable.

Intuitively speaking, a secure blockchain protocol Π guarantees that these properties are possessed by the ledgers (recorded in the blockchains) held by the honest parties, under appropriate constraints on the adversary A.

b) Chain selection rules; the longest chain rule: We focus our attention on protocols defined by a chain selection rule: each step of the protocol calls for certain players to broadcast a blockchain; players then apply a selection rule which may result in replacing their local chain with one of the broadcast chains. We focus on the "longest chain rule": broadcast blockchains are checked for validity (a protocol-dependent property), following which the longest valid chain, including the one held by the player, is adopted.
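The selection rule and the stability predicate can be sketched as follows. The function names and data layout (a chain as a list of blocks, a block as a list of transactions) are illustrative assumptions, as is breaking ties toward the lexicographically larger chain; the text only fixes that ties are broken lexicographically.

```python
def longest_valid_chain(local_chain, broadcast_chains, is_valid):
    """Longest-chain rule: among the locally held chain and all broadcast
    chains, keep only the valid ones (validity is protocol-dependent) and
    adopt the longest, breaking length ties lexicographically."""
    candidates = [c for c in [local_chain] + broadcast_chains if is_valid(c)]
    return max(candidates, key=lambda c: (len(c), c))

def is_stable(tx, chain, k):
    """A party with chain C declares a transaction stable iff it appears in
    C^(k): the prefix obtained by dropping the last k blocks."""
    prefix = chain[:len(chain) - k] if k > 0 else chain
    return any(tx in block for block in prefix)
```

Persistence then says that two honest parties never disagree on a transaction both of them report stable under `is_stable`.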
Length is simply the number of blocks; for concreteness, we assume ties are broken lexicographically.

c) Proof-of-stake protocols: We focus on ledgers that are maintained via a proof-of-stake protocol Π, which confers the right to extend a chain to a party u with probability proportional to the party's stake in (a prefix of) the chain.

Stake-proportional growth. The probability of a party being allowed to extend a given chain C (an event denoted as ExtendOpportunity_Π) is proportional to the stake under the control of this party according to S_{C⌈ℓ}, the stake distribution induced from C⌈ℓ. Here ℓ is a protocol-specific parameter, typically related to the security parameter k discussed above. We are intentionally vague about the details of the probability space in the description above, as this depends on the details of the underlying proof-of-stake protocol. Additionally, we ignore the issue of "persistence depth" in Theorem 1 below, simply setting ℓ = 0; accounting for this would change the conclusion by an additive ℓ factor.

d) Relative stake and honest majority: As a matter of notation, for a set of parties X and a stake distribution S, we denote by S(X) the stake held by the parties in X. At a particular moment in the execution of a blockchain protocol (often understood from the context), we let α_X ∈ [0, 1] denote the relative stake of the parties in X; specifically, this is the quantity S_C(X)/S_C(U), where C is the chain held by the honest users. (Note that due to the broadcast assumption, all honest players hold the same longest valid chain in each slot.) We say that an execution of Π has an honest majority if α_A < 1/2 at every step of the protocol.

e) Block rewards and transaction fees: Most blockchain protocols involve some form of block rewards and transaction fees.
To be able to make generic statements about all the considered protocols, let us introduce the following notation:

• fees_Π(E, i) denotes the total fees (as a fraction of the total stake) of all new transactions that were created by Z in slot i of execution E.
• rewards_Π(C, i) denotes the total amount of coins that were created by the protocol Π and given to the party creating the block in the blockchain C in slot i.
• transfers_{X→Y}(C, i) denotes the total amount transferred from parties in X to parties in Y on the blockchain C in slot i.

III. The Stake-Bleeding Attack

A. Attack description

We first informally describe how our attack operates in the context of a generic proof-of-stake blockchain defined by a protocol Π. To simplify the presentation, we assume throughout that the attacker controls some moderate proportion of stake α_A < 1/2.

The adversary A simulates the honest protocol Π and maintains a local copy of the current blockchain (denoted C) as prescribed by this protocol. Additionally, it also maintains an alternative blockchain Ĉ that is initially empty and is kept hidden from the honest parties. The adversary checks in every time slot whether it is allowed to extend the chain C or Ĉ according to the rules of the protocol Π. It skips all opportunities to extend C, hence not contributing to its growth at all. On the other hand, whenever an opportunity to extend Ĉ arises, A extends Ĉ with a new block, and inserts into this new block all the transactions from the honest chain C that are not yet included in Ĉ and are valid in the context of Ĉ (or as many of them as allowed by the rules of Π). This entitles A to receive (on Ĉ) any block-creation reward and any transaction fees coming from the included transactions. As the protocol progresses, with overwhelming probability both C and Ĉ will be growing, with C growing more quickly.
While the relative stake of A on C will possibly be decreasing due to the block-creation rewards granted to block creators in C, its relative stake on the chain Ĉ will be growing due to both the block rewards and the transaction fees. Under some realistic assumptions on the relative sizes of the transaction fees and block rewards (spelled out in Section III-B), the adversarial relative stake in Ĉ will eventually exceed the honest relative stake in C. From this point on, the chain Ĉ grows faster (in expectation) than the chain C and eventually becomes longer. If Π uses the plain longest-chain rule (rejecting blocks that claim future slots), A can now easily violate the persistence of the ledger by publishing Ĉ, which will be adopted by all honest parties following Π. Moreover, if A adds a transaction to the end of Ĉ just before publishing it, transferring enough stake to honest parties that it no longer controls the majority, it will not even violate the "honest majority" assumption described in Section II-B.

A more concise description of the adversary that executes our attack is given in Figure 2. The description uses a generic ExtendOpportunity_Π(C) predicate that is true whenever A is allowed to extend a given chain C according to the rules of Π; additionally, length(C) denotes the length of the chain C from the perspective of the adversary. The adversary A maintains its view of the public chain C according to Π and its own private chain Ĉ, both initially empty. A follows Π with the following exceptions:

• Upon ExtendOpportunity_Π(C): do nothing.
• Upon ExtendOpportunity_Π(Ĉ): extend Ĉ with a new block containing all transactions from C that are not yet in Ĉ and do not compromise the validity of Ĉ according to Π. Keep Ĉ private.
• Upon length(Ĉ) > length(C): transfer the stake majority in Ĉ to H; publish Ĉ according to Π.

B. Attack analysis

The proof-of-stake protocol Π has to satisfy several properties in order to be susceptible to the attack described in Section III-A.
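The adversary's loop above can be simulated directly under the simplifying assumptions used in the analysis below: no block rewards, no honest-to-adversarial transfers, and fees worth a fraction f of the total stake per slot, all of which bleed into the private chain once the adversary includes the corresponding transactions. All names and parameter values are illustrative.

```python
import random

def stake_bleeding_time(alpha_a, f, seed=0, max_slots=10**6):
    """Monte-Carlo sketch: in each slot the public chain C is extended with
    probability equal to the honest relative stake on C (constant 1 - alpha_a),
    while the private chain C_hat is extended with probability equal to the
    adversary's relative stake on C_hat, which starts at alpha_a and grows by
    f per slot as honest transaction fees are absorbed.  Returns the first
    slot t with length(C_hat) > length(C)."""
    rng = random.Random(seed)
    adv_stake_hat = alpha_a
    len_c = 0
    len_c_hat = 0
    for t in range(1, max_slots + 1):
        if rng.random() < 1 - alpha_a:      # honest parties extend C
            len_c += 1
        if rng.random() < adv_stake_hat:    # adversary extends C_hat
            len_c_hat += 1
        # fees bleed to the adversary; relative stake is capped at 1
        adv_stake_hat = min(adv_stake_hat + f, 1.0)
        if len_c_hat > len_c:
            return t
    return None
```

In typical seeded runs with, e.g., alpha_a = 0.4 and f = 0.01, the private chain overtakes the public one within a few hundred slots.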
Fig. 2: Adversary A against an eventual-consensus proof-of-stake protocol Π.

The main requirements are:

(i) No frequent checkpoints. The protocol Π must operate according to the longest-chain rule: out of all valid chains seen by the honest parties, Π prescribes that they adopt the longest one. While some deviations from this requirement are possible, Π must necessarily allow reorganizations reaching long into the past: if a maximum depth of reorganization is specified and small (i.e., an honest party is not allowed by Π to change its view of the main chain more than several blocks (or slots) into the past, even if there is an otherwise-preferable candidate chain), then the attack is not applicable.

(ii) Transaction fees. The protocol Π has to involve transaction fees or, more broadly, any transfers of coins from transacting parties to the parties maintaining the ledger. In greater detail, the attack only succeeds if Ĉ eventually grows faster than C. Since the growth speed of C (resp. Ĉ) is proportional to the relative stake of the honest parties in C (resp. of the adversary in Ĉ), we need the latter to eventually exceed the former. Observe that the relative stake of the adversary in Ĉ is increased in every slot i when it creates a block by:
• the reward for this block, rewards_Π(Ĉ, i);
• all transfers from honest to adversarial parties, transfers_{H→A}(Ĉ, i);
• all the fees ∑_{j=i′+1}^{i} fees_Π(E, j) for all slots j ≤ i that followed after the slot i′ containing the previous block in Ĉ.
On the other hand, the relative stake of the honest parties in C is increased in every slot when a block is created in C by rewards_Π(C, i) (not by any fees_Π, as all fees in C are paid by honest parties), and decreased by transfers_{H→A}(C, i). We assume transfers_{A→H}(C, i) = transfers_{A→H}(Ĉ, i) = 0.

(iii) Context-oblivious transactions.
The valid transactions produced according to Π need to be oblivious to the context in which they are to be used within the blockchain: Π must allow A to take transactions from C and use them in the different context of Ĉ.

(iv) Validity of low-growth chains. The protocol Π has to support "sleepy majority" to make sure that the chain Ĉ, which is being extended only by a minority of stakeholders (and hence exhibits small chain growth at its beginning), is still considered valid according to the rules of Π.

In the following theorem, we give an estimate of the number of slots that are needed to perform our attack, as a function of the initial adversarial stake α_A and the amount of fees created in transactions in each slot. For the sake of simplicity, we analyze the case 1/3 < α_A < 1/2, even though the attack works for any constant α_A > 0 (see the remarks at the end of the section for the explicit bound).

Theorem 1. Let Π be a proof-of-stake blockchain protocol with stake-proportional growth satisfying the conditions (i)-(iv) above. Consider an execution of the protocol Π with the adversary A given in Figure 2. Assume that transfers_{H→A}(C, i) = 0, rewards_Π(C′, i) = 0, and fees_Π(E, i) ≥ f are satisfied in execution E for both C′ ∈ {C, Ĉ} and all i > 0. Let 1/3 < α_A < 1/2 denote the initial relative stake of the adversary A, and let T denote the slot in which length(Ĉ) > length(C) occurs. Then we have

E[T] ≤ (3 − 6α_A)/f,

and T will be tightly concentrated around its expectation.

Proof: Let α_P^{C′}[i], for P ∈ {A, H}, C′ ∈ {C, Ĉ} and i > 0, denote the relative stake of the set of players P in chain C′ in slot i (recall that A and H denote the adversary and the honest parties, respectively). Additionally, let length_i(C′) denote the length of the chain C′ in slot i from the perspective of the adversary. Then the inequality E[length_T(Ĉ)] > E[length_T(C)] translates (due to the stake-proportional growth assumption) to

∑_{i=1}^{T} α_A^{Ĉ}[i] > ∑_{i=1}^{T} α_H^{C}[i].   (1)
Since rewards_Π(C′, i) = 0 and the fees in C are all paid (and received) by honest parties, we have α_H := α_H^{C}[i] = 1 − α_A for all i > 0, and hence ∑_{i=1}^{T} α_H^{C}[i] = T(1 − α_A).

To lower-bound the sum on the left-hand side of (1), define t_1 (respectively t_2) to be the minimum slot that satisfies α_A^{Ĉ}[t_1] ≥ α_H (respectively α_A^{Ĉ}[t_2] ≥ 2α_H − α_A). Since the relative stake α_A^{Ĉ} grows by at least f per slot (as A includes all transactions from C into Ĉ), we get α_A + (t_1 − 1)f ≤ 1 − α_A (and similarly for t_2), which gives us

t_1 ≤ (1 − 2α_A)/f + 1  and  t_2 ≤ (2 − 4α_A)/f + 1.   (2)

(We commit a slight imprecision here by neglecting that the actual stake only grows after the transactions are included in a block; however, this has no noticeable impact on our argument.)

Note now that α_A^{Ĉ}[i] can be lower-bounded by

α_A^{Ĉ}[i] ≥ α_A for i < t_1;  1 − α_A for t_1 ≤ i < t_2;  2 − 3α_A for i ≥ t_2.

Therefore, (1) will be satisfied for any T that satisfies

α_A(t_1 − 1) + (1 − α_A)(t_2 − t_1) + (2 − 3α_A)(T − t_2 + 1) > (1 − α_A)T.   (3)

Using (2) and solving for T gives us the desired bound. The concentration follows from the fact that the lengths of both C and Ĉ at some slot i are determined by a sum of independent random variables, one for each slot 1 ≤ j ≤ i. □

We note that we weakened the statement of Theorem 1 in several ways in order to simplify the presentation of its proof. First, we focus on 1/3 < α_A < 1/2, as otherwise the event defining t_2 would never occur. Nonetheless, it is easy to see that while our attack benefits from higher (sub-50%) initial adversarial stake, it can be performed also with α_A < 1/3 with a slightly modified analysis. Second, Theorem 1 assumes zero block rewards and transfers from H to A. However, recall that transfers_{H→A}(C, i) are completely controlled by the environment, subject only to the restrictions described in Section II-A.
(This is to capture that the security of the blockchain protocol does not rely on any particular assumption regarding the transactions that are to be stored in the ledger; rather, it must operate securely for any such sequence of transactions.) Hence, for any rewards_Π(C, i), a situation where the same analysis applies can simply be achieved by setting transfers_{H→A}(C, i) so that the honest stake ratio in C remains constant (as the adversarial stake ratio in Ĉ will keep increasing). On the other hand, non-zero rewards_Π(Ĉ, i) only makes the attack succeed faster. Finally, observe that we are quite pessimistic in the analysis, lower-bounding the values α_A^{Ĉ}[i] as if they were not changing except in the slots t_1 and t_2; by a more careful accounting one can obtain a better bound T ≈ (2 − 4α_A)/f.

C. Implications for existing PoS protocols

We now summarize to what extent the preconditions described in Section III-B are satisfied by various PoS protocols, both those coming from the academic literature and real-world deployments, implying that stake bleeding would be a consideration for them. We focus primarily on eventual-consensus protocols; nevertheless, we make a note on the applicability of our attack concept to the blockwise-BA setting. All of the eventual-consensus protocols employ some form of checkpointing, presumably to prevent posterior corruption attacks; this general countermeasure prevents the stake-bleeding attack (and any other long-range attack) as well, in a trivial (and model-restricting) manner. (Note that we include only PoS protocols for which a sufficiently detailed whitepaper exists; cf. Fig. 1.) Interestingly, if we remove checkpointing, all of the considered eventual-consensus constructions would be susceptible to our attack, as they all satisfy the conditions (ii)-(iv) from Section III-B: they admit transaction fees, their transactions are context-oblivious, and low-growth chains are considered valid. In more detail, we have the following.

Nxt and PPCoin.
The Nxt protocol only allows reorganizing a bounded number of most recent blocks, hence forming a so-called moving checkpoint and violating condition (i) from Section III-B. A similar checkpointing mechanism is employed in PPCoin [KN12].

Snow White. The Snow White protocol [BPS16] also uses moving checkpoints to prevent the posterior corruption attack, and would also be susceptible to the stake-bleeding attack without them.

Ouroboros. Ouroboros [KRDO17] uses moving checkpoints as a part of its maxvalid chain-selection rule, neutralizing long-range attacks. Without checkpointing, Ouroboros would be susceptible to both posterior-corruption and stake-bleeding attacks, as it does not employ key-evolving cryptography.

Ouroboros Praos. Ouroboros Praos [DGKR17] uses the same maxvalid chain-selection rule as Ouroboros, imposing moving checkpoints. Without this countermeasure, Ouroboros Praos would still neutralize posterior corruption attacks thanks to its use of key-evolving signatures for signing blocks; however, it would be susceptible to our stake-bleeding attack.

Algorand. Algorand [Mic16], as already discussed, is not an eventual-consensus protocol, but rather follows the blockwise-BA approach. Nonetheless, one can consider the applicability of the core idea of the stake-bleeding attack to Algorand, aiming to create an alternative sequence of blocks and exploiting stake bleeding to gain a temporary majority of stake there. However, in the case of Algorand this attack is prevented by requiring a sufficient fraction of stakeholders to certify the outcome of each BA, which can be seen as violating requirement (iv) in Section III-B. In fact, Algorand enforces a strict participation rule and hence it can always find the correct protocol execution, in a model-restricting fashion.

As already indicated, the above results imply that any attempt to remove model-restricting assumptions from these protocols needs to put into play, at minimum, some countermeasure against the stake-bleeding attack.
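The protocol survey above can be restated as a small machine-checkable summary. The condition names and the encoding are ours; the booleans simply transcribe the discussion, describing each protocol with its checkpointing (or, for Algorand, BA-certification) countermeasure stripped away.

```python
# Requirements (ii)-(iv) per protocol once checkpointing is removed, as
# discussed in the text.  The attack applies exactly when all hold.
SUSCEPTIBLE_WITHOUT_CHECKPOINTS = {
    "Nxt":             {"fees": True, "context_oblivious_txs": True, "low_growth_valid": True},
    "PPCoin":          {"fees": True, "context_oblivious_txs": True, "low_growth_valid": True},
    "Snow White":      {"fees": True, "context_oblivious_txs": True, "low_growth_valid": True},
    "Ouroboros":       {"fees": True, "context_oblivious_txs": True, "low_growth_valid": True},
    "Ouroboros Praos": {"fees": True, "context_oblivious_txs": True, "low_growth_valid": True},
    # Algorand requires stakeholders to certify each BA outcome, so a
    # low-growth private chain is never accepted, violating requirement (iv):
    "Algorand":        {"fees": True, "context_oblivious_txs": True, "low_growth_valid": False},
}

def stake_bleeding_applies(conditions):
    """The attack is applicable iff every listed requirement is met."""
    return all(conditions.values())
```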
We discuss these in the final section.

IV. Mitigations

A natural way to remedy our attack is to modify the protocol Π so as to violate at least one of the requirements given in Section III-B. Doing this for requirements (i) or (ii) would lead to the trust assumption of a checkpointing service, or to another model-restricting limitation bounding the transaction fees throughout the protocol execution to insignificant amounts. We hence rather focus on two alternative, algorithmic mitigations, aiming to violate requirements (iii) and (iv).

a) Minimum chain density in the time domain: A first observation that can be used to mitigate a stake-bleeding attack is that a blockchain produced by the attack of Section III-A has a period over which its density is rather low. We clarify the concept of density next. In all PoS protocols, it is allowed that some of the parties may not be online all the time (despite the fact that they are elected to participate in the protocol). The absence of their participation is something that can be detected by observing the blockchain. For instance, in the case of Ouroboros [KRDO17], there will be a number of "slots" that are left empty, without a corresponding block; analogous observable quantities exist in the other protocols as well. This allows the protocol to detect and weed out blockchains with this deficiency, which distinguishes them from the correct blockchain produced by honest parties. We do not pursue this direction further here.

b) Context-sensitive transactions: A fundamental feature of a stake-bleeding attack is taking a transaction "out of context", so to speak, i.e., copying it from the honestly maintained blockchain to the private blockchain maintained by the attacker. A very simple and effective way to prevent this from happening is to include "context", i.e., the hash of a recent block, in each transaction.
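Both mitigations admit compact sketches. The data layout below (blocks as dicts carrying a hash, a fixed recency horizon) is an illustrative assumption, not taken from any concrete protocol.

```python
def chain_density(chain, slots_elapsed):
    """Fraction of elapsed slots that actually carry a block.  A chain grown
    privately by a minority of stakeholders shows a telltale low-density
    period, which an honest party can use to weed it out."""
    return len(chain) / slots_elapsed if slots_elapsed else 1.0

def make_tx(sender, receiver, amount, recent_block_hash):
    """Context-sensitive transaction: it commits to the hash of a recent
    block on the chain it was created for."""
    return {"from": sender, "to": receiver, "amount": amount,
            "context": recent_block_hash}

def tx_valid_on(tx, chain, horizon=16):
    """Validity requires the referenced hash to appear among the last
    `horizon` blocks; a transaction lifted from the honest chain therefore
    cannot be replayed on the attacker's private chain, whose blocks have
    different hashes."""
    recent_hashes = {block["hash"] for block in chain[-horizon:]}
    return tx["context"] in recent_hashes

honest_chain = [{"hash": "h1"}, {"hash": "h2"}]
private_chain = [{"hash": "p1"}]
tx = make_tx("alice", "bob", 5, recent_block_hash="h2")
```

With this check in place, only adversarially generated transactions remain transferable to the private chain, which is exactly the neutralization argued for in the text.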
This idea has been discussed in the PoS setting at least as early as Larimer's work in [Lar13] (see also [Lar14]), who introduced it to ensure that an attacker's secret chain cannot take advantage of honest parties' transactions to increase the total "coin-age" value of the secret chain they maintain ("coin-age destroyed" was a mechanism for PoS proposed there). Here we use it for a different objective: to prevent transaction fees from "bleeding" to the malicious parties over a period of time in a private chain. Using context sensitivity, the validity of a transaction would require the presence of the referenced hash in the blockchain. This would only allow adversarially generated transactions to be transferable to the private blockchain, hence completely neutralizing the attack (as there would be no "bleeding" of honest stake anymore into the private blockchain).

We remark that a seemingly similar mitigation has been proposed in [BPS16] to resolve a different issue, where an attacker attempts to fork the blockchain in order to collect transaction fees that have been issued in the last few blocks produced by the honest participants. The mitigation suggested there requires the transaction to include a recent block index, and is insufficient to protect against stake bleeding. In contrast, context sensitivity as defined in this section requires the transaction to include the hash of a recent block.

V. Conclusions

We have presented a new class of long-range attacks, called stake-bleeding attacks, that are applicable to all investigated eventual-consensus PoS protocols when operated without any model-restricting assumptions. A stake-bleeding attack would require years of blockchain history to be successful given the current statistical profile of cryptocurrencies, and hence such attacks are not of immediate concern. Nevertheless, they point to an important design consideration from the cryptographic perspective.
They show how it is possible to mount a long-range attack without relying on posterior corruptions, in fact without exploiting adaptivity of corruptions whatsoever. From this it is also easily inferred that key-evolving cryptography by itself is not a sufficient mitigation for long-range attacks, and it is important to investigate additional algorithmic mitigations that thwart long-range attacks in trustless, permissionless environments without resorting to checkpointing or other model-restricting assumptions.

Acknowledgment. Aggelos Kiayias was partly supported by the Horizon 2020 research and innovation programme, project PRIViLEDGE. Alexander Russell was partly supported by the National Science Foundation.

References

[BG17] Vitalik Buterin and Virgil Griffith. Casper the friendly finality gadget. CoRR, 2017.
[BGM16] Iddo Bentov, Ariel Gabizon, and Alex Mizrahi. Cryptocurrencies without proof of work. CoRR, 2016.
[BPS16] Iddo Bentov, Rafael Pass, and Elaine Shi. Snow White: provably secure proofs of stake. Cryptology ePrint Archive, 2016.
[But14] Vitalik Buterin. Long-range attacks: the serious problem with adaptive proof of work. https://blog.ethereum.org/…/long-range-attacks-the-serious-problem-with-adaptive-proof-of-work/, 2014.
[Com14] The Nxt community. Nxt whitepaper. https://bravenewcoin.com/assets/whitepapers/nxtwhitepaper-v…-rev….pdf, July 2014.
[DGKR17] Bernardo David, Peter Gaži, Aggelos Kiayias, and Alexander Russell. Ouroboros Praos: an adaptively-secure, semi-synchronous proof-of-stake protocol. Cryptology ePrint Archive, 2017. To appear at EUROCRYPT 2018.
[Fra06] Matt Franklin. A survey of key evolving cryptosystems. Int. J. Security and Networks, 2006.
[GKL15] Juan A. Garay, Aggelos Kiayias, and Nikos Leonardos. The Bitcoin backbone protocol: analysis and applications.
In Elisabeth Oswald and Marc Fischlin, editors, EUROCRYPT 2015, Part II, LNCS. Springer, Heidelberg, April 2015. Updated version at the Cryptology ePrint Archive.
[KN12] Sunny King and Scott Nadal. PPCoin: peer-to-peer crypto-currency with proof-of-stake. https://peercoin.net/assets/paper/peercoin-paper.pdf, August 2012.
[KRDO17] Aggelos Kiayias, Alexander Russell, Bernardo David, and Roman Oliynykov. Ouroboros: a provably secure proof-of-stake blockchain protocol. In Jonathan Katz and Hovav Shacham, editors, CRYPTO 2017, Part I, LNCS. Springer, Heidelberg, August 2017.
[Lar13] Dan Larimer. Transactions as proof-of-stake. https://bravenewcoin.com/assets/uploads/transactionsasproofofstake….pdf, November 2013.
[Lar14] Dan Larimer. Delegated proof-of-stake consensus. https://bitshares.org/technology/delegated-proof-of-stake-consensus/.
[Mic16] Silvio Micali. ALGORAND: the efficient and democratic ledger. CoRR, 2016.
[Poe14] Andrew Poelstra. Distributed consensus from proof of stake is impossible. https://download.wpsoftware.net/bitcoin/old-pos.pdf, May 2014.
[Poe15] Andrew Poelstra. On stake and consensus. https://download.wpsoftware.net/bitcoin/pos.pdf, March 2015.
[PSS17] Rafael Pass, Lior Seeman, and Abhi Shelat. Analysis of the blockchain protocol in asynchronous networks. In Jean-Sébastien Coron and Jesper Buus Nielsen, editors, EUROCRYPT 2017, Part II, LNCS. Springer, Heidelberg, May 2017.
[SL15] Fabian Schuh and Daniel Larimer. BitShares 2.0: general overview. https://bravenewcoin.com/assets/whitepapers/bitshares-general.pdf, December 2015.
GitHub - roidrage/lograge: An attempt to tame Rails' default policy to log everything.
www.paperplanes.de/…/on-notifications-logsubscribers-and-bringing-sanity-to-rails-logging.html · MIT License
# Lograge - Taming Rails' Default Request Logging

Lograge is an attempt to bring sanity to Rails' noisy and unusable, unparsable and, in the context of running multiple processes and servers, unreadable default logging output. Rails' default approach of logging everything is great during development; it's terrible when running it in production. It pretty much renders Rails logs useless to me.

Lograge is a work in progress. I appreciate constructive feedback and criticism. My main goal is to improve Rails' logging and to show people that they don't need to stick with its defaults anymore if they don't want to.

Instead of trying to solve the problem of having multiple lines per request by switching Rails' logger for something that outputs syslog lines or adds a request token, Lograge replaces Rails' request logging entirely, reducing the output per request to a single line with all the important information, removing all that clutter Rails likes to include and that gets mingled up so nicely when multiple processes dump their output into a single file.

Instead of having an unparsable amount of logging output like this:

```
Started GET "/" for … at …
Processing by HomeController#index as HTML
  Rendered text template within layouts/application (… ms)
  Rendered layouts/_assets.html.erb (… ms)
  Rendered layouts/_top.html.erb (… ms)
  Rendered layouts/_about.html.erb (… ms)
  Rendered layouts/_google_analytics.html.erb (… ms)
```
```
Completed 200 OK in … ms (Views: … ms | ActiveRecord: … ms)
```

you get a single line with all the important information, like this:

```
method=GET path=/jobs/….json format=json controller=JobsController action=show status=… duration=… view=… db=…
```

The second line is easy to grasp with a single glance and still includes all the relevant information as simple key-value pairs. The syntax is heavily inspired by the log output of the Heroku router. It doesn't include any timestamp by default; instead it assumes you use a proper log formatter.

## Installation

In your Gemfile:

```ruby
gem "lograge"
```

Enable it in an initializer or the relevant environment config:

```ruby
# config/initializers/lograge.rb
# or
# config/environments/production.rb
Rails.application.configure do
  config.lograge.enabled = true
end
```

If you're using Rails 5's API-only mode and inherit from ActionController::API, you must define it as the controller base class which Lograge will patch:

```ruby
# config/initializers/lograge.rb
Rails.application.configure do
  config.lograge.base_controller_class = 'ActionController::API'
end
```

If you use multiple base controller classes in your application, specify an array:

```ruby
# config/initializers/lograge.rb
Rails.application.configure do
  config.lograge.base_controller_class = ['ActionController::API', 'ActionController::Base']
end
```

You can also add a hook for your own custom data:

```ruby
# config/environments/staging.rb
Rails.application.configure do
  config.lograge.enabled = true

  # custom_options can be a lambda or hash
  # if it's a lambda then it must return a hash
  config.lograge.custom_options = lambda do |event|
    # capture some specific timing values you are interested in
    { :name => "value", :timing => some_float.round(2), :host => event.payload[:host] }
  end
end
```

Or you can add a timestamp:

```ruby
Rails.application.configure do
  config.lograge.enabled = true

  # add time to lograge
  config.lograge.custom_options = lambda do |event|
    { time: Time.now }
  end
end
```

You can also keep the original (and verbose) Rails logger by following
this configuration:

```ruby
Rails.application.configure do
  config.lograge.keep_original_rails_log = true
  config.lograge.logger = ActiveSupport::Logger.new "#{Rails.root}/log/lograge_#{Rails.env}.log"
end
```

You can then add custom variables to the event to be used in custom_options (available via the event.payload hash, which has to be processed in the custom_options method to be included in the log output, see above):

```ruby
# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  def append_info_to_payload(payload)
    super
    payload[:host] = request.host
  end
end
```

Alternatively, you can add a hook for accessing controller methods directly (e.g. request and current_user). This hash is merged into the log data automatically:

```ruby
Rails.application.configure do
  config.lograge.enabled = true
  config.lograge.custom_payload do |controller|
    {
      host: controller.request.host,
      user_id: controller.current_user.try(:id)
    }
  end
end
```

To further clean up your logging, you can also tell Lograge to skip log messages meeting given criteria. You can skip log messages generated from certain controller actions, or you can write a custom handler to skip messages based on data in the log event:

```ruby
# config/environments/production.rb
Rails.application.configure do
  config.lograge.enabled = true
  config.lograge.ignore_actions = ['HomeController#index', 'AController#an_action']
  config.lograge.ignore_custom = lambda do |event|
    # return true here if you want to ignore based on the event
  end
end
```

Lograge supports multiple output formats. The most common is the default lograge key-value format described above. Alternatively, you can also generate JSON logs in the json_event format used by Logstash:

```ruby
# config/environments/production.rb
Rails.application.configure do
  config.lograge.formatter = Lograge::Formatters::Logstash.new
end
```

Note: when using the Logstash output, you need to add the additional gem logstash-event. You can simply add it to your Gemfile like this:

```ruby
gem "logstash-event"
```

Done.
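To make the two output styles concrete, here is a plain-Ruby sketch that renders the same event data as a default-style key-value line and as a JSON line. This is illustrative only, not Lograge's actual formatter code; the real Logstash format also adds extra fields (such as @timestamp) via the logstash-event gem.

```ruby
require "json"

# Illustrative sketch (not Lograge's real code): render the same event
# data in the default key-value style and as a JSON line.
data = {
  method: "GET",
  path: "/jobs/1.json",
  controller: "JobsController",
  action: "show",
  status: 200,
  duration: 58.33
}

# Key-value style, like the default formatter's output
key_value = data.map { |k, v| "#{k}=#{v}" }.join(" ")
# => "method=GET path=/jobs/1.json controller=JobsController action=show status=200 duration=58.33"

# JSON style, roughly what a JSON formatter produces
json_line = JSON.generate(data)
```

Both lines carry the same information; the only difference is how a downstream log consumer parses them.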
The available formatters are:

```ruby
Lograge::Formatters::Lines.new
Lograge::Formatters::Cee.new
Lograge::Formatters::Graylog2.new
Lograge::Formatters::KeyValue.new  # default lograge format
Lograge::Formatters::Json.new
Lograge::Formatters::Logstash.new
Lograge::Formatters::LTSV.new
Lograge::Formatters::Raw.new       # returns a ruby hash object
```

In addition to the formatters, you can manipulate the data yourself by passing an object which responds to #call:

```ruby
# config/environments/production.rb
Rails.application.configure do
  config.lograge.formatter = ->(data) { "Called #{data[:controller]}" } # data is a ruby hash
end
```

## Internals

Thanks to the notification system that was introduced in Rails 3, replacing the logging is easy. Lograge unhooks all subscriptions from ActionController::LogSubscriber and ActionView::LogSubscriber, and hooks in its own log subscription, listening for only two events: process_action and redirect_to (in the case of standard controller logs). It makes sure that only subscriptions from those two classes are removed. If you happened to hook in your own, they'll be safe.

Unfortunately, when a redirect is triggered by your application's code, ActionController fires two events: one for the redirect itself, and another one when the request is finished. The final event doesn't include the redirect, so Lograge stores the redirect URL as a thread-local attribute and refers to it in process_action.

The event itself contains most of the relevant information to build up the log line, including view processing and database access times.

While the LogSubscribers encapsulate most logging pretty nicely, there are still two lines that show up no matter what. The first is the line that's output for every Rails request, you know, this one:

```
Started GET "/" for 127.0.0.1 at 2012-03-10 14:28:14 +0100
```

And the verbose output coming from rack-cache:

```
cache: [GET /] miss
```

Both are independent of the LogSubscribers, and both need to be shut up using different means.
For the first one, the starting line of every Rails request log, Lograge replaces code in Rails::Rack::Logger to remove that particular log line. It's not pretty, but that line is just unnecessary output that would otherwise clutter the log files. Maybe a future version of Rails will make this log line an event as well.

To remove rack-cache's output (which is only enabled if caching in Rails is enabled), Lograge disables verbosity for rack-cache, which is unfortunately enabled by default.

There, a single line per request. Beautiful.

## Action Cable

Support for Action Cable logs arrived in a later Lograge release. This proved to be a particular challenge, since the framework code is littered with multiple (and seemingly random) logger calls in a number of internal classes. In order to deal with this, the default Action Cable logger was silenced. As a consequence, calling logger in user-defined connection or channel classes has no effect; Rails.logger (or any other logger instance) has to be used instead.

Additionally, while standard controller logs rely on the process_action and redirect_to instrumentations only, Action Cable messages are generated from multiple events: perform_action, subscribe, unsubscribe, connect, and disconnect. perform_action is the only one included in the actual Action Cable code; the others have been added by monkey-patching the ActionCable::Channel::Base and ActionCable::Connection::Base classes.

## What it doesn't do

Lograge is opinionated, very opinionated. If the stuff below doesn't suit your needs, it may not be for you.

Lograge removes ActionView logging, which also includes rendering times for partials. If you're into those, Lograge is probably not for you. In my honest opinion, those rendering times don't belong in the log file; they should be collected in a system like New Relic, Librato Metrics or some other metrics service that allows graphing rendering percentiles. I assume this for everything that represents a moving target.
That kind of data is better off being visualized in graphs than dumped (and ignored) in a log file.

Lograge doesn't yet log the request parameters. This is something I'm actively contemplating, mainly because I want to find a good way to include them, a way that fits in with the general spirit of the log output generated by Lograge. However, the payload does already contain the params hash, so you can easily add it in manually using custom_options:

```ruby
# config/environments/production.rb
YourApp::Application.configure do
  config.lograge.enabled = true
  config.lograge.custom_options = lambda do |event|
    exceptions = %w(controller action format id)
    { params: event.payload[:params].except(*exceptions) }
  end
end
```

## FAQ

### Logging errors / exceptions

Our first recommendation is that you use exception tracking services built for purpose ;) If you absolutely must log exceptions in the single-line format, you can do something similar to this example:

```ruby
# config/environments/production.rb
YourApp::Application.configure do
  config.lograge.enabled = true
  config.lograge.custom_options = lambda do |event|
    {
      exception: event.payload[:exception],              # ["ExceptionClass", "the message"]
      exception_object: event.payload[:exception_object] # the exception instance
    }
  end
end
```

The :exception is just the basic class and message, whereas the :exception_object is the actual exception instance. You can use both or either. Be mindful when including this: you will probably want to cherry-pick particular attributes and almost definitely want to join the backtrace into something without newline characters.

### Handle ActionController::RoutingError

Add a `get '*unmatched_route', to: 'application#route_not_found'` rule to the end of your routes.rb, then add a new controller action in your application_controller.rb:

```ruby
def route_not_found
  render 'error_pages/404', status: :not_found
end
```

## Contributing

See the CONTRIBUTING.md file for further information.

## License

MIT. Code extracted from Travis CI. (c) Mathias Meyer. See LICENSE.txt for details.
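The internals described above all hang off event subscriptions. As a rough sketch of that flow, here is a toy notifier standing in for ActiveSupport::Notifications; the Notifier class below is invented for illustration, and only process_action corresponds to a real instrumentation name.

```ruby
# Toy stand-in for ActiveSupport::Notifications, to illustrate how a
# subscriber like Lograge turns instrumentation events into log lines.
class Notifier
  def initialize
    @subscribers = Hash.new { |hash, key| hash[key] = [] }
  end

  # Register a block to run whenever `event` is instrumented.
  def subscribe(event, &block)
    @subscribers[event] << block
  end

  # Fire an event, handing the payload to every subscriber.
  def instrument(event, payload)
    @subscribers[event].each { |block| block.call(payload) }
  end
end

notifier = Notifier.new
lines = []

# A Lograge-like subscriber: one log line per process_action event.
notifier.subscribe("process_action") do |payload|
  lines << "controller=#{payload[:controller]} action=#{payload[:action]} status=#{payload[:status]}"
end

notifier.instrument("process_action", controller: "HomeController", action: "index", status: 200)
# lines now holds ["controller=HomeController action=index status=200"]
```

The real system passes richer payloads (timings, formats, the params hash) and supports unsubscribing, which is what lets Lograge remove the default LogSubscribers cleanly.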
About: an attempt to tame Rails' default policy to log everything. www.paperplanes.de/ / / /on-notifications-logsubscribers-and-bringing-sanity-to-rails-logging.html

erambler

Home | About | Series | Tags | Talks | RDM Resources

A blog about research communication & higher education & open culture & technology & making & librarianship & stuff.

mxadm: a small CLI Matrix room admin tool
Date: - - . Tags: [rust] [matrix] [open source] [communication]
I've enjoyed learning Rust (the programming language) recently, but having only really used it for solving programming puzzles, I've been looking for an excuse to use it for something more practical. At the same time, I've been using and learning about Matrix (the chat/messaging platform), and running some small rooms there I've been a bit frustrated that some pretty common admin tasks don't have a good user interface in any of the available clients. Read more...

Comments are back
Date: - - . Tags: [meta] [matrix] [fediverse]
I forgot to mention it at the time, but I've added "normal" comments back to the site, as you'll see below and on most other pages. In place of the Disqus comments I had before, I'm now using Cactus Comments, which is open source and self-hostable (though I'm currently not doing that). If you've read my previous post about Matrix self-hosting, you might be interested to know that Cactus uses Matrix rooms for data storage and synchronisation, and I can moderate and reply to comments directly from my Matrix client. Read more...
Intro to the Fediverse
Date: - - . Tags: [fediverse] [social media] [twitter]
Wow, it turns out to be years since I wrote this beginners' guide to Twitter. Things have moved on a loooooong way since then. Far from being the interesting, disruptive technology it was back then, Twitter has become part of the mainstream, the establishment. Almost everyone and everything is on Twitter now, which has both pros and cons. So what's the problem? It's now possible to follow all sorts of useful information feeds, from live updates on transport delays to your favourite sports team's play-by-play performance to an almost infinite number of cat pictures. Read more...

Collaborations Workshop: collaborative ideas & hackday
Date: - - . Series: Collaborations Workshop. Tags: [technology] [conference] [ssi] [research] [disability] [equality, diversity & inclusion]
My last post covered the more "traditional" lectures-and-panel-sessions approach of the first half of the SSI Collaborations Workshop. The rest of the workshop was much more interactive, consisting of a discussion session, a collaborative ideas session, and a whole-day hackathon! The discussion session on day one had us choose a topic (from a list of topics proposed leading up to the workshop) and join a breakout room for that topic, with the aim of producing a "speed blog" by the end of the allotted minutes. Read more...

Collaborations Workshop: talks & panel session
Date: - - . Series: Collaborations Workshop. Tags: [technology] [conference] [ssi] [research] [disability] [equality, diversity & inclusion]
I've just finished attending (online) the three days of this year's SSI Collaborations Workshop (CW for short), and once again it's been a brilliant experience, as well as mentally exhausting, so I thought I'd better get a summary down while it's still fresh in my mind.
Collaborations Workshop is, as the name suggests, much more focused on facilitating collaborations than a typical conference, and has settled into a structure that starts off with longer keynotes and lectures, and progressively gets more interactive, culminating with a hack day on the third day. Read more...

Date: - - . Tags: [meta] [design]
I've decided to try switching this website back to using Hugo to manage the content and generate the static HTML pages. I've been on the Python-based Nikola for a few years now, but recently I've been finding it quite slow and very confusing to understand how to do certain things. I used Hugo recently for the GLAM Data Science Network website and found it had come on a lot since the last time I was using it, so I thought I'd give it another go, and redesign this site to be a bit more minimal at the same time. The theme is still a work in progress, so it'll probably look a bit rough around the edges for a while, but I think I'm happy enough to publish it now. When I get round to it I might publish some more detailed thoughts on the design.

Ideas for accessible communications
Date: - - . Tags: [stuff] [accessibility] [ableism]
The disability support network at work recently ran a survey on "accessible communications", to develop guidance on how to make communications (especially internal staff comms) more accessible to everyone. I grabbed a copy of my submission because I thought it would be useful to share more widely, so here it is. Please note that these are based on my own experiences only. I am in no way suggesting that these are the only things you would need to do to ensure your communications are fully accessible. Read more...

Matrix self-hosting
Date: - - . Tags: [technology] [matrix] [communication] [self-hosting] [dweb]
I started running my own Matrix server a little while ago. Matrix is something rather cool, a chat system similar to IRC or Slack, but open and federated.
Open in that the standard is available for anyone to view, and the reference implementations of server and client are open source, along with many other clients and a couple of nascent alternative servers. Federated in that, like email, it doesn't matter what server you sign up with: you can talk to users on your own or any other server. Read more...

What do you miss least about pre-lockdown life?
Date: - - . Tags: [stuff] [reflection] [pandemic]
@janethughes on Twitter: "What do you miss the least from pre-lockdown life? I absolutely do not miss wandering around the office looking for a meeting room for a confidential call or if I hadn't managed to book a room in advance. Let's never return to that joyless frustration, hey?" : am · Feb ,
After seeing Terence Eden taking Janet Hughes' tweet from earlier this month as a writing prompt, I thought I might do the same. Read more...

Remarkable blogging
Date: - - . Tags: [technology] [writing] [gadgets]
And the handwritten blog saga continues, as I've just received my new reMarkable tablet, which is designed for reading, writing and nothing else. It uses a super-responsive e-ink display, and writing on it with a stylus is a dream. It has a slightly rough texture with just a bit of friction that makes my writing come out a lot more legibly than on a slippery glass touchscreen. If that was all there was to it, I might not have wasted my money, but it turns out that it runs on Linux and the makers have wisely decided not to lock it down but to give you full root access. Read more...

Me elsewhere :: Keyoxide | Keybase | Mastodon | Matrix | Twitter | GitHub | GitLab | ORCID | PyPI | LinkedIn
© Jez Cope | Built by: Hugo | Theme: Mnemosyne
Except where noted, this work is licensed under a Creative Commons Attribution International License.
Pinboard (items tagged code4lib) https://pinboard.in/t:code4lib/

Delivering fast and light applications with Save-Data
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/save-data (jasonclark)

```javascript
if ("connection" in navigator) {
  if (navigator.connection.saveData === true) {
    // Implement data saving operations here.
  }
}
```

Tags: _todo performance code4lib article energy green web green-digital-libraries javascript

How web content can affect power usage | WebKit
https://webkit.org/blog/ /how-web-content-can-affect-power-usage/ (jasonclark)
Tags: performance code4lib article energy green web green-digital-libraries javascript

anelki.net | /home/anelki
https://anelki.net/ (edsu)
    personal reflections in challenging times.
    code lib people https://pinboard.in/u:edsu/b: e aabf b / code lib - youtube - - t : : + : https://www.youtube.com/channel/ucnmkmdfo-ul-iabntouzh-q/videos esperr code lib conferences https://pinboard.in/ https://pinboard.in/u:esperr/b:cb d da/ ( ) https://twitter.com/rudokemper/status/ /photo/ - - t : : + : https://twitter.com/rudokemper/status/ /photo/ bsscdt rt @rudokemper: floored and honored to have been invited to give a keynote for the #c l #code lib conference next monday. i can't wait to share about our work building open-source tech for communities to map oral histories, and how my journey started in the library + archive space! @code lib c l code lib https://twitter.com/ https://pinboard.in/u:bsscdt/b: a fefac / untitled (https://d keuthy s c .cloudfront.net/static/ems/upload/files/code lib _discogs_blacklight.pdf) - - t : : + : https://d keuthy s c .cloudfront.net/static/ems/upload/files/code lib _discogs_blacklight.pdf rybesh rt @sf : really happy to share, “dynamic integration of discogs data within a blacklight catalog” from now on i’m going to ask myself, “can this talk be a poster?” #code lib code lib https://twitter.com/ https://pinboard.in/u:rybesh/b: d f f/ the code lib journal – advancing arks in the historical ontology space - - t : : + : https://journal.code lib.org/articles/ geephroh code lib digitallibraries digitalpreservation data ontology identifiers digitalhumanities ark computationalarchivalscience cas archives journalarticle https://pinboard.in/ https://pinboard.in/u:geephroh/b: e caf / the code lib journal – managing an institutional repository workflow with gitlab and a folder-based deposit system - - t : : + : https://journal.code lib.org/articles/ aarontay managing an institutional repository workflow with gitlab and a folder-based deposit system by whitney r. johnson-freeman, @vphill, and kristy k. phillips #code lib journal issue . code lib https://twitter.com/ https://pinboard.in/u:aarontay/b: dfc c cda/ listserv . 
- code lib archives - - t : : + : https://lists.clir.org/cgi-bin/wa?a =code lib;e bc . miaridge rt @kiru: i forgot to post the call earlier: the code lib journal () is looking for volunteers to join its editorial committee. deadline: oct. #code lib code lib https://twitter.com/ https://pinboard.in/u:miaridge/b:e e fb / - c l [ ] future role of libraries in researcher workflows - google slides - - t : : + : https://t.co/jcoe mvhd elibtronic research-lifecycle code lib publish scholarly-communication https://pinboard.in/u:elibtronic/b: b f a/ twitter - - t : : + : https://twitter.com/i/web/status/ aarontay new issue of the the #code lib journal published. some terrific looking papers, including a review of pids for heri… code lib https://twitter.com/ https://pinboard.in/u:aarontay/b: b b d/ ( ) https://journal.code lib.org/ - - t : : + : https://journal.code lib.org/ miaridge rt @kiru: i am very happy to announce the publication of the @code lib journal issue # : webscraping… code lib https://twitter.com/ https://pinboard.in/u:miaridge/b: f c d d c/ the code lib journal – column: we love open source software. no, you can’t have our code - - t : : + : https://journal.code lib.org/articles/ pfhyper librarians are among the strongest proponents of open source software. paradoxically, libraries are also among the least likely to actively contribute their code to open source projects. this article identifies and discusses six main reasons this dichotomy exists and offers ways to get around them. code lib library libt opensource finalproject https://pinboard.in/ https://pinboard.in/u:pfhyper/b: da d a b / the code lib journal – barriers to initiation of open source software projects in libraries - - t : : + : https://journal.code lib.org/articles/ pfhyper libraries share a number of core values with the open source software (oss) movement, suggesting there should be a natural tendency toward library participation in oss projects. 
however dale askey’s code lib column entitled “we love open source software. no, you can’t have our code,” claims that while libraries are strong proponents of oss, they are unlikely to actually contribute to oss projects. he identifies, but does not empirically substantiate, six barriers that he believes contribute to this apparent inconsistency. in this study we empirically investigate not only askey’s central claim but also the six barriers he proposes. in contrast to askey’s assertion, we find that initiation of and contribution to oss projects are, in fact, common practices in libraries. however, we also find that these practices are far from ubiquitous; as askey suggests, many libraries do have opportunities to initiate oss projects, but choose not to do so. further, we find support for only four of askey’s six oss barriers. thus, our results confirm many, but not all, of askey’s assertions. code lib library libt opensource finalproject https://pinboard.in/ https://pinboard.in/u:pfhyper/b: f d e / twitter - - t : : + : https://twitter.com/i/web/status/ jbfink rt @kiru: the #code lib journal's issue ( / ) has been just published: . worldcat search api, go… code lib https://twitter.com/ https://pinboard.in/u:jbfink/b:d cd f e / twitter - - t : : + : https://twitter.com/i/web/status/ jbfink rt @mjingle: who's excited for the next #code lib conference?! it will be in pittsburgh, pa from march - . 
is your org interes… code lib https://twitter.com/ https://pinboard.in/u:jbfink/b: defc eb / attempto project - - t : : + : http://attempto.ifi.uzh.ch/site/ blebo nlp basic cnl computationallinguistics controlledlanguage controlled_language code lib compsci english knowledgerepresentation https://pinboard.in/u:blebo/b: a b f a fd/ twitter - - t : : + : https://twitter.com/i/web/status/ danbri when our grandchildren ask about the great #code lib irc battle of the tisane, we will serve them both tea and coff… code lib https://twitter.com/ https://pinboard.in/u:danbri/b: ce a e/ code lib recap – bloggers! - - t : : + : https://saaers.wordpress.com/ / / /code lib- -recap/ geephroh code lib digitallibraries research saa archives https://pinboard.in/ https://pinboard.in/u:geephroh/b: afd / digital technologies development librarian | nc state university libraries - - t : : + : https://www.lib.ncsu.edu/jobs/ehra/dtdl cdmorris we're hiring a digital technologies development librarian @ncsulibraries ! #job #libjobs #code lib #dlf #libtech dlf libtech code lib job libjobs https://twitter.com/ https://pinboard.in/u:cdmorris/b:cf e f / twitter - - t : : + : https://twitter.com/i/web/status/ jbfink ) all the men who want to preserve the idea of a #code lib discussion space as one that's free of such topics as s… code lib https://twitter.com/ https://pinboard.in/u:jbfink/b:d f / google refine cheat sheet (code lib) - - t : : + : https://code libtoronto.github.io/ - - -access/googlerefinecheatsheets.pdf psammead openrefine code lib how-to cheatsheet https://pinboard.in/ https://pinboard.in/u:psammead/b:d c d / untitled (https://www.youtube.com/watch?v=icblvnchpnw) - - t : : + : https://www.youtube.com/watch?v=icblvnchpnw cdmorris code lib southeast happening today! live stream starting at : am eastern. 
#code libse #code lib code libse code lib https://twitter.com/ https://pinboard.in/u:cdmorris/b:d cf c/ twitter - - t : : + : https://twitter.com/i/web/status/ lbjay it occurs to me the #code lib statement of support for chris bourg, , offers a better model… code lib https://twitter.com/ https://pinboard.in/u:lbjay/b:d d c f/ github - code lib/c l -keynote-statement: code lib community statement in support of chris bourg - - t : : + : https://github.com/code lib/c l -keynote-statement lbjay it occurs to me the #code lib statement of support for chris bourg, , offers a better model… code lib https://twitter.com/ https://pinboard.in/u:lbjay/b: b ef c / twitter - - t : : + : https://twitter.com/i/web/status/ jbfink now that the #code lib discord is up & running, i'm contemplating leaving slack overall, with exception for plannin… code lib https://twitter.com/ https://pinboard.in/u:jbfink/b:c d f ddd d/ ( ) https://twitter.com/palcilibraries/status/ /photo/ - - t : : + : https://twitter.com/palcilibraries/status/ /photo/ cdmorris talking privacy and ra at #c l with dave lacy from @templelibraries #code lib c l code lib https://twitter.com/ https://pinboard.in/u:cdmorris/b: f c c f / scope: an access interface for dips from archivematica - - t : : + : https://github.com/cca-public/dip-access-interface sdellis archives code lib https://pinboard.in/ https://pinboard.in/u:sdellis/b: ef d c / review, appraisal and triage of mail (ratom) - - t : : + : http://ratom.web.unc.edu/ sdellis archives code lib https://pinboard.in/ https://pinboard.in/u:sdellis/b: cdd / national web privacy forum - msu library | montana state university - - t : : + : http://www.lib.montana.edu/privacy-forum/ sdellis privacy analytics code lib https://pinboard.in/ https://pinboard.in/u:sdellis/b: b db e / the code lib journal - - t : : + : https://journal.code lib.org/ ratledge code lib library_technology journal journals_code lib https://pinboard.in/ https://pinboard.in/u:ratledge/b: a f c b / code 
lib | we are developers and technologists for libraries, museums, and archives who are dedicated to being a diverse and inclusive community, seeking to share ideas and build collaboration. - - t : : + : https://code lib.org/ ratledge code lib https://pinboard.in/ https://pinboard.in/u:ratledge/b: cfc ccb / twitter - - t : : + : https://twitter.com/i/web/status/ verwinv ne'er had the pleasure to attend #code lib myself ... but if you're thinking about it but can't afford to go - ther… code lib https://twitter.com/ https://pinboard.in/u:verwinv/b:f ceb/ twitter - - t : : + : https://twitter.com/justindlc/status/ /photo/ librariesval rt @justindlc: pre-conference meetup at ormsby's for code lib southeast ! #code libse #code lib code lib code libse https://twitter.com/ https://pinboard.in/u:librariesval/b: c ad b / twitter - - t : : + : https://twitter.com/i/web/status/ jbfink thanks @lydia_zv @redlibrarian and jolene (are you on twitter, i can find you?) for a great #code lib day! it was… code lib https://twitter.com/ https://pinboard.in/u:jbfink/b: faa e bad/ twitter - - t : : + : https://twitter.com/i/web/status/ jbfink my slides and speakers notes from #code lib #c ln on ursula franklin's "real world of technology" (which i really… code lib c ln https://twitter.com/ https://pinboard.in/u:jbfink/b:a ed a fc / twitter - - t : : + : https://twitter.com/i/web/status/ jbfink in an unfortunate timing, it appears the code lib wiki is down the first day of #code lib north - there's a cache o… code lib https://twitter.com/ https://pinboard.in/u:jbfink/b: edcfb c/ twitter - - t : : + : https://twitter.com/i/web/status/ jbfink rt @kiru: just off the (word)press: the #code lib journal issue is available: . great articles writ… code lib https://twitter.com/ https://pinboard.in/u:jbfink/b:db c bb a / the code lib journal - - t : : + : http://journal.code lib.org/ jbfink rt @kiru: just off the (word)press: the #code lib journal issue is available: . 
great articles writ… code lib https://twitter.com/ https://pinboard.in/u:jbfink/b: be / twitter - - t : : + : https://twitter.com/gitwishes/status/ lbjay this is all of #code lib working on @bot lib circa . code lib https://twitter.com/ https://pinboard.in/u:lbjay/b: e b b / twitter - - t : : + : https://twitter.com/gmcharlt/status/ danbri this is fabulous news for the cultural heritage open source world. big ups to @code lib and @clirdlf! #code lib code lib https://twitter.com/ https://pinboard.in/u:danbri/b: cbe ff f / twitter - - t : : + : https://twitter.com/i/web/status/ miaridge rt @achdotorg: we too co-sign the #code lib community statement in support of @mchris duke. we continue to admire an honor our col… code lib https://twitter.com/ https://pinboard.in/u:miaridge/b:cf f d e / code lib/c l -keynote-statement: code lib community statement in support of chris bourg - - t : : + : https://github.com/code lib/c l -keynote-statement jbfink code lib github https://pinboard.in/ https://pinboard.in/u:jbfink/b: b f bd / code lib community statement in support of chris bourg | c l -keynote-statement - - t : : + : https://code lib.github.io/c l -keynote-statement/ wragge rt @clirdlf: we’re proud to stand with the #code lib community in support of #c l keynoter @mchris duke: code lib c l https://twitter.com/ https://pinboard.in/u:wragge/b:d e b e / matthew reidsma : auditing algorithms - - t : : + : https://matthew.reidsrow.com/talks/ malantonio
    talks about libraries, technology, and the web by matthew reidsma.
algorithms bias search libraries technology code lib code lib- https://pinboard.in/u:malantonio/b: dd c f /
- for the love of baby unicorns: my code lib keynote | feral librarian https://chrisbourg.wordpress.com/ / / /for-the-love-of-baby-unicorns-my-code lib- -keynote/ (petej; tags: code lib diversity technology libraries inclusion mansplaining) https://pinboard.in/u:petej/b: d e f /
- jira for archives - google slides https://docs.google.com/presentation/d/ uwywg -nt qjm-j haavsoh ikzucax efbnlcy /edit#slide=id.g a ccaec_ _ (malantonio: "see https://youtu.be/ cno sernxi?t= h m s for presentation"; tags: code lib code lib- libraries work-life) https://pinboard.in/u:malantonio/b: fc b e /
- twitter https://twitter.com/justin_littman/status/ /photo/ (aarontay: "rt @justin_littman: peer review of my #code lib poster on 'where to get twitter data for academic research.'") https://pinboard.in/u:aarontay/b:c c e d/
- availability calendar - kalorama guest house https://secure.rezovation.com/reservations/availabilitycalendar.aspx?s=ut fw wid (skorasaurus; tags: kalorama guest house code lib) https://pinboard.in/u:skorasaurus/b: f ea /
- https://twitter.com/i/web/status/ (docdre: "rt @nowviskie: icymi: #code lib registration is open! @mmsubram & @mchris duke to keynote, reception in the great hall…") https://pinboard.in/u:docdre/b: e f cb/
- https://twitter.com/freethefiles/status/ /photo/ (verwinv: "yay! i'm presenting at #code lib. and i can say hello to walter forsberg, @hbmcd and @cristalyze!") https://pinboard.in/u:verwinv/b: bf d /
- https://twitter.com/i/web/status/ (verwinv: "registration for #code lib is now open! and it's being held in #washingtondc where our #memorylab is - so come visit…") https://pinboard.in/u:verwinv/b: bc fa c/
- code lib - washington, d.c. http:// .code lib.org/ (verwinv: "last day to vote #code lib program! don't forget 😓!") https://pinboard.in/u:verwinv/b: efcaa db a /
- presentation voting survey https://www.surveymonkey.com/r/c l -presentations (verwinv: "vote #code lib proposals rather than the presenters. new anonymity feature! check it: got until /") https://pinboard.in/u:verwinv/b: a e b /
- lodlam challenge winners https://summit .lodlam.net/ / / /lodlam-challenge-winners/ (miaridge: "rt @lodlam: #lodlam challenge prize winners congrats to dive+ (grand) & warsampo (open data) teams #dh #musetech #code lib") https://pinboard.in/u:miaridge/b:c bd /
- jobboard https://jobs.code lib.org/ (lbjay: "some heroes don't wear capes, y'all. back online and better than ever thanks to @ryanwick and @_cb_ #code lib") https://pinboard.in/u:lbjay/b:a f f b e/
- digital technologies development librarian | ncsu libraries https://www.lib.ncsu.edu/jobs/ehra/digital-technologies-development-librarian (jbfink: "rt @ronallo: job opening: digital technologies development librarian @ncsulibraries #code lib #libtechwomen know someone?") https://pinboard.in/u:jbfink/b: a bff fd/
- who's using ipfs in libraries, archives and museums - communities / libraries, archives and museums - discuss.ipfs.io https://discuss.ipfs.io/t/whos-using-ipfs-in-libraries-archives-and-museums/ (sdellis; tags: career ipfs libraries code lib) https://pinboard.in/u:sdellis/b:df f bc b/
- scott w. h. young on twitter: "slides for my talk on participatory design with underrepresented populations. thank you, #c l :) https://t.co/rvs zdv u" https://twitter.com/hei_scott/status/ (brainwane: "refers to my code lib keynote on empathy & ux yay") https://pinboard.in/u:brainwane/b: c ef cde /
- twitter https://twitter.com/i/web/status/ (lbjay: "have not read the full report but based on the abstract seems useful to those involved in the #code lib incorporati…") https://pinboard.in/u:lbjay/b: f f b b /
- resistanceisfertile - google drive https://drive.google.com/drive/folders/ b ooqctdnhjmy wn zw htxc (pmhswe; tags: code lib harlow keynote) https://pinboard.in/u:pmhswe/b: c /
- resistanceisfertile - google drive https://drive.google.com/drive/folders/ b ooqctdnhjmy wn zw htxc (markpbaggett; tags: code lib harlow keynote) https://pinboard.in/u:markpbaggett/b:cffeeb e e /
- google drive cms https://www.drivecms.xyz/ (jju; tags: webdev programming tech code lib) https://pinboard.in/u:jju/b:f af e a a /
- code lib | docker presentation - google slides https://docs.google.com/presentation/d/ p pr p dxikxjwe _sha-rsktax-hzquo-ffz-th /edit#slide=id.p (markpbaggett; tags: code lib docker) https://pinboard.in/u:markpbaggett/b:bd aec e/
- best catalog results page ever https://www.dropbox.com/s/jbxe jpbdck z/deibel-c l -best-ever.pptx (markpbaggett; tags: code lib accessibility presentation) https://pinboard.in/u:markpbaggett/b: f b fea a/
- participatory user experience design with underrepresented populations: a model for disciplined empathy http:// .code lib.org/talks/participatory-user-experience-design-with-underrepresented-populations-a-model-for-disciplined-empathy (brainwane: "am honored & humbled to see #c l glad my talk/article was helpful! wish i were at #code lib to thank you in person") https://pinboard.in/u:brainwane/b: bf ebd d d/
- twitter https://twitter.com/i/web/status/ (bsscdt: "why don't you join us in the #libux slack? sign yourself up: #litaux #ux #code lib…") https://pinboard.in/u:bsscdt/b: f bd a /
- untitled http://libux.co/slack (bsscdt: "why don't you join us in the #libux slack? sign yourself up: #litaux #ux #code lib…") https://pinboard.in/u:bsscdt/b: a bf /
- twitter https://twitter.com/jschneider/status/ /photo/ (jcarletonoh: "ten principles for user protection: #code lib #privacy #ischoolui") https://pinboard.in/u:jcarletonoh/b: bf dea b/
- technology in hostile states: ten principles for user protection | the tor blog https://blog.torproject.org/blog/technology-hostile-states-ten-principles-user-protection (jcarletonoh: "ten principles for user protection: #code lib #privacy #ischoolui") https://pinboard.in/u:jcarletonoh/b: aebf a/

ndsa coordinating committee nominations

if you are interested in joining the coordinating committee or want to nominate another member, please complete this form with the following information by : pm edt friday, august , . ndsa is accepting nominations for three-year terms, starting in .
form fields (* = required):
- nominee's name *
- nominee's institution *
- nominee's title
- nominee's email *
- nominee's website (if any)
- brief nominee-approved bio/candidate statement (up to words)
- additional statement/comments/nominator's name/contact info (if not self)

go to hellman

if you wanna end war and stuff, you gotta sing loud!

recent posts:
- the ebook turns
- open access for backlist books, part ii: the all-stars
- open access for backlist books, part i: the slush pile
- creating value with open access books
- infra-infrastructure, inter-infrastructure and para-infrastructure
- we should regulate virality
- notes on work-from-home teams
- your identity, your library
- four-leaf clovers
- responding to critical reviews
- ra : technology is not the problem.
- ra doesn't address the yet-another-wayf problem. radical inclusiveness would.
- ra 's recommended technical approach is broken by emerging browser privacy features
- ra draft rp session timeout recommendation considered harmful
- ra rp does not require secure protocols. it should.
- fudge, and open access ebook download statistics
- on the surveillance techno-state
- towards impact-based oa funding
- a milestone for gitenberg
- ebook drm and blockchain play cryptokitty and mouse.
- and the winner is...
- my face is personally identifiable information
- the vast potential for blockchain in libraries
- the shocking truth about ra : it's made of people!
- choose privacy week: your library organization is watching you
- everything* you always wanted to know about voodoo (but were afraid to ask)

counting down to in the public domain | everybody's libraries

everybody's libraries: libraries for everyone, by everyone, shared with everyone, about everything

counting down to in the public domain
posted on december , by john mark ockerbloom

we're rapidly approaching another public domain day, the day at the start of the year when a year's worth of creative work joins the public domain. this will be the third year in a row that the us will have a full crop of new public domain works (after a prior -year drought), and once again, i'm noting and celebrating works that will be entering the public domain shortly.

approaching , i wrote a one-post-a-day advent calendar for works throughout the month of december, and approaching , i highlighted a few works, and related copyright issues, in a series of december posts called vision. this year i took to twitter, making one tweet per day featuring a different work and creator using the #publicdomaindaycountdown hashtag. tweets are shorter than blog posts, but i started days out, so by the time i finish the series at the end of december, i'll have written short notices on more works than ever.

since not everyone reads twitter, and there's no guarantee that my tweets will always be accessible on that site, i'll reproduce them here. (this post has been updated to include all the tweets up to , and in has been further updated to link to copies of some of the featured works.) the tweet links have been reformatted for the blog, and a few tweets have been recombined or otherwise edited.
if you'd like to comment yourself on any of the works mentioned here, or suggest others i can feature, feel free to reply here or on twitter. (my account there is @jmarkockerbloom. you'll also find some other people tweeting on the #publicdomaindaycountdown hashtag, and you're welcome to join in as well.)

september : it's f. scott fitzgerald's birthday. his best-known book, the great gatsby, joins the us public domain days from now, along with other works with active copyrights. #publicdomaindaycountdown (links to free online books by fitzgerald here.)

september : c. k. scott-moncrieff's birthday is today. he translated proust's remembrance of things past (a controversial title, as the public domain review notes). the guermantes way, his translation of proust's rd volume, joins the us public domain in days. #publicdomaindaycountdown

september : today is t.s. eliot's birthday. his poem "the hollow men" (which ends "…not with a bang but a whimper") was first published in full in , & joins the us public domain in days. #publicdomaindaycountdown more by & about him here.

september : lady cynthia asquith, born today in , edited a number of anthologies that have long been read by children and fans of fantasy and supernatural fiction. her first major collection, the flying carpet, joins the us public domain in days. #publicdomaindaycountdown

september : as @marketplace reported tonight, agatha christie's mysteries remain popular after years. in days, her novel the secret of chimneys will join the us public domain, as will the expanded us poirot investigates collection. #publicdomaindaycountdown

september : homer hockett's and arthur schlesinger, sr.'s political and social history of the united states first came out in , and was an influential college textbook for years thereafter. the first edition joins the public domain in days. #publicdomaindaycountdown

september : inez haynes gillmore irwin died years ago this month, after a varied, prolific writing career. this blog post looks at of her books, including gertrude haviland's divorce, which joins the public domain in days. #publicdomaindaycountdown

october : for some, spooky stories and themes aren't just for october, but for the whole year. we'll be welcoming a new year's worth of weird tales to the public domain in months. see what's coming, and what's already free online, here. #publicdomaindaycountdown

october : misinformation and quackery have been a threat to public health for a long time. in weeks, the book the patent medicine and the public health, by american quack-fighter arthur j. cramp, joins the public domain. #publicdomaindaycountdown

october : sophie treadwell, born this day in , was a feminist, modernist playwright with several plays produced on broadway, but many of her works are now hard to find. her play "many mansions" joins the public domain in days. #publicdomaindaycountdown

october : it's edward stratemeyer's birthday. books of his syndicate joining the public domain in days include the debuts of don sturdy & the blythe girls, & further adventures of tom swift, ruth fielding, baseball joe, betty gordon, the bobbsey twins, & more. #publicdomaindaycountdown

october : russell wilder was a pioneering diabetes doctor, testing newly invented insulin treatments that saved many patients' lives. his book diabetes: its cause and its treatment with insulin joins the public domain in days. #publicdomaindaycountdown

october : queer british catholic author radclyffe hall is best known for the well of loneliness. hall's earlier novel a saturday life is lighter, though it has some similar themes in subtext. it joins the us public domain in days. #publicdomaindaycountdown

october : edgar allan poe's stories have long been public domain, but some work unpublished when he died (on this day in ) stayed in © much longer. in days, the valentine museum's book of his previously unpublished letters finally goes public domain.
#publicdomaindaycountdown

october : in , the nobel prize in literature went to george bernard shaw. in days, his table-talk, published that year, will join the public domain in the us, and all his solo works published in his lifetime will be public domain nearly everywhere else. #publicdomaindaycountdown

october : author and editor edward bok was born this day in . in twice thirty ( ), he follows up his pulitzer-winning memoir the americanization of edward bok with a set of essays from the perspective of his s. it joins the public domain in days. #publicdomaindaycountdown

october : in the silent comedy "the freshman", harold lloyd goes to tate university, "a large football stadium with a college attached", and goes from tackling dummy to unlikely football hero. it joins the public domain in days. #publicdomaindaycountdown

october : it's françois mauriac's birthday. his le desert de l'amour, a novel that won the grand prix of the académie française, joins the us public domain in days. published translations may stay copyrighted, but americans will be free to make new ones. #publicdomaindaycountdown

october : pulitzer-winning legal scholar charles warren's congress, the constitution, and the supreme court ( ) analyzes controversies, some still argued, over relations between the us legislature and the us judiciary. it joins the public domain in days. #publicdomaindaycountdown

october : science publishing in was largely a boys' club, but some areas were more open to women authors, such as nursing & science education. i look forward to maude muse's textbook of psychology for nurses going public domain in days. #publicdomaindaycountdown #adalovelaceday

october : happy birthday to poet e. e. cummings, born this day in . (while some of his poetry is lowercase, he usually still capitalized his name when writing it out.) his collection xli poems joins the public domain in days. #publicdomaindaycountdown

october : it's pg wodehouse's birthday. in days more of his humorous stories join the us public domain, including sam in the suburbs. it originally ran as a serial in the saturday evening post in . all that year's issues also join the public domain then. #publicdomaindaycountdown

october : playwright and nobel laureate eugene o'neill was born today in . his "desire under the elms" entered the us public domain this year; in days, his plays "marco's millions" and "the great god brown" will join it. #publicdomaindaycountdown

october : not everything makes it to the end of the long road to the us public domain. in days, the copyright for the film man and maid (based on a book by elinor glyn) expires, but no known copies survive. maybe someone will find one? #publicdomaindaycountdown

october : corra harris became famous for her novel a circuit rider's wife and her world war i reporting. the work she considered her best, though, was as a woman thinks. it joins the public domain in days. #publicdomaindaycountdown

october : edna st. vincent millay died years ago today. all her published work joins the public domain in days in many places outside the us. here, magazine work like "sonnet to gath" (in sep vanity fair) will join, but renewed post-' work stays in ©. #publicdomaindaycountdown

october : all songs eventually reach the public domain. authors can put them there themselves, like tom lehrer just did for his lyrics. but other humorous songs arrive by the slow route, as tilzer, terker, & heagney's "pardon me (while i laugh)" will in days. #publicdomaindaycountdown

october : sherwood anderson's winesburg, ohio wasn't a best-seller when it came out, but his dark laughter was. since joycean works fell out of fashion, that book's been largely forgotten, but may get new attention when it joins the public domain in days. #publicdomaindaycountdown

october : artist nc wyeth was born this day in . the brandywine museum near philadelphia shows many of his works.
his illustrated edition of francis parkman's book the oregon trail joins the public domain in days. #publicdomaindaycountdown

october : today (especially at : , on / ) many chemists celebrate #moleday. in days, they'll also get to celebrate historically important chemistry publications joining the us public domain, including all issues of justus liebigs annalen der chemie. #publicdomaindaycountdown

october : while some early alfred hitchcock films were in the us public domain for a while due to formality issues, the gatt accords restored their copyrights. his directorial debut, the pleasure garden, rejoins the public domain (this time for good) in days. #publicdomaindaycountdown (addendum: there may still be one more year of copyright to this film as of ; see the comments to this post for details.)

october : albert barnes took a different approach to art than most of his contemporaries. the first edition of the art in painting, where he explains his theories and shows examples from his collection, joins the public domain in days. #publicdomaindaycountdown

october : prolific writer carolyn wells had a long-running series of mystery novels featuring fleming stone. here's a blog post by the passing tramp on one of them, the daughter of the house, which will join the public domain in days. #publicdomaindaycountdown

october : theodore roosevelt was born today in , and died over years ago, but some of his works are still copyrighted. in days, volumes of his correspondence with henry cabot lodge, written from - and published in , join the public domain. #publicdomaindaycountdown

october : american composer and conductor howard hanson was born on this day in . his choral piece "lament for beowulf" joins the public domain in days. #publicdomaindaycountdown

october : "skitter cat" was a white persian cat who had adventures in several children's books by eleanor youmans, illustrated by ruth bennett. the first of the books joins the public domain in days. #publicdomaindaycountdown #nationalcatday

october : "secret service smith" was a detective created by canadian author r. t. m. maitland. his first magazine appearance was in ; his first original full-length novel, the black magician, joins the public domain in weeks. #publicdomaindaycountdown

october : poet john keats was born this day in . amy lowell's -volume biography links his romantic poetry with her imagist poetry. ( review.) she finished and published it just before she died. it joins the public domain in days. #publicdomaindaycountdown

november : "not just for an hour, not for just a day, not for just a year, but always." irving berlin gave the rights to this song to his bride in . both are gone now, and in months it will join the public domain for all of us, always. #publicdomaindaycountdown

november : mikhail fokine's the dying swan dance, set to music by camille saint-saëns, premiered in , but its choreography wasn't published until , the same year a film of it was released. it joins the public domain in days. #publicdomaindaycountdown (choreography copyright is weird. not only does the term not start until publication, which can be long after st performance, but what's copyrightable has also changed. before it had to qualify as dramatic; now it doesn't, but it has to be more than a short step sequence.)

november : herbert hoover was the only sitting president to be voted out of office between & . before taking office, he wrote the foreword to carolyn crane's everyman's house, part of a homeowners' campaign he co-led. it goes out of copyright in days. #publicdomaindaycountdown

november : "the golden cocoon" is a silent melodrama featuring an election, jilted lovers, and extortion. the ruth cross novel it's based on went public domain this year. the film will join it there in days. #publicdomaindaycountdown

november : investigative journalist ida tarbell was born today in .
her history of standard oil helped break up that trust in , but in her life of elbert h. gary she wrote more admiringly of his chairmanship of us steel. it joins the public domain in days. #publicdomaindaycountdown

november : harold ross was born on this day in . he was the first editor of the new yorker, which he established in coöperation with his wife, jane grant. after ninety-five years, the magazine's first issues are set to join the public domain in fifty-six days. #publicdomaindaycountdown

november : "sweet georgia brown" by ben bernie & maceo pinkard (lyrics by kenneth casey) is a jazz standard, the theme tune of the harlem globetrotters, and a song often played in celebration. one thing we can celebrate in days is it joining the public domain. #publicdomaindaycountdown

november : today i hiked on the appalachian trail. it was completed in , but parts are much older. walter collins o'kane's trails and summits of the white mountains, published in when the at was more idea than reality, goes public domain in days. #publicdomaindaycountdown

november : in sinclair lewis' arrowsmith, a brilliant medical researcher deals with personal and ethical issues as he tries to find a cure for a deadly epidemic. the novel has stayed relevant well past its publication, and joins the public domain in days. #publicdomaindaycountdown

november : john marquand was born today in . he's known for his spy stories and satires, but an early novel, the black cargo, features a sailor curious about a mysterious payload on a ship he's been hired onto. it joins the us public domain in days. #publicdomaindaycountdown

november : the first world war, whose armistice was years ago today, cast a long shadow. among the many literary works looking back to it is ford madox ford's novel no more parades, part of his "parade's end" tetralogy. it joins the public domain in days. #publicdomaindaycountdown

november : anne parrish was born on this day in . in , the dream coach, co-written with her brother, got a newbery honor, and her novel the perennial bachelor was a best-seller. the latter book joins the public domain in days. #publicdomaindaycountdown

november : in "the curse of the golden cross", g. k. chesterton's father brown once again finds a natural explanation to what seem to be preternatural symbols & events. as of today, friday the th, the story is exactly weeks away from the us public domain. #publicdomaindaycountdown

november : the pop standard "yes sir, that's my baby" was the baby of walter donaldson (music) and gus kahn (lyrics). it's been performed by many artists since its composition, and in days, this baby steps out into the public domain. #publicdomaindaycountdown

november : marianne moore, born on this day in , had a long literary career, including editing the influential modernist magazine the dial from on. in days, all issues of that magazine will be fully in the public domain. #publicdomaindaycountdown

november : george s. kaufman, born today in , wrote or directed a play in every broadway season from till . in days, several of his plays join the public domain, including his still-performed comedy "the butter and egg man". #publicdomaindaycountdown

november : shen of the sea was a newbery-winning collection of stories presented as "chinese" folktales, but written by american author arthur bowie chrisman. praised when first published, seen more as appropriation later, it'll be appropriable itself in days. #publicdomaindaycountdown

november : jacques maritain was a french catholic philosopher who influenced the universal declaration of human rights. his book on reformers (luther, descartes, and rousseau) joins the public domain in days. #publicdomaindaycountdown

november : prevailing views of history change a lot over years. the pulitzer history prize went to a book titled "the war for southern independence".
the last volume of edward channing's history of the united states, it joins the public domain in days. #publicdomaindaycountdown

november : alfred north whitehead's science and the modern world includes a nuanced discussion of science and religion differing notably from many of his contemporaries'. (a recent review of it.) it joins the us public domain in weeks.

november : algonquin round table member robert benchley tried reporting, practical writing, & reviews, but soon found that humorous essays & stories were his forte. one early collection, pluck and luck, joins the public domain in days. #publicdomaindaycountdown

november : i've often heard people coming across a piano sit down & pick out hoagy carmichael's "heart and soul". he also had other hits, one being "washboard blues". his original piano instrumental version becomes public domain in days. #publicdomaindaycountdown

november : harpo marx, the marx brothers mime, was born today in . in his oldest surviving film, "too many kisses", he does "speak", but silently (like everyone else in it), without his brothers. it joins the public domain in days. #publicdomaindaycountdown

november : in the man nobody knows, bruce barton likened the world of jesus to the world of business. did he bring scriptural insight to management, or subordinate christianity to capitalism? it'll be easier to say, & show, after it goes public domain in days. #publicdomaindaycountdown

november : before virgil thomson (born today in ) was well-known as a composer, he wrote a music column for vanity fair. his first columns, and the rest of vanity fair for , join the public domain in days. #publicdomaindaycountdown

november : "each moment that we're apart / you're never out of my heart / i'd rather be lonely and wait for you only / oh how i miss you tonight" those staying safe by staying apart this holiday might appreciate this song, which joins the public domain in days. #publicdomaindaycountdown (the song, "oh, how i miss you tonight", is by benny davis, joe burke, and mark fisher; it was published in , and has been performed and recorded by many musicians since then, some of whom are mentioned in this wikipedia article.)

november : feminist author katharine anthony, born today in , was best known for her biographies. her biography of catherine the great, which drew extensively on the empress's private memoirs, joins the public domain in days. #publicdomaindaycountdown

november : tonight in "barn dance" (soon renamed "grand ole opry") debuted in nashville. most of the country music on it & similar shows then was old favorites, but there were new hits too, like "the death of floyd collins", which joins the public domain in days. #publicdomaindaycountdown (the song, with words by andrew jenkins and music by john carson, was in the line of other disaster ballads that were popular in the s. this particular disaster had occurred earlier in the year, and became the subject of song, story, drama, and film.)

november : as many folks get ready for christmas, many christmas-themed works are also almost ready to join the public domain in days. one is the holly hedge, and other christmas stories by temple bailey. more on the book & author. #publicdomaindaycountdown

november : in john maynard keynes published the economic consequences of sterling parity, objecting to winston churchill returning the uk to the gold standard. that policy ended in ; the book's us copyright lasted longer, but will finally end in days. #publicdomaindaycountdown

december : du bose heyward's novel porgy has a distinguished legacy of adaptations, including a broadway play, and gershwin's opera "porgy and bess". when the book joins the public domain a month from now, further adaptation possibilities are limitless.
#publicdomaindaycountdown

december : in dorothy black's romance the loveliest thing, a young englishwoman "inherits a small sum of money, buys a motor car and goes off in search of adventure and romance". first serialized in ladies' home journal, it joins the public domain in days. #publicdomaindaycountdown

december : joseph conrad was born on this day in , and died in , leaving unfinished his napoleonic novel suspense. but it was still far enough along to get serialized in magazines and published as a book in , and it joins the public domain in days. #publicdomaindaycountdown

december : ernest hemingway's first us-published story collection, in our time, introduced his distinctive style to an american audience that came to view his books as classics of th century fiction. it joins the public domain in days. #publicdomaindaycountdown

december : libertarian author rose wilder lane helped bring her mother's "little house" fictionalized memoirs into print. before that, she published biographical fiction based on the life of jack london, called he was a man. it joins the public domain in days. #publicdomaindaycountdown

december : indiana naturalist and author gene stratton-porter died on this day in . her final novel, the keeper of the bees, was published the following year, and joins the public domain in days. one review. #publicdomaindaycountdown

december : willa cather was born today in . her novel the professor's house depicts s cultural dislocation from a different angle than f. scott fitzgerald's better-known great gatsby. it too joins the public domain in days. #publicdomaindaycountdown

december : the last symphony published by finnish composer jean sibelius (born on this day in ) is described in the grove dictionary as his "most remarkable compositional achievement". it joins the public domain in the us in days. #publicdomaindaycountdown

december : when the habsburg empire falls, what comes next for the people & powers of vienna? the novel old wine, by phyllis bottome (wife of the local british intelligence head) depicts a society undergoing rapid change. it joins the us public domain in days. #publicdomaindaycountdown

december : lewis browne was "a world traveler, author, rabbi, former rabbi, lecturer, socialist and friend of the literary elite". his first book, stranger than fiction: a short history of the jews, joins the public domain in days. #publicdomaindaycountdown

december : in , john scopes was convicted for teaching evolution in tennessee. books explaining the science to lay audiences were popular that year, including henshaw ward's evolution for john doe. it becomes public domain in weeks. #publicdomaindaycountdown

december : philadelphia artist jean leon gerome ferris was best known for his "pageant of a nation" paintings. three of them, "the birth of pennsylvania", "gettysburg, ", and "the mayflower compact", join the public domain in days. #publicdomaindaycountdown

december : the queen of cooks, and some kings was a memoir of london hotelier rosa lewis, as told to mary lawton. her life story was the basis for the bbc and pbs series "the duchess of duke street". it joins the public domain in days. #publicdomaindaycountdown

december : today we're celebrating new films being added to the national film registry. in days, we can also celebrate more registry films joining the public domain. one is the clash of the wolves, starring rin tin tin. #publicdomaindaycountdown

december : etsu inagaki sugimoto, daughter of a high-ranking japanese official, moved to the us in an arranged marriage after her family fell on hard times. her memoir, a daughter of the samurai, joins the public domain in days. #publicdomaindaycountdown

december : on the trail of negro folk-songs, compiled by dorothy scarborough assisted by ola lee gulledge, has over songs. scarborough's next of kin (not gulledge, or any of their sources) renewed its copyright in . but in days, it'll be free for all.
#publicdomaindaycountdown december : virginia woolf’s writings have been slowly entering the public domain in the us. we’ve had the first part of her mrs. dalloway for a while. the complete novel, and her first common reader essay collection, join it in days. #publicdomaindaycountdown december : lovers in quarantine with harrison ford sounds like a movie made for , but it’s actually a silent comedy (with a different harrison ford). it’ll be ready to go out into the public domain after a -day quarantine. #publicdomaindaycountdown december : ma rainey wrote, sang, and recorded many blues songs in a multi-decade career. two of her songs becoming public domain in days are “shave ’em dry” (written with william jackson) & “army camp harmony blues” (with hooks tilford). #publicdomaindaycountdown december : for years we’ve celebrated the works of prize-winning novelist edith wharton as her stories join the public domain. in days, the writing of fiction, her book on how she writes her memorable tales, will join that company. #publicdomaindaycountdown december : albert payson terhune, born today in , raised and wrote about dogs he kept at what’s now a public park in new jersey. his book about wolf, who died heroically and is buried there, will also be in the public domain in days. #publicdomaindaycountdown december : in the s it seemed buster keaton could do anything involving movies. go west, a feature film that he co-wrote, directed, co-produced, and starred in, is still enjoyed today, and it joins the public domain in days. #publicdomaindaycountdown december : in days, not only will theodore dreiser’s massive novel an american tragedy be in the public domain, but so will a lot of the raw material that went into it. much of it is in @upennlib‘s special collections. #publicdomaindaycountdown december : johnny gruelle, born today in , created the raggedy ann doll, and a series of books sold with it that went under many christmas trees. 
two of them, raggedy ann’s alphabet book and raggedy ann’s wishing pebble, join the public domain in days. #publicdomaindaycountdown

december : written in hebrew by joseph klausner, translated into english by anglican priest herbert danby, jesus of nazareth reviewed jesus’s life and teachings from a jewish perspective. it made a stir when published in , & joins the public domain in days. #publicdomaindaycountdown

december : “it’s a travesty that this wonderful, hilarious, insightful book lives under the inconceivably large shadow cast by the great gatsby.” a review of anita loos’s gentlemen prefer blondes, also joining the public domain in days. #publicdomaindaycountdown

december : “on revisiting manhattan transfer, i came away with an appreciation not just for the breadth of its ambition, but also for the genius of its representation.” a review of the john dos passos novel becoming public domain in days. #publicdomaindaycountdown

december : all too often legal systems and bureaucracies can be described as “kafkaesque”. the kafka work most known for that sense of arbitrariness and doom is der prozess (the trial), reviewed here. it joins the public domain in days. #publicdomaindaycountdown

december : chocolate kiddies, an african american music and dance revue that toured europe in , featured songs by duke ellington and jo trent including “jig walk”, “jim dandy”, and “with you”. they join the public domain in days. #publicdomaindaycountdown

december : lon chaney starred in of the top-grossing movies of . the phantom of the opera has long been in the public domain due to copyright nonrenewal. the unholy three, which was renewed, joins it in the public domain in days. #publicdomaindaycountdown

(if you’re wondering why some of the other big film hits of haven’t been in this countdown, in many cases it’s also because their copyrights weren’t renewed. or they weren’t actually copyrighted in .)
december : “…you might as well live.” dorothy parker published “resumé” in , and ultimately outlived most of her algonquin round table-mates. this poem, and her other writing for periodicals, will be in the public domain tomorrow. #publicdomaindaycountdown

about john mark ockerbloom: i’m a digital library strategist at the university of pennsylvania, in philadelphia. this entry was posted in copyright, publicdomain.

responses to counting down to in the public domain

brent reid says: january , at : am
a great, useful list, but it’s one year premature regarding alfred hitchcock’s the pleasure garden: https://www.brentonfilm.com/articles/alfred-hitchcock-collectors-guide-the-pleasure-garden-

john mark ockerbloom says: january , at : pm
thanks for your comment! determining the date of restored copyrights like this one can be tricky, since there’s often no registration that unambiguously gives a date for the start of the copyright term. if i recall correctly, my basis for assuming a start date (and therefore a end date) was a release of the film in germany stated at places like imdb. the article linked above disputes this release date, with the author citing a lack of contemporary evidence of general public release (either in germany or elsewhere) prior to . if the film’s us copyright term does start in , as that would suggest, the film would indeed have one more year of copyright, and enter the public domain in . folks wanting to use this film before then should check the facts carefully before assuming it’s in the public domain, and i’ll add a note to my post warning about this. thanks again!
brent reid says: january , at : pm
you’re welcome, but you can’t be expected to know every exact release or publication date, though it does highlight the danger of assuming they match the date of filming or completion. also, as per the article, it’s perhaps worth providing a little context as to what “public domain” actually entails, i.e. which copies can be used and which cannot.
erin white – library technology, ux, the web, bikes, #rva

in libraries, richmond | april ,

talk: using light from the dumpster fire to illuminate a more just digital world

this february i gave a lightning talk for the richmond design group. my question: what if we use the light from the dumpster fire of the past year to see an equitable, just digital world? how can we change our thinking to build the future web we need? presentation is embedded here; text of talk is below.

hi everybody, i’m erin. before i get started i want to say thank you to the rva design group organizers. this is hard work and some folks have been doing it for years. thank you to the organizers of this group for doing this work and for inviting me to speak. this talk isn’t about the past year. this talk is about the future.
but to understand the future, we gotta look back.

the web, twenty-five years ago

travel with me back to the mindset of the early web. the fundamental idea of hyperlinks, which we now take for granted, really twisted everyone’s noodles. so much of the promise of the early web was that with broad access to publish in hypertext, the opportunities were limitless. technologists saw the web as an equalizing space where systems of oppression that exist in the real world wouldn’t matter, and that we’d all be equal and free from prejudice. nice idea, right? you don’t need to’ve been around back then to know that’s just not the way things have gone down. pictured before you are some of the early web pioneers. notice a pattern here? these early visions of the web, including barlow’s declaration of independence of cyberspace, while inspiring and exciting, were crafted by the same types of folks who wrote the actual declaration of independence: the landed gentry, white men with privilege. their vision for the web echoed the declaration of independence’s authors’ attempts to describe the world they envisioned. and what followed was the inevitable conflict with reality. we all now hold these truths to be self-evident: the systems humans build reflect humans’ biases and prejudices. we continue to struggle to diversify the technology industry. knowledge is interest-driven. inequality exists, online and off. celebrating, rather than diminishing, folks’ intersecting identities is vital to human flourishing.

the web we have known
– profit first: monetization, ads, the funnel, dark patterns
– can we?: innovation for innovation’s sake
– solutionism: code will save us
– visual design: aesthetics over usability
– lone genius: “hard” skills and rock star coders
– short term thinking: move fast, break stuff
– shipping: new features, forsaking infrastructure

let’s move forward quickly through the past years or so of the web, of digital design.
all of the web we know today has been shaped in some way by intersecting matrices of domination: colonialism, capitalism, white supremacy, patriarchy. (thank you, bell hooks.) the digital worlds where we spend our time – and that we build!! – exist in this way. this is not an indictment of anyone’s individual work, so please don’t take it personally. what i’m talking about here is the digital milieu where we live our lives. the funnel drives everything. folks who work in nonprofits and public entities often tie ourselves in knots to retrofit our use cases in order to use common web tools (google analytics, anyone?). in chasing innovation™ we often overlook important infrastructure work, and devalue work — like web accessibility, truly user-centered design, care work, documentation, customer support and even care for ourselves and our teams — that doesn’t drive the bottom line. we frequently write checks for our future selves to cash, knowing damn well that we’ll keep burying ourselves in technical debt. that’s some tough stuff for us to carry with us every day. the “move fast” mentality has resulted in explosive growth, but at what cost? and in creating urgency where it doesn’t need to exist, focusing on new things rather than repair, the end result is that we’re building a house of cards. and we’re exhausted. to zoom way out, this is another manifestation of late capitalism. emphasis on late. because… well, the past year happened.

what the past year taught us
– hard times amplify existing inequalities
– cutting corners mortgages our future
– infrastructure is essential
– “colorblind”/color-evasive policy doesn’t cut it
– inclusive design is vital
– we have a duty to each other
– technology is only one piece
– together, we rise

the past year has been awful for pretty much everybody. but what the light from this dumpster fire has illuminated is that things have actually been awful for a lot of people, for a long time.
this year has shown us how perilous it is to avoid important infrastructure work and to pursue innovation over access. it’s also shown us that what is sometimes referred to as colorblindness — i use the term color-evasiveness because it is not ableist and it is more accurate — a color-evasive approach that assumes everyone’s needs are the same in fact leaves people out, especially folks who need the most support. we’ve learned that technology is a crucial tool and that it’s just one thing that keeps us connected to each other as humans. finally, we’ve learned that if we work together we can actually make shit happen, despite a world that tells us individual action is meaningless. like biscuits in a pan, when we connect, we rise together. marginalized folks have been saying this shit for years. more of us than ever see these things now. and now we can’t, and shouldn’t, unsee it.

the web we can build together

current state:
– profit first
– can we?
– solutionism
– aesthetics
– “hard” skills
– rockstar coders
– short term thinking
– shipping

future state:
– people first: security, privacy, inclusion
– should we?
– holistic design
– accessibility
– soft skills
– teams
– long term thinking
– sustaining

so let’s talk about the future. i told you this would be a talk about the future. like many of y’all i have had a very hard time this year thinking about the future at all. it’s hard to make plans. it’s hard to know what the next few weeks, months, years will look like. and who will be there to see it with us. but sometimes, when i can think clearly about something besides just making it through every day, i wonder. what does a people-first digital world look like? who’s been missing this whole time? just because we can do something, does it mean we should? will technology actually solve this problem? are we even defining the problem correctly? what does it mean to design knowing that even “able-bodied” folks are only temporarily so?
and that our products need to be used, by humans, in various contexts and emotional states? (there are also false binaries here: aesthetics vs. accessibility; abled and disabled; binaries are dangerous!) how can we nourish our collaborations with each other, with our teams, with our users? and focus on the wisdom of the folks in the room rather than assigning individuals as heroes? how can we build for maintenance and repair? how do we stop writing checks for our future selves to cash – with interest? some of this here, i am speaking of as a web user and a web creator. i’ve only ever worked in the public sector. when i talk with folks working in the private sector i always do some amount of translating. at the end of the day, we’re solving many of the same problems. but what can private-sector workers learn from folks who come from a public-sector organization? and, as we think about what we build online, how can we also apply that thinking to our real-life communities? what is our role in shaping the public conversation around the use of technologies? i offer a few ideas here, but don’t want them to limit your thinking.

consider the public sector

here’s a thread about public service. ⚖️🏛️ 💪🏼💻🇺🇸 — dana chisnell (she / her) (@danachis) february ,

i don’t have a ton of time left today. i wanted to talk about public service like the very excellent dana chisnell here. like i said, i’ve worked in the public sector, in higher ed, for a long time. it’s my bread and butter. it’s weird, it’s hard, it’s great. there’s a lot of work to be done, and it ain’t happening at civic hackathons or from external contractors. the call needs to come from inside the house.
working in the public sector

government should be
– inclusive of all people
– responsive to needs of the people
– effective in its duties & purpose
— dana chisnell (she / her) (@danachis) february ,

i want you to consider for a minute how many folks are working in the public sector right now, and how technical expertise — especially in-house expertise — is something that is desperately needed. pictured here are the old website and new website for the city of richmond. i have a whole ’nother talk about that new richmond website. i foia’d the contracts for this website. there are accessibility errors on the homepage alone. it’s been in development for years and still isn’t in full production. bottom line, good government work matters, and it’s hard to find. important work is put out for the lowest bidder and often external agencies don’t get it right. what would it look like to have that expertise in-house?

influencing technology policy

we also desperately need lawmakers and citizens who understand technology and ask important questions about the ethics and human impact of systems decisions. pictured here are some headlines as well as a contract from the city of richmond. y’all know we spent $ . million on a predictive policing system that will disproportionately harm citizens of color? and that earlier this month, city council voted to allow richmond and vcu pds to start sharing their data in that system? the surveillance state abides. technology facilitates. i dare say these technologies are designed to bank on the fact that lawmakers don’t know what they’re looking at. my theory is, in addition to holding deep prejudices, lawmakers are also deeply baffled by technology. the hard questions aren’t being asked, or they’re coming too late, and they’re coming from citizens who have to put themselves in harm’s way to do so. technophobia is another harmful element that’s emerged in the past decades.
what would a world look like where technology is not a thing to shrug off as un-understandable, but is instead deftly co-designed to meet our needs, rather than licensed to our city for . million dollars? what if everyone knew that technology is not neutral?

closing

this is some of the future i can see. i hope that it’s sparked new thoughts for you. let’s envision a future together. what has the light illuminated for you? thank you!

april , | this car runs: love letter to a honda accord
three years ago i sold my honda accord dx. here’s the craigslist ad love letter i wrote to it. […]

april , | podcast interview: names, binaries and trans-affirming systems on legacy code rocks!
in february i was honored to be invited to join scott ford on his podcast legacy code rocks!. i’m embedding the audio below. view the full episode transcript — thanks to trans-owned deep south transcription services! i’ve pulled out some of the topics we discussed and heavily edited/rearranged them for clarity. […]

warnings in app boot · issue # · reidmorrison/rails_semantic_logger · github
reidmorrison / rails_semantic_logger

warnings in app boot # (closed)
kadru opened this issue feb , · comments
labels: awaiting feedback

kadru commented feb ,

environment
– ruby . .
– rails . . .
– semantic logger . .
– rails semantic logger . .
– puma . .
config/application.rb:

  STDOUT.sync = true
  config.rails_semantic_logger.processing = false
  config.rails_semantic_logger.add_file_appender = false
  config.rails_semantic_logger.format = :json
  config.semantic_logger.add_appender(
    io: STDOUT,
    level: config.log_level,
    formatter: config.rails_semantic_logger.format)

config/puma.rb:

  on_worker_boot do
    # re-open appenders after forking the process
    SemanticLogger.reopen
  end

expected behavior

no warnings at app boot when running with rails and puma.

actual behavior

warnings:

- - t : : . + : app[web. ]: [ ] * listening on http:// . . . :
- - t : : . + : app[web. ]: [ ] ! warning: detected thread(s) started in app boot:
- - t : : . + : app[web. ]: [ ] ! # - /app/vendor/bundle/ruby/ . . /gems/semantic_logger- . . /lib/semantic_logger/appender/async.rb: :in `pop'
- - t : : . + : app[web. ]: [ ] ! # - /app/vendor/bundle/ruby/ . . /gems/activerecord- . . . /lib/active_record/connection_adapters/abstract/connection_pool.rb: :in `sleep'

reidmorrison (owner) commented may ,

we run puma in single mode, so you are on your own with this one. the semantic logger thread being created is by design; without it no logging will be performed. suggest looking into what is generating that warning log message, since it is not from semantic logger.

reidmorrison (owner) commented jun ,

i booted our application in clustered mode without any issues. below is the log output:

% bin/rails s
=> booting rails
=> booting puma
=> rails . . . application starting in development
=> run `bin/rails server --help` for more startup options
...
[ ] puma starting in cluster mode...
[ ] * puma version: . . (ruby . .
-p ) ("sweetnighter")
[ ] * min threads:
[ ] * max threads:
[ ] * environment: development
[ ] * master pid:
[ ] * workers:
[ ] * restarts: (✔) hot (✔) phased
[ ] * listening on http:// . . . :
[ ] * listening on http://[:: ]:
[ ] use ctrl-c to stop
[ ] - worker (pid: ) booted in . s, phase:
[ ] - worker (pid: ) booted in . s, phase:
- - : : . d [ :puma threadpool logger.rb: ] rack -- started -- { method: "get", path: "/", ip: " . . . " }
...

as per the docs, just had to add SemanticLogger.reopen to the on_worker_boot block to get logging working. these are the lines i added / uncommented in config/puma.rb:

  workers ENV.fetch("WEB_CONCURRENCY") { }

  before_fork do
    ApplicationRecord.connection_pool.disconnect! if defined?(ActiveRecord)
  end

  on_worker_boot do
    ApplicationRecord.establish_connection if defined?(ActiveRecord)
    # re-open appenders after forking the process
    SemanticLogger.reopen
  end

reidmorrison added the awaiting feedback label jun ,
reidmorrison closed this jun ,

connect online: cfp for presentations and panels

the program committee for samvera connect online invites proposals for presentations and panels to be given on monday, october th - friday, october nd, between . am and . pm us edt. presentations should be up to minutes long with an additional minutes available for questions.
panels can be or minutes long and should include time for audience questions. this cfp is open until august st.

the form asks for:
– email (required)
– name (required)
– names of other presenters/panelists: whilst it will be helpful to the program committee to know the names of any additional presenters at this stage, you can provide or change these names later.
– is this a proposal for a presentation or panel? (required): presentation ( minutes) / panel ( minutes) / panel ( minutes)
– title of session (required)
– who is/are your target audience? (required): administrators / developers / managers (specifically repository managers) / managers (more generally) / metadata specialists / new or potential community participants / sysadmins/devops / ui/ux practitioners / other
– a description of the session that we can use in the conference program (required): a few sentences – no more than words or so.
– the date(s) you are available to present (required): monday th october / tuesday th october / wednesday th october / thursday st october / friday nd october
– any additional information you wish the program committee to take into consideration.

meta interchange – libraries, computing, metadata, and more

trading for images

posted: february | categories: libraries, patron privacy

let’s search a koha catalog for something that isn’t at all controversial: what you search for in a library catalog ought to be only between you and the library — and that, only briefly, as the library should quickly forget. of course, between “ought” and “is” lies the devil and his details.
let’s poke around with chrome’s devtools:
– hit control-shift-i (on windows)
– switch to the network tab
– hit control-r to reload the page and get a list of the http requests that the browser makes

we get something like this:

there’s a lot to like here: every request was made using https rather than http, and almost all of the requests were made to the koha server. (if you can’t trust the library catalog, who can you trust? well… that doesn’t have an answer as clear as we would like, but i won’t tackle that question here.) however, the two cover images on the results page come from amazon:

https://images-na.ssl-images-amazon.com/images/p/ . .tzzzzzzz.jpg
https://images-na.ssl-images-amazon.com/images/p/ . .tzzzzzzz.jpg

what did i trade in exchange for those two cover images? let’s click on one of those requests and see:

:authority: images-na.ssl-images-amazon.com
:method: get
:path: /images/p/ . .tzzzzzzz.jpg
:scheme: https
accept: image/webp,image/apng,image/*,*/*;q= .
accept-encoding: gzip, deflate, br
accept-language: en-us,en;q= .
cache-control: no-cache
dnt:
pragma: no-cache
referer: https://catalog.libraryguardians.com/cgi-bin/koha/opac-search.pl?q=anarchist
sec-fetch-dest: image
sec-fetch-mode: no-cors
sec-fetch-site: cross-site
user-agent: mozilla/ . (windows nt . ; win ; x ) applewebkit/ . (khtml, like gecko) chrome/ . . . safari/ .

here’s what was sent when i used firefox:

host: images-na.ssl-images-amazon.com
user-agent: mozilla/ . (windows nt . ; win ; x ; rv: . ) gecko/ firefox/ .
accept: image/webp,*/*
accept-language: en-us,en;q= .
accept-encoding: gzip, deflate, br
connection: keep-alive
referer: https://catalog.libraryguardians.com/cgi-bin/koha/opac-search.pl?q=anarchist
dnt:
pragma: no-cache

amazon also knows what my ip address is. with that, it doesn’t take much to figure out that i am in georgia and am clearly up to no good; after all, one look at the referer header tells all.
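the same kind of third-party audit can be sketched offline with a short script. this is a minimal sketch, not part of koha: the html snippet and hostnames below are made up for illustration, and it only inspects img src attributes (not scripts or stylesheets), but it shows which hosts a results page would contact for images:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ImageHostAudit(HTMLParser):
    """collect the external hosts that img tags on a page would contact."""
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src", "")
            host = urlparse(src).netloc
            if host:  # relative urls stay on the catalog's own server
                self.hosts.add(host)

# a stand-in for the search results markup; the cover image is hot-linked
page = """
<div class="results">
  <img src="https://images-na.ssl-images-amazon.com/images/P/XXXXXXXXXX.jpg">
  <img src="/opac-tmpl/bootstrap/images/spinner.gif">
</div>
"""

audit = ImageHostAudit()
audit.feed(page)
print(sorted(audit.hosts))  # only the amazon host is a third party here
```

extending handle_starttag to script src and link href attributes would catch the other cross-site requests devtools shows.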
let’s switch over to using google books’ cover images:

https://books.google.com/books/content?id=phzfwaeacaaj&printsec=frontcover&img= &zoom=
https://books.google.com/books/content?id=wdgrjqaacaaj&printsec=frontcover&img= &zoom=

this time, the request headers in chrome are:

:authority: books.google.com
:method: get
:path: /books/content?id=phzfwaeacaaj&printsec=frontcover&img= &zoom=
:scheme: https
accept: image/webp,image/apng,image/*,*/*;q= .
accept-encoding: gzip, deflate, br
accept-language: en-us,en;q= .
cache-control: no-cache
dnt:
pragma: no-cache
referer: https://catalog.libraryguardians.com/
sec-fetch-dest: image
sec-fetch-mode: no-cors
sec-fetch-site: cross-site
user-agent: mozilla/ . (windows nt . ; win ; x ) applewebkit/ . (khtml, like gecko) chrome/ . . . safari/ .
x-client-data: cko yqeiilbjaqimtskbcmg yqeiqz kaqi qsobcmuuygeiz /kaqi smobcje ygei bxkaqinusobgkukygeyvrrkaq==

and in firefox:

host: books.google.com
user-agent: mozilla/ . (windows nt . ; win ; x ; rv: . ) gecko/ firefox/ .
accept: image/webp,*/*
accept-language: en-us,en;q= .
accept-encoding: gzip, deflate, br
connection: keep-alive
referer: https://catalog.libraryguardians.com/
dnt:
pragma: no-cache
cache-control: no-cache

on the one hand… the referer now contains only the base url of the catalog. i believe this is due to a difference in how koha figures out the correct image url. when using amazon for cover images, the isbn of the title is normalized and used to construct a url for an <img> tag. koha doesn’t currently set a referrer-policy, so the default of no-referrer-when-downgrade is used and the full referrer is sent. google books’ cover image urls cannot be directly constructed like that, so a bit of javascript queries a web service and gets back the image urls, and for reasons that are unclear to me at the moment, doesn’t send the full url as the referrer. (cover images from openlibrary are fetched in a similar way, but the full referer header is sent.)
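the “normalized” isbn step can be made concrete. as an illustration only (an assumption on my part, not a claim about koha’s actual code), one such normalization is deriving the isbn-10 from a 978-prefixed isbn-13 using the standard check-digit rule, since the amazon image path shown above is keyed by isbn:

```python
def isbn13_to_isbn10(isbn13: str) -> str:
    """derive the isbn-10 form of a 978-prefixed isbn-13 (standard check-digit rule)."""
    digits = [c for c in isbn13 if c.isdigit()]
    if len(digits) != 13 or digits[:3] != ["9", "7", "8"]:
        raise ValueError("only 978-prefixed isbn-13s have an isbn-10 form")
    core = digits[3:12]  # the nine significant digits shared by both forms
    total = sum((10 - i) * int(d) for i, d in enumerate(core))
    check = (11 - total % 11) % 11  # isbn-10 check digit; 10 is written as "X"
    return "".join(core) + ("X" if check == 10 else str(check))

print(isbn13_to_isbn10("978-0-306-40615-7"))  # 0306406152
```

the resulting isbn-10 would then be dropped into the image path; the exact middle segment of the captured amazon urls is elided above, so url construction is left out of the sketch.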
as a side note, the x-client-data header sent by chrome to books.google.com is… concerning. there are some relatively simple things that can be done to limit leaking the full referring url to the likes of google and amazon, including setting the referrer-policy header via web server configuration or meta tag to something like origin or origin-when-cross-origin. setting referrerpolicy for